Active Governance in Practice
Why Documentation Alone Will Not Get You to AI Success
Executive Summary
The market is overloaded with information, frameworks, and advice about AI success. Every vendor promises a solution. Every framework claims to be comprehensive. Every consultant offers a roadmap. But amid all this noise, one critical truth keeps getting obscured: governance must operate at runtime if AI is to succeed.

Cameron Price's original blog cuts through the complexity with precision and clarity. His argument, that knowing data exists is not governance, challenges the fundamental assumptions many organizations are operating under.

This blog aims to reinforce and operationalize Cameron's argument for those implementing governance and AI in practice. It examines real-world failures where governance existed but execution failed. It addresses the uncomfortable gap between governance documentation and runtime enforcement. And it provides a clear path forward for organizations serious about activating AI at scale.

The central thesis is straightforward: documentation-based governance cannot protect AI systems that operate in real time. If your governance ends in a handover to engineering teams, it is not active. If your policies cannot execute automatically at the point of data access, combination, or model training, they are theoretical. And theoretical governance will not get you to AI success.
When Cameron published his latest blog on Active Governance, one line stayed with me:
Knowing data exists is not governance.
From a customer and partnerships perspective, I see this play out. Business teams are being told that governance is the foundation for AI. And that's true.
But many are being led to believe that implementing a catalog, defining ownership, and documenting policy is enough.
It isn't.
The Core Challenge
Organizations invest millions in governance frameworks, catalogs, and documentation, yet AI initiatives still fail at scale. The disconnect isn't about governance quality. It's about governance execution.

Documentation describes what should happen. Runtime enforcement ensures it actually does.
The Governance Promise vs The AI Reality
Across industries, governance has been positioned as the safeguard that makes AI safe.
But governance that lives in documentation does not operate at runtime.
And AI operates only at runtime.
When models are trained, when data is combined, when automation triggers actions, governance must execute in that moment.
Otherwise, the control is theoretical.
This gap has shown up publicly.
  • Documentation Layer: policies written in documents, stored in repositories, reviewed in meetings, handed off to implementation teams.
  • Execution Gap: time delay between policy and implementation, manual interpretation, inconsistent enforcement, human error.
  • Runtime Reality: systems operate continuously, data moves automatically, models execute decisions, and controls must enforce instantly.
When Governance Exists but Execution Fails
The pattern across these failures is consistent and undeniable. In each case, governance structures existed. Policies were documented. Controls were defined. Ownership was assigned. Yet when systems executed in production environments, governance failed to operate. The failures were not about missing documentation; they were about absent runtime enforcement.
Target Canada
Target's failed expansion into Canada was widely analyzed as a breakdown in operational data reliability. Inventory systems reported products as available when shelves were empty. Governance structures existed, yet the data could not be trusted in execution.
Harvard Business Review and other analyses pointed to systemic data and systems failures, not a lack of documentation.
Documentation did not prevent operational collapse.
  • The Failure: a $2 billion loss and complete market exit after two years.
  • The Cause: data quality failures in production systems despite documented standards.
  • The Lesson: governance must execute at the point of data use, not just in policy documents.
Knight Capital
In 2012, Knight Capital lost $440 million in 45 minutes due to a faulty software deployment that triggered unintended trading activity. Governance and risk policies existed. Controls existed. But enforcement at runtime failed.
The U.S. Securities and Exchange Commission later cited weak production controls and execution failures.
The failure was not documentation. It was execution.
  • $440M lost in 45 minutes: an automated trading system executed without proper runtime controls.
  • 45 minutes to collapse: the speed of automated systems demands instant governance enforcement.
Banking and Risk Reporting
The Basel Committee on Banking Supervision, in its Principles for Effective Risk Data Aggregation and Risk Reporting, makes clear that governance must be effective in practice and resilient under stress.
The European Central Bank has repeatedly identified gaps where governance frameworks exist on paper, yet data quality and risk reporting fail in operational conditions.
When governance cannot execute in real conditions, AI and automated decision systems cannot be trusted.
  • BCBS 239 Principles: emphasize effectiveness in practice, not just documentation completeness.
  • ECB Findings: repeated identification of operational failures despite documented frameworks.
  • Regulatory Direction: a clear signal that governance must operate under stress, not just in controlled conditions.
The Adoption Gap
Research reinforces this pattern.
McKinsey's State of AI reports consistently show that fewer than 30 percent of AI initiatives achieve sustained business value at scale.
Deloitte has highlighted that many organizations invest in governance frameworks but struggle to operationalize them in ways that drive measurable outcomes.
Collibra's thought leadership increasingly emphasizes trusted data usage, lifecycle management, and governance that spans both data and AI assets, not simply catalog visibility.
The pattern is consistent.
Organizations are investing in governance.
But they are not activating it.
  • AI Success Rate: fewer than 30% of AI initiatives achieve sustained value at scale.
  • Implementation Gap: most organizations struggle to operationalize governance frameworks effectively.
The Market Is Still Debating This
Despite widespread agreement on the need for AI governance, there's a significant divergence in understanding and implementation. This creates confusion and directly contributes to the adoption gap, with organizations struggling to understand what 'active governance' truly means in practice.
Two Schools of Thought
Two main schools of thought prevail in how organizations approach AI governance: one holds that documented policies, defined ownership, and review processes constitute governance; the other holds that governance only counts when it executes automatically in production.

The debate isn't whether to have governance, but whether it truly executes or merely exists on paper.
Recently, I saw a discussion on LinkedIn arguing that governance is already active, and that conversations about runtime enforcement were overstated.
I understand the sensitivity.
Organizations have invested heavily in governance programs. It is uncomfortable to suggest they are incomplete.
But we have to be clear about what "active" really means.
Running agile sprints. Delivering engineering increments. Handing governance requirements to technical teams to implement.
That is not activation.
Anything that ends in a handover is not active governance.
What Active Governance Is NOT
  • Agile delivery of governance requirements
  • Incremental engineering sprints
  • Handing policies to technical teams
  • Documentation in modern formats
  • Faster review cycles

Speed of delivery does not equal runtime enforcement.
Agility in delivery does not automatically mean governance operates at runtime.
Activation is not about how fast engineers build.
It is about whether policy executes automatically when data is accessed, combined, reused, or exposed to AI systems.
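To make the distinction concrete, here is a minimal Python sketch of policy executing at the point of access. It is purely illustrative: the `Policy`, `AccessRequest`, and `enforce` names are assumptions for this example, not any product's API.

```python
# Illustrative sketch: policy evaluated at the moment of access,
# not in a document reviewed weeks earlier. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class Policy:
    allowed_purposes: set   # purposes this data may be used for
    blocked_columns: set    # columns that must never be returned


@dataclass
class AccessRequest:
    purpose: str
    columns: list


def enforce(policy: Policy, request: AccessRequest) -> list:
    """Runs on every access; no human review cycle in the loop."""
    if request.purpose not in policy.allowed_purposes:
        raise PermissionError(f"Purpose '{request.purpose}' not permitted")
    # Strip forbidden columns instead of trusting the caller to omit them.
    return [c for c in request.columns if c not in policy.blocked_columns]


policy = Policy(allowed_purposes={"analytics"}, blocked_columns={"ssn"})
granted = enforce(policy, AccessRequest("analytics", ["name", "ssn", "spend"]))
# granted == ["name", "spend"]; an unapproved purpose raises PermissionError
```

The point of the sketch is the control flow: the check happens inside the access path itself, so there is no gap between what the policy says and what the system does.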
Leading Authorities Are Aligned
Leading authorities are aligned on this direction.
The Basel Committee emphasizes effectiveness in practice.
The European Central Bank stresses operational resilience.
McKinsey identifies organizational and operating model gaps as barriers to AI scale.
IBM highlights trust as foundational to AI adoption.
AWS governance guidance focuses on embedded guardrails, not manual review cycles.
Across regulators, consultants, and platforms, the signal is clear:
Governance cannot remain a collection of rules and documents handed over for downstream implementation.
It must operate at runtime.
Active governance is not a sprint milestone.
It is an execution model.
This shift from documentation to execution changes how governance functions within the enterprise technology stack.
The Shift Toward Data Products
Governance platforms themselves are evolving.
Collibra's recent positioning highlights unified governance across data and AI lifecycles, with increasing emphasis on lifecycle management of trusted data products from creation to consumption. The direction is clear: governance must support not just assets, but usable, trusted data products that can scale across the enterprise.
This reinforces Cameron's argument.
The future of governance is not static artifacts. It is governed data products that can be confidently used, shared, and activated.
But lifecycle visibility alone is not enough.
Governance must execute within those products.
  • Traditional Governance: focus on cataloging assets, documenting lineage, defining ownership. Visibility without enforcement.
  • Product-Based Governance: governed data products with embedded controls, runtime enforcement, and lifecycle management from creation to consumption.
Where a Data Product Workbench Fits
This is where a data product workbench becomes essential.
If governance defines policy and structure, the workbench ensures that those policies execute at the point of use.
Rather than governance ending in documentation or handover, a workbench model embeds:
  • Policies that travel with the data product
  • Controls enforced automatically
  • Runtime masking and protection
  • Context-aware access evaluation
  • Operational lineage that reflects real behavior
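A short Python sketch can show what "policies that travel with the data product" might look like in practice. This is a hypothetical illustration under stated assumptions; the `GovernedDataProduct` class, its fields, and the role names are invented for this example and do not represent any workbench's actual interface.

```python
# Illustrative sketch of a governed data product whose policy travels with it:
# masking is applied at read time, based on the requester's context.
MASK = "***"


class GovernedDataProduct:
    def __init__(self, rows, masked_fields):
        self._rows = rows
        self._masked_fields = masked_fields  # policy embedded in the product

    def read(self, role: str):
        """Context-aware read: the same product yields different views."""
        if role == "steward":  # privileged context sees raw values
            return [dict(r) for r in self._rows]
        # every other context receives masked values, enforced automatically
        return [
            {k: (MASK if k in self._masked_fields else v) for k, v in r.items()}
            for r in self._rows
        ]


product = GovernedDataProduct(
    rows=[{"customer": "Ada", "email": "ada@example.com", "spend": 120}],
    masked_fields={"email"},
)
analyst_view = product.read(role="analyst")   # email masked
steward_view = product.read(role="steward")   # raw view for privileged role
```

Because the masking rule lives inside the product rather than in a separate document, there is no handover step where enforcement can be lost.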
With decades of collective experience across data delivery, one thing is clear:
Trusted, governed data is not optional for AI activation.
But trust does not come from documentation alone.
It comes from enforceable execution.
Latttice, a data product workbench, bridges that gap. It connects governance intent with operational reality. It ensures governance is not just visible, but active.
Not as an overlay.
As an execution layer.
What Business Teams Actually Experience
Business teams do not need more governance documentation.
They need confidence.
  • Confidence to Combine: combine data sources knowing controls enforce automatically.
  • Confidence to Train: train models with assurance that governance travels with the data.
  • Confidence to Automate: automate decisions without manual intervention or compliance risk.
  • Confidence to Scale: scale AI initiatives knowing protection is embedded, not bolted on.
When governance is active, AI becomes sustainable.
When governance remains static, AI remains constrained.
Cameron's blog lays out the strategic foundation for this shift.
The next phase of governance is not more artifacts.
It is operational enforcement.
Because knowing data exists was never enough.
Join a Data Conversation,
Lili Marsh.

References
  • Harvard Business Review. "Why Target's Canadian Expansion Failed."
  • U.S. Securities and Exchange Commission. "SEC Charges Knight Capital With Violations of Market Access Rule," 2013.
  • Basel Committee on Banking Supervision. "Principles for Effective Risk Data Aggregation and Risk Reporting" (BCBS 239).
  • European Central Bank. Supervisory Review and Evaluation Process findings on data governance and risk reporting.
  • McKinsey & Company. "The State of AI" annual reports.
  • Deloitte. Research and insights on data governance operating models and AI scaling challenges.
  • Collibra. Thought leadership on unified governance and lifecycle management of trusted data products.
  • IBM. Publications on Trust and Transparency in AI.
  • Amazon Web Services. Governance and guardrails best practices for AI and data platforms.