In the past 18 months, I have presented to boards at financial institutions, healthcare systems, technology companies, and government agencies across four continents.
The pattern is consistent: the board knows AI is important. The board approves AI budgets. The board has no framework for evaluating whether the AI investments are being managed responsibly.
This is a fiduciary gap. And it is one of the most significant unaddressed governance risks in enterprise organizations today.
The Governance Challenge
Board directors are not expected to be AI engineers. They are expected to ask the right questions, evaluate the quality of answers, and ensure that management has adequate governance structures in place.
The challenge is that most governance frameworks were designed for a world in which consequential decisions were made by humans. In that world, accountability was clear: a named individual made a decision, and that individual could be held responsible.
In the AI world, consequential decisions are made by systems that were designed by teams, trained on data assembled by other teams, and deployed by yet another team. Accountability is diffuse. Errors are statistical. And the board — the ultimate governance body — often has no visibility into any of it.
The EU AI Act changes this equation. It imposes binding obligations — backed by substantial administrative fines — on organizations that provide or deploy high-risk AI systems. And governance failures at that scale can expose directors to personal liability under existing directors' duties. The era of board-level AI ignorance is over.
Ten Questions Every Board Director Should Ask
1. What AI systems are currently in production in our organization, and what decisions do they make?
2. Who is the named human owner accountable for each high-risk AI system?
3. What is our organization's exposure under the EU AI Act, and what is the compliance roadmap?
4. How does our AI governance framework align with ISO 42001?
5. What bias testing was conducted before the most recent AI deployment, and what were the results?
6. What is the appeal and remediation process for individuals affected by automated AI decisions?
7. How is our AI governance structure integrated into our enterprise risk management framework?
8. What is the board's escalation path if an AI system causes a significant harm event?
9. How are we measuring the ROI of our AI investments, and what do the current results show?
10. What AI governance training have our executives and board directors received in the past 12 months?
Architecture Implications
Boards that take AI governance seriously need structural mechanisms to fulfill that responsibility.
The most effective approach I have seen is the establishment of an AI and Technology Governance Committee at the board level — distinct from the audit committee, with a mandate that covers AI risk, AI ethics, AI compliance, and AI strategy. This committee should include at least one director with substantive AI knowledge, and should receive quarterly briefings from the Chief AI Officer or equivalent.
Below the board, the organization needs a Chief AI Officer with explicit governance authority — the power to halt AI deployments that do not meet governance standards. This role should have a reporting line to the CEO and a dotted line to the board committee.
The governance architecture should also include an independent AI ethics review panel — a cross-functional body that reviews proposed AI deployments before launch, with authority to recommend modifications or rejection.
Leadership in the AI Era
The boards that will earn the trust of shareholders, regulators, employees, and customers in the coming decade are the boards that treat AI governance as a core element of their oversight mandate — not a technical matter to be delegated entirely to management.
This requires board education. Not deep technical training, but strategic literacy: an understanding of how AI systems work at a level sufficient to evaluate management's explanations, assess the quality of governance structures, and ask questions that reveal whether the organization is truly managing AI risk or simply performing compliance theater.
At Coach Leonardo University, we offer board-level AI governance education programs designed to build exactly this literacy. Our approach is grounded in 30 years of enterprise experience and the Thinking Into Results™ methodology that Bob Proctor personally taught me — a framework that addresses not just the technical architecture of governance, but the leadership paradigm required to make it real.
The Future of AI Governance
Board-level AI governance is not a future requirement. It is a present one.
The organizations whose boards are asking the right questions today — who have established governance committees, who are demanding ISO 42001 compliance, who are holding management accountable for AI risk — these organizations will enter the next phase of AI deployment with the trust infrastructure already built.
Their competitors, whose boards approved AI budgets without asking hard governance questions, will spend the next several years in regulatory remediation and public trust repair.
The choice is being made now. And it is being made at the board level.