Talking about AI with boards (without the "left behind" spiral)
How to discuss AI with boards: move from hype to business decisions. Define the bet, ROI timeframe, risks, guardrails and signals that prove value.
Nearly every board AI conversation I’m in starts the same way.
“We’re worried about being left behind.”
That’s a natural first reaction right now. Even people working at the edge of AI capabilities are struggling to keep up with the pace of change.
The real questions people should answer are:
- What will actually change?
- Where is the biggest opportunity?
- What’s the threat?
- How long until we see ROI?
If you can answer those clearly, you don’t need big claims.
You just need to turn AI from a topic into a set of business-centric decisions.
The problem: most people lead with noise
AI is big, shiny and often seems magical.
Demos. Tools. Pilots. Workshops. Agents. Another vendor.
A lot of activity. Not much clarity.
Just noise.
The board doesn’t need a tour of the models.
It needs a view on:
- what changes,
- how you’ll govern it, and
- how you’ll know it’s working.
Start with the bet (not the tool)
A strategic bet isn’t a gamble. It’s a testable hypothesis about behaviour. The simplest way to talk about AI with a board is to reduce it to the following:
- Which workflow are we changing? (Be specific.)
- Baseline: what does it cost today? (time, money, risk, delay)
- Hypothesis: what do we believe will change?
- Signal: what outcome will move if it’s working?
- Decision rule: what do we do if it works (scale?), and what do we do if it doesn’t?
- Guardrails: what are the boundaries? (data, escalation, audit)
If you can’t answer these, you don’t have a bet. You have a brainstorm.
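If it helps to make this concrete: here is a minimal sketch of what one bet might look like written down as a structured record, a "bet register" entry if you like. The field names and example numbers below are purely illustrative assumptions, not a template your organisation has to adopt.

```python
# A minimal sketch of a bet as a structured record, assuming you keep a
# simple "bet register" rather than slideware. All names and numbers here
# are illustrative, not a prescription.
from dataclasses import dataclass, field


@dataclass
class AIBet:
    workflow: str    # which workflow we are changing (be specific)
    baseline: str    # what it costs today: time, money, risk, delay
    hypothesis: str  # what we believe will change
    signal: str      # the outcome metric that should move
    scale_rule: str  # what we do if it works
    stop_rule: str   # what we do if it doesn't
    guardrails: list[str] = field(default_factory=list)  # data, escalation, audit


# Hypothetical example: a support-triage bet.
bet = AIBet(
    workflow="Tier-1 support ticket triage",
    baseline="~12 minutes per ticket, 18% misrouted",
    hypothesis="AI-assisted routing cuts handling time and misroutes",
    signal="Median time-to-resolution and misroute rate over 8 weeks",
    scale_rule="Roll out to all queues if misroutes drop below 10%",
    stop_rule="Revert to manual triage and write up what we learned",
    guardrails=[
        "no customer data leaves approved systems",
        "human review before any customer-facing reply",
    ],
)
```

The format doesn’t matter. What matters is that every field is specific enough to be falsifiable, so the board is approving a test, not a belief.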
A note on ROI (because boards will ask)
Be realistic about timeframes. Most organisations see the first returns in phases:
- Weeks: assistive use inside a team (speed + consistency)
- Months: workflow changes with human-in-loop controls (throughput + quality)
- Quarters: deeper operating model shifts (roles, handoffs, systems)
If you present “transformational ROI” with no timeline, you’re basically asking the board to fund belief.
Where the opportunity usually is
The honest answer is: where the business has friction.
In most organisations, the first real gains aren’t glamorous. They’re operational:
- customer support triage / faster resolution
- internal knowledge retrieval (stop reinventing answers)
- sales/admin workflows (drafting, summarising, routing, QA)
- finance/ops tasks that are repetitive, rules-based, and error-prone
AI works best where the system is already well defined and the constraints are time, throughput and consistency.
But note: if the underlying process is broken, AI will scale the mess faster.
What the threat actually is
There are real threats. Most board decks ignore them (the classic elephant in the room), which is why boards stay nervous.
The threat isn’t “competitors will use AI.” The threats are more boring, and more dangerous:
- Execution risk: pilots everywhere, adoption nowhere
- Quality risk: plausible errors entering real workflows
- Data/vendor risk: unclear boundaries, hidden dependencies
- Reputational risk: one visible failure breaks trust
A board doesn’t need certainty, but it does need to see that you’re addressing these risks rather than hoping they don’t happen.
Here are a few examples of what can go wrong:
1. Apple Intelligence notification summaries paused (January 2025)
What happened: Apple paused AI notification summaries for news/entertainment apps in iOS 18.3 after widely reported inaccurate summaries. (BBC, 2025)
Failure: “Authoritative” summarisation of third-party content without sufficient accuracy guarantees.
2. Deloitte Australia refunded government work after AI-generated errors (July 2025)
What happened: Deloitte Australia agreed to a partial refund after a government report contained apparent AI-generated errors, including fabricated references; a revised version was issued later. (The Guardian, 2025)
Failure: AI-assisted drafting without robust citation verification and QA for a formal, high-stakes deliverable.
There are plenty more examples. The point isn’t to panic. The point is to treat quality and trust as first-order design constraints.
You can read more AI fails here.
Final thought
The fear of being left behind is a signal. Not a plan.
A good board conversation turns that signal into:
- one or two clear bets,
- outcome-linked signals,
- realistic timelines, and
- guardrails that match the risk.
Then you can move from “AI strategy” as a topic to “AI” as part of the operating system of the company.
About Riaz
I've spent over 20 years building and scaling B2B products, services and marketing technology - from early-stage startups through to exits, and now as CEO of Radiate B2B - the B2B ad platform.
Along the way I've led teams, launched products, built and sold companies, and spoken around the world about data, AI and the future of marketing and work.
Today I split my time between working directly with companies as a consultant and fractional operator, mentoring founders and leaders, and speaking to audiences who need someone to translate what's happening in technology into decisions they can act on.