Talking about AI with customers (without guessing their questions)
How to sell AI to customers: skip generic demos, ask diagnostic questions, define outcomes, manage risk, and prove value with outcome-linked metrics and pilots.
Customers will try AI products. That’s not the problem. The problem is getting it to stick.
Trying AI is easy. Operationalising it is hard.
But unless it’s tied to an outcome, it doesn’t stick.
Faster. Cheaper. Safer. Less painful.
The questions customers ask about AI vary wildly.
Industry. Regulation. Risk tolerance. Procurement. Culture. Who’s in the room.
Predicting questions helps. But I start with something more useful: what they’re optimising for.
- Is this worth it?
- Is it safe enough?
- Will it actually get used?
You don’t get to those answers with a generic demo or pitch.
You get there by asking better diagnostic questions.
The problem: most AI conversations start in the wrong place
AI makes it easy to dazzle.
Demos. Features. Agents. Autonomous workflows.
All can be magical.
Customers will try capabilities. They adopt outcomes.
So the goal of the conversation is not to prove AI is clever.
It’s to agree:
- what changes,
- what “good” looks like,
- and what happens when it goes wrong.
The diagnostic questions I use (and why they work)
These aren’t “AI questions”.
They’re operational questions that reveal what matters in their world.
1) “Where does this workflow slow you down today?”
You’re looking for friction, delay, handoffs, rework, bottlenecks.
Once you name the constraint, you can measure the improvement.
2) “What do you do today when things go wrong?”
This is the fastest way to find the real risk controls. Escalations, manual checks, approvals, second pairs of eyes. AI needs to fit into this system and not pretend it doesn’t exist.
3) “What’s the cost of being wrong here?”
This question sets the safety bar.
Being wrong when summarising a news article is very different from being wrong in:
- legal/compliance,
- finance,
- regulated comms,
- customer-facing commitments.
It tells you whether the first step should be assistive, supervised, or never automated, and how to balance auditability against speed.
4) “Who gets blamed if this fails?”
This sounds blunt, but it’s the truth.
It reveals:
- the political risk,
- the internal buyer vs user split,
- how conservative the rollout needs to be.
If you don’t know where blame lands, you’ll design the wrong adoption path.
5) “What data is sensitive in your world?”
Not “are you worried about data security?” (everyone is, once something fails).
This question finds the boundaries:
- customer PII,
- contracts,
- health data,
- pricing,
- IP,
- internal strategy.
And once you know the boundaries, you can design around them rather than arguing about them.
6) “What would make you say ‘this is working’ in 30 days?”
This forces outcomes.
Customers often default to activity metrics (“usage”, “number of prompts”).
Bring it back to a business signal:
- cycle time,
- error rate / rework rate,
- time-to-resolution,
- throughput per head,
- escalation rate.
The metric changes. The principle doesn’t: measure outcomes, not activity.
7) “What’s the smallest pilot you’d actually trust?”
This question kills fantasy and creates momentum.
It turns “we need to see it working” into:
- scope,
- constraints,
- review process,
- success criteria.
And it creates a path that procurement and compliance can live with.
What these questions are really doing
They give you a way to agree three things early:
- Value: What changes in the workflow, and what outcome improves?
- Trust: What are the boundaries, and what happens when it’s wrong?
- Adoption: How do we roll this out without breaking the organisation?
A practical way to run the conversation
Once you’ve asked the diagnostics, you can summarise back in a way that builds confidence:
- “Here’s the workflow we’re changing.”
- “Here’s what good looks like.”
- “Here’s where we’re being conservative.”
- “Here’s how we’ll measure it.”
- “Here’s the rollout path.”
This then becomes the email they can forward internally - whether that’s to procurement, security, or the exec sponsor. Whoever needs to say yes.
Final thought
AI conversations go wrong when we assume every customer worries about the same things. They don’t. But every customer is trying to make a decision.
So don’t guess their questions. Ask the questions that reveal what they’re optimising for, what risk they can tolerate, and what proof they need to move.
That’s how AI becomes an operational change, not a magic trick.
About Riaz
I've spent over 20 years building and scaling B2B products, services and marketing technology - from early-stage startups through to exits, and now as CEO of Radiate B2B - the B2B ad platform.
Along the way I've led teams, launched products, built and sold companies, and spoken around the world about data, AI and the future of marketing and work.
Today I split my time between working directly with companies as a consultant and fractional operator, mentoring founders and leaders, and speaking to audiences who need someone to translate what's happening in technology into decisions they can act on.