TL;DR
- The JTBD + Kano workshop turns customer interviews, outcome scoring, and feature classification into one prioritization process instead of three disconnected exercises.
- JTBD identifies the job and the desired outcome. ODI scoring measures how underserved that outcome is. Kano shows whether the feature behaves like a must-have, performance driver, or delighter.
- The output is not just a research summary. It is a shorter, more defensible feature stack with clearer reasoning for what to build, what to delay, and what to stop debating.
- This article includes two lightweight workshop assets: a JTBD interview script and a Kano survey template.
Most roadmap debates are arguments between incomplete truths.
Sales brings demand. Product brings feasibility. Executives bring strategy. Customer research brings quotes. Usage data brings adoption signals. Each input can be useful and still leave the team unable to answer the real prioritization question: which feature solves an important underserved job in a way that actually changes customer satisfaction?
JTBD alone is not enough because important jobs still need translation into outcomes and feature decisions. Kano alone is not enough because a feature can delight users and still address a weak opportunity. A feature request list is obviously not enough because it confuses requested solutions with underlying progress.
The JTBD + Kano workshop exists to connect the layers properly. It is a research-to-roadmap operating session, not a whiteboard theater session.
What Happens In The Workshop
The cleanest version runs in four stages.
| Stage | Main question | Output |
|---|---|---|
| JTBD interviews | What are customers actually trying to accomplish? | Job statements, switching triggers, and outcome language |
| ODI scoring | Which outcomes are important and underserved? | Opportunity-ranked job and outcome list |
| Kano classification | How does each feature affect satisfaction? | Must-be, performance, delighter, or indifferent map |
| Combined prioritization | What should be built first and why? | Feature priority stack with explicit reasoning |
1. JTBD interviews
The first pass is qualitative. You talk to customers, recent evaluators, or churned accounts and reconstruct the decision story around the job. What pushed them to look? What old workaround failed? What outcome mattered? What anxiety almost stopped the switch?
The goal is not to collect feature ideas. It is to extract stable job statements in the format: "When I [situation], I want to [motivation], so I can [outcome]."
This is also where the Forces of Progress become useful. Push, pull, habit, and anxiety are often what explain why one superficially similar feature matters more than another.
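Interview output is easier to score later if it lands in a consistent structure. Here is a minimal sketch of one possible capture format; the field names and the example are illustrative assumptions, not a standard JTBD schema:

```python
# Hypothetical capture format for interview output, so job statements and
# forces-of-progress notes feed the later scoring steps in one shape.
# Field names are illustrative, not a standard JTBD schema.

from dataclasses import dataclass, field

@dataclass
class JobStatement:
    situation: str   # "When I ..."
    motivation: str  # "... I want to ..."
    outcome: str     # "... so I can ..."
    push: list[str] = field(default_factory=list)    # what made the old way fail
    pull: list[str] = field(default_factory=list)    # what attracted them to a new way
    habit: list[str] = field(default_factory=list)   # inertia keeping the old way alive
    anxiety: list[str] = field(default_factory=list) # fears that almost blocked the switch

# Hypothetical example from a churn-review interview
job = JobStatement(
    situation="I'm preparing the quarterly renewal review",
    motivation="spot accounts that are quietly disengaging",
    outcome="intervene before the renewal conversation starts",
    push=["the spreadsheet export kept missing recently churned users"],
    anxiety=["worried a new tool would misclassify healthy accounts"],
)
print(f"When {job.situation}, I want to {job.motivation}, so I can {job.outcome}.")
```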
2. ODI scoring
Once the jobs are visible, the next question is which outcomes are actually underserved. Outcome-Driven Innovation helps here by separating importance from satisfaction. An outcome can matter a lot and still be badly served. That is the kind of gap worth prioritizing.
This is the step that prevents the workshop from drifting into purely qualitative storytelling. Instead of saying "customers mentioned this a lot," the team can say, "this outcome is consistently important and still poorly served."
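If you want the scoring explicit, the widely cited ODI opportunity formula is importance plus the unmet gap: opportunity = importance + max(importance − satisfaction, 0), with both inputs aggregated to a 0-10 scale per outcome. A minimal sketch, with hypothetical outcome statements and survey aggregates:

```python
# Minimal sketch of ODI opportunity scoring. Assumes importance and
# satisfaction are already aggregated to a 0-10 scale per outcome
# (e.g., percent of respondents rating 4-5 on a 5-point scale, times 10).

def opportunity_score(importance: float, satisfaction: float) -> float:
    """Ulwick's opportunity formula: importance + max(importance - satisfaction, 0)."""
    return importance + max(importance - satisfaction, 0.0)

# Hypothetical outcomes: (importance, satisfaction)
outcomes = {
    "identify at-risk accounts quickly": (8.6, 3.2),
    "export reports without manual cleanup": (7.1, 6.8),
    "customize dashboard colors": (3.4, 5.0),
}

ranked = sorted(
    ((opportunity_score(imp, sat), outcome) for outcome, (imp, sat) in outcomes.items()),
    reverse=True,
)
for score, outcome in ranked:
    print(f"{score:4.1f}  {outcome}")
```

Important but badly served outcomes float to the top (the at-risk example scores 14.0); outcomes that are already well served, or simply unimportant, fall away.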
3. Kano classification
Kano then changes the decision again. It asks how people react when the feature is present and when it is absent. That gives the team a different kind of answer:
- Must-be: absence hurts more than presence helps
- Performance: satisfaction scales with how well the feature is executed
- Delighter: presence creates disproportionate positive reaction
- Indifferent: little satisfaction change either way
That matters because two features can address the same broad job and still behave very differently in customer perception.
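The classification itself comes from a paired survey: one question asks how the respondent feels if the feature is present (functional), one asks how they feel if it is absent (dysfunctional), and the answer pair maps to a category through the standard Kano evaluation table. A minimal sketch, using this article's vocabulary (delighter for attractive, performance for one-dimensional) and hypothetical responses:

```python
# Minimal sketch of the standard Kano evaluation table. Answer codes:
# L = like, M = must-be/expect it, N = neutral, W = can live with, D = dislike.
# "delighter" = attractive, "performance" = one-dimensional in Kano's terms.

from collections import Counter

KANO_TABLE = {
    # (functional answer, dysfunctional answer): category
    ("L", "L"): "questionable", ("L", "M"): "delighter",   ("L", "N"): "delighter",
    ("L", "W"): "delighter",    ("L", "D"): "performance",
    ("M", "L"): "reverse",      ("M", "M"): "indifferent", ("M", "N"): "indifferent",
    ("M", "W"): "indifferent",  ("M", "D"): "must-be",
    ("N", "L"): "reverse",      ("N", "M"): "indifferent", ("N", "N"): "indifferent",
    ("N", "W"): "indifferent",  ("N", "D"): "must-be",
    ("W", "L"): "reverse",      ("W", "M"): "indifferent", ("W", "N"): "indifferent",
    ("W", "W"): "indifferent",  ("W", "D"): "must-be",
    ("D", "L"): "reverse",      ("D", "M"): "reverse",     ("D", "N"): "reverse",
    ("D", "W"): "reverse",      ("D", "D"): "questionable",
}

def classify_feature(responses: list[tuple[str, str]]) -> str:
    """Tally each respondent's (functional, dysfunctional) pair and
    return the most frequent Kano category for the feature."""
    counts = Counter(KANO_TABLE[pair] for pair in responses)
    return counts.most_common(1)[0][0]

# Hypothetical survey responses for one feature
print(classify_feature([("L", "D"), ("L", "D"), ("M", "D"), ("L", "W")]))
# -> "performance": 2 votes, vs 1 must-be and 1 delighter
```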
4. Combined prioritization
The final stage combines the opportunity layer and the satisfaction layer. In the product system this workshop came from, the combined priority score weights four inputs: JTBD opportunity, Kano impact, expected retention impact, and implementation effort.
You do not need to worship the formula. The point is to force the tradeoffs into the open so that overrides are deliberate instead of political defaults.
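The exact weights and scales are a team decision; the article names the four inputs but not a formula. Here is one hypothetical shape, value in the numerator and effort in the denominator. The multipliers and example features below are made up for illustration, not the workshop's canonical scoring:

```python
# Hypothetical combined priority score. The four inputs come from the
# workshop; the weights, scales, and multipliers are illustrative assumptions.

from dataclasses import dataclass

# Illustrative Kano multipliers: must-be gaps and performance gains tend to
# move satisfaction more reliably than delighters; indifferent adds nothing.
KANO_WEIGHT = {"must-be": 1.0, "performance": 0.9, "delighter": 0.6, "indifferent": 0.0}

@dataclass
class Feature:
    name: str
    opportunity: float       # ODI opportunity score, 0-20 scale
    kano: str                # Kano category from the survey
    retention_impact: float  # expected retention effect, 0-10 (team estimate)
    effort: float            # implementation effort, 1-10 (higher = costlier)

def priority(f: Feature) -> float:
    """Value over cost; the weighting scheme is a placeholder, not a standard."""
    value = f.opportunity * KANO_WEIGHT[f.kano] + f.retention_impact
    return value / f.effort

features = [
    Feature("at-risk account alerts", opportunity=14.0, kano="performance",
            retention_impact=7.0, effort=5.0),
    Feature("dashboard theming", opportunity=3.4, kano="delighter",
            retention_impact=1.0, effort=3.0),
]
for f in sorted(features, key=priority, reverse=True):
    print(f"{priority(f):4.2f}  {f.name}")
```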
Use the lightweight workshop assets
The full workshop system is larger, but these two templates are enough to start a serious first pass: one interview script for job discovery and one Kano survey template for feature classification.
Why JTBD and Kano Need Each Other
The easiest way to misunderstand this workshop is to treat one method as a substitute for the other.
JTBD without Kano
JTBD tells you what people are trying to get done and which outcomes feel underserved. What it does not tell you cleanly is how a feature's presence or absence changes satisfaction dynamics. Two features can address the same job and still create very different kinds of reaction once they are live.
Kano without JTBD
Kano tells you whether a feature behaves like a must-have, performance feature, or delighter. What it does not tell you is whether the underlying job is strategically important enough to deserve scarce roadmap capacity. A delighter attached to a weak opportunity is still not a strong priority.
JTBD / ODI answers whether the opportunity is worth solving. Kano answers what kind of satisfaction effect the solution creates. The workshop becomes useful when those two answers stop living in separate decks.
This is also what makes the workshop distinct from the existing JTBD roadmap loop article on the site. That article explains the broader learning system. This workshop article is about the actual research and prioritization mechanics that feed that system.
What The Team Leaves With
A good workshop should not end with a pile of sticky notes and a vague promise to "digest the findings." It should end with artifacts the roadmap can use immediately.
A clean job stack
Not just scattered interview insights. A ranked set of job statements and desired outcomes that the team can revisit later.
A feature classification map
Not just a list of proposed features. A view of which features are table stakes, which create meaningful performance gains, and which truly change the perceived experience.
A combined priority stack
This is the real handoff. The team should be able to point to the top features and explain why they are above the line: strong opportunity, strong satisfaction effect, plausible retention value, and reasonable effort relative to alternatives.
A shorter debate surface
This is often the hidden win. The workshop removes a surprising amount of noise. Some features stop looking urgent once their job/opportunity score is weak. Others stop looking optional once their absence is clearly a must-be problem.
The workshop is strongest when it feeds a real operating loop
The workshop should not end in a research archive. It should flow into the roadmap, feature instrumentation, and the adoption-to-retention loop that tells the team whether the prioritization logic was right.
Common Workshop Mistakes
Most failed JTBD/Kano efforts collapse for avoidable reasons.
Going straight to surveys
Without interviews, the survey often measures the team's assumptions rather than the customer's actual job structure. Interviews should come first because they reveal which jobs and outcomes deserve measurement in the first place.
Writing solution statements instead of outcomes
"Add a dashboard" is not an outcome. "Reduce the time it takes to identify which accounts are at risk" is. The more solution-biased the workshop language gets, the less useful the prioritization will be.
Using internal jargon in the Kano survey
If the feature description reads like a product spec, the satisfaction data will be noisy. "Configurable webhook ingestion" gives a respondent nothing to react to; "you are notified automatically the moment new data arrives" does. The descriptions need to sound like the user's world, not the product team's architecture diagram.
Treating the formula as an autopilot
The combined score should start a conversation, not replace judgment. Strategic context can still override a score. What matters is that the override is explicit and defendable.
Ending with research instead of decisions
The workshop should always produce a sharper feature priority stack. If the team leaves with only insights and no clearer line between now, later, and no, the process stopped too early.
FAQ
How many interviews do you need for a useful first pass?
Usually 5-12 interviews are enough to create a serious first draft of the job landscape. Smaller companies can often start there with qualitative scoring before they add a formal ODI or Kano survey layer.
Do you need both an ODI survey and a Kano survey every time?
No. Early-stage teams can run a lighter workshop with interviews plus qualitative scoring. As the stakes rise, the survey layer becomes more valuable because it makes tradeoffs easier to defend.
Can this work for existing products, not just net-new features?
Yes. In many cases it is more useful for existing products because it reveals which current features are really must-haves, which are weak differentiators, and which create very little customer value despite internal effort.
What makes this different from a normal prioritization workshop?
Most prioritization workshops start with feature ideas and argue downward. This one starts with customer jobs and satisfaction logic, then works forward into features.
If the roadmap still feels like politics, the customer evidence layer is probably too weak.
Run the workshop when the team needs a clearer link between what customers are trying to do, how features affect satisfaction, and what deserves the next block of roadmap capacity.