TL;DR

  • A JTBD workshop takes about 2.5 hours with 4-6 people: a product manager, a designer, an engineer, and a customer success rep. The goal is to agree on the core job your product serves and map your roadmap to it.
  • Prerequisite: 5-10 switch interviews completed before the workshop. Without interview data, the workshop is a guessing exercise. With interview data, it becomes a synthesis exercise.
  • The 5 activities: share interview findings, draft the job statement from the data, map desired outcomes and calculate opportunity scores, map existing features to the job to identify gaps, and build the roadmap from the scores.
  • The output is a job statement canvas, a prioritized feature list ranked by opportunity score, and a gap analysis showing which job steps your product serves well and which it does not.
  • The biggest trap: teams try to agree on the job statement in the first 10 minutes. Spend the first 30 minutes sharing interview findings. The job statement emerges from data, not from debate.

Why Most Product Teams Build the Wrong Features

Wrong features. Right team. Broken process.

Product teams at B2B SaaS companies ship features that nobody asked for. Not because the team is incompetent, but because the prioritization process is broken. The roadmap gets built from the loudest opinions in the room — not from evidence about what retained customers are actually trying to accomplish.

This costs real money. A SaaS company with 20 engineers shipping 2 features per sprint is investing roughly $150,000 per sprint in development. If half those features do not advance the core job, you are burning $75,000 every 2 weeks on work that does not move retention or revenue.

The job statement should make half your feature requests obviously irrelevant.

The JTBD workshop fixes this by forcing the team to agree on the job before discussing features. Once the job is defined, every feature request gets evaluated against one question: does this help the customer complete the job? That single filter eliminates months of wasted engineering time.

"The workshop does not produce a roadmap. It produces alignment. A roadmap is just the artifact. The real output is a team that agrees on the job, the outcomes, and the priorities."

— Jake McMahon, ProductQuant

If your team has already run switch interviews and has interview summaries, this guide gives you the exact agenda to turn that data into a prioritized roadmap. If you have not run switch interviews yet, start with our JTBD interview script first, then come back.

The Workshop Agenda: 5 Activities in 2.5 Hours

Five activities. Strict order. No shortcuts.

The workshop follows a strict sequence. Each activity builds on the previous one. Do not skip steps. Do not reorder them. The structure exists to prevent the team from jumping to solutions before understanding the job.

JTBD Workshop: 2.5-Hour Facilitation Agenda
A structured sequence for moving from raw research to a prioritized roadmap.

Preparation: Who Attends and What to Bring

Right people. Right data. Right outcome.

The workshop requires 4-6 people. More than 6 and it becomes a debate. Fewer than 4 and you miss critical perspectives. The required attendees are the product manager who owns the roadmap, the designer who understands the user experience of the job, an engineer who understands technical feasibility, and a customer success representative who hears the job from customers every day. You can optionally include a sales rep or founder.

Before the workshop, prepare 5-10 switch interview summaries. Each summary should be one page capturing the trigger, the old solution, the evaluation, the switch, and the job. Bring a whiteboard or Miro board for the job statement canvas, sticky notes for outcome mapping, and a timer. Each activity has a strict time limit.

Do not prepare a roadmap. The workshop produces the roadmap. If you bring one, you will defend it instead of building it.

The insight: The right attendees with the right data produce alignment. The wrong attendees produce debate.

Activity 1: Share Interview Findings (30 Minutes)

The first activity gets everyone on the same page about what the interviews revealed. Each person reads 2-3 interview summaries aloud, focusing on what the customer was trying to accomplish, what pushed them away from the old solution, what pulled them toward your product, and what almost stopped them.

The rules are simple. No debate during sharing. Just share. Write recurring themes on the board. If someone says "our customer wants X," ask "which customer said that?" The output is a shared understanding of what retained customers are trying to accomplish.

In practice, this sounds like one team member reading: "Customer A was a VP of Analytics at a $50M company. They used spreadsheets and Amplitude. The trigger was a board meeting where they could not answer a question about retention. The job was to compile a defensible narrative for the board. They chose this product because it produces board-ready reports in one click."

Another team member reads: "Customer B was a founder at a $3M company using Mixpanel. The trigger was hiring their first analyst who could not figure out how to build custom reports. The job was to get answers without needing SQL. They chose this product because the drag-and-drop builder meant they did not need to hire a data analyst."

The recurring theme: both customers needed to answer business questions without technical expertise. That is the job. Not "product analytics." The distinction matters because it changes which features you prioritize.

The insight: When you read interviews aloud as a team, the job writes itself. Debate only happens when you skip this step.

Activity 2: Draft the Job Statement (20 Minutes)

The job statement follows a specific format: When [context], I want to [true job], so I can [desired outcome]. Each person writes their own job statement on a sticky note. The group then clusters similar statements, finds the common thread, drafts one statement that captures it, and reads it aloud to check against what customers actually said.

There are 2 critical rules here. First, no feature names in the job statement: the job exists without your product. If someone says "our product helps them do X," reframe it as "they want to X." Second, the job statement should be specific enough to evaluate features against.

In a real workshop, the team attempted several job statements before landing on the right one. Their first attempt was too broad: "When they have data, they want to analyze it, so they can make better decisions." Every analytics product could claim this. Their second attempt was better but still feature-focused: "When they need to report to stakeholders, they want to create dashboards, so they can show progress."

The final job statement, after 15 minutes of iteration, was: "When my board meeting is in 3 days, I want to compile our key metrics into a defensible narrative so I can answer any question the board asks." This is specific enough to evaluate every feature against. Does this feature help someone compile a defensible narrative under time pressure? If yes, build it. If no, do not.

The insight: A good job statement makes half your backlog obviously irrelevant. If it does not, it is too broad.

Activity 3: Map Outcomes and Score Opportunities (45 Minutes)

This is the most important activity. It transforms vague customer needs into a quantified prioritization that the entire team agrees on. The process has 4 steps.

Scoring Desired Outcomes: Importance vs. Satisfaction
Using the Opportunity Score formula to identify the most critical market gaps.

First, break the job into 5-8 steps. For "compile a board-ready report," the steps might be: identify which metrics the board cares about, pull data from all sources, synthesize into a narrative, build the presentation, and practice the delivery.

Second, for each step, list 3-5 desired outcomes. When pulling data from all sources, a customer might want to export data in a consistent format, avoid manual copy-pasting, know when data is stale, and compare to last quarter automatically.

Third, rate each outcome on importance from 1-5 and satisfaction from 1-5. Use the team's collective judgment informed by the interview data.

Fourth, calculate the Opportunity Score using this formula: Importance + max(Importance minus Satisfaction, 0). Outcomes scoring above 8 are your highest-priority gaps. Outcomes below 6 are already well-served. Do not invest more in outcomes that are already well-served.

| Outcome | Importance | Satisfaction | Opp. Score | Priority |
|---|---|---|---|---|
| Export data in consistent format | 5 | 2 | 8 | Medium |
| Avoid manual copy-pasting | 5 | 1 | 9 | High |
| Know when data is stale | 4 | 1 | 7 | Medium |
| Compare to last quarter automatically | 5 | 1 | 9 | High |
| Auto-generate executive summary | 4 | 1 | 7 | Medium |
| Pull data from all sources in 1 click | 5 | 2 | 8 | Medium |
| Flag anomalies before the board sees them | 5 | 1 | 9 | High |
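The scoring arithmetic is simple enough to sanity-check in a few lines. Here is a minimal Python sketch of the formula and priority bands described above (the function names are mine, not part of the workshop materials):

```python
def opportunity_score(importance: int, satisfaction: int) -> int:
    """Opportunity Score = Importance + max(Importance - Satisfaction, 0).

    Both inputs are team ratings on a 1-5 scale.
    """
    return importance + max(importance - satisfaction, 0)


def priority(score: int) -> str:
    """Priority bands from the article: above 8 is a high-priority gap,
    below 6 is already well-served."""
    if score > 8:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"


# Two rows from the table above:
print(opportunity_score(5, 1), priority(opportunity_score(5, 1)))  # 9 High
print(opportunity_score(5, 2), priority(opportunity_score(5, 2)))  # 8 Medium
```

The `max(..., 0)` clamp matters: an over-served outcome (satisfaction above importance) does not get a negative adjustment, it simply scores at its raw importance.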

The 3 outcomes scoring 9 are the roadmap priorities. They carry high importance and low satisfaction, which is exactly where investment pays off. Every feature built this quarter should advance one of these 3 outcomes.

The insight: Opportunity scores replace political debates with arithmetic. The highest score wins, every time.

A feature without a job to advance is just engineering time spent on an opinion. The opportunity score turns opinions into numbers.

Free Resource

Get the JTBD Workshop Exercise Templates

6 exercises with a complete FigJam board, outcome scoring spreadsheet, and synthesis playbook. Everything you need to run this workshop yourself.

Activity 4: Map Features to Outcomes (30 Minutes)

For each high-opportunity outcome, ask whether your product currently serves it. If yes, note the feature. If no, note the gap and estimate the effort: quick fix, medium effort, or major project. The resulting gap analysis becomes the input for the final activity.
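One lightweight way to record the gap analysis is a small record per outcome, sketched here in Python. The field names and the "CSV export" feature are illustrative, not from the workshop materials; sticky notes or a spreadsheet work just as well.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class OutcomeGap:
    outcome: str
    opportunity_score: int
    served_by: Optional[str]  # existing feature name, or None if nothing serves it
    effort: Optional[str]     # "quick fix" | "medium effort" | "major project"


# Example rows drawn from the outcomes scored earlier (feature name hypothetical):
analysis = [
    OutcomeGap("Avoid manual copy-pasting", 9, None, "medium effort"),
    OutcomeGap("Export data in consistent format", 8, "CSV export", None),
    OutcomeGap("Flag anomalies before the board sees them", 9, None, "major project"),
]

# The gaps are the outcomes nothing currently serves:
gaps = [g.outcome for g in analysis if g.served_by is None]
```

Keeping the effort estimate next to the score means the final activity is a pure lookup rather than a fresh debate.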

The insight: The gap analysis shows exactly where your product is over-serving and under-serving the job. Teams are always surprised by both.

Activity 5: Build the Roadmap (25 Minutes)

The prioritization rules are straightforward. High opportunity score plus quick fix means do it this sprint. High opportunity score plus medium effort means plan for next quarter. High opportunity score plus major project means evaluate ROI and consider a dedicated project. Low opportunity score means do not build it, regardless of how easy it is.
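The prioritization rules above reduce to a small decision table. A Python sketch, with one simplifying assumption of mine: any score not above 8 is treated as "low opportunity" per the thresholds from the scoring activity.

```python
def roadmap_decision(score: int, effort: str) -> str:
    """Map an outcome's opportunity score and effort estimate to a roadmap slot.

    Scores above 8 count as high opportunity; anything else is not built,
    regardless of how easy it would be.
    """
    if score <= 8:
        return "do not build"
    if effort == "quick fix":
        return "this sprint"
    if effort == "medium effort":
        return "next quarter"
    return "evaluate ROI as a dedicated project"  # major project


print(roadmap_decision(9, "quick fix"))  # this sprint
print(roadmap_decision(7, "quick fix"))  # do not build
```

Note the ordering of the checks: the score gate comes first, so an easy, low-scoring feature is rejected before effort is even considered.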

The output is a roadmap where every item advances a specific outcome for the core job. There are no orphaned features on this roadmap. Every feature has a job, an outcome, and a score that justifies its place.

For teams that want to go deeper into the combined JTBD and Kano framework for feature categorization, see our JTBD and Kano workshop guide, which extends this process with Kano categorization for each outcome.

The insight: A roadmap built from opportunity scores has no orphans. Every feature ties to a job, an outcome, and a number.

What This Looks Like in Practice: A Real Workshop Pattern

Patterns repeat. The good teams do the same things. The struggling teams repeat the same mistakes.

After running this workshop format across multiple SaaS teams, clear patterns emerge. The teams that get the most value from the workshop share specific characteristics, and the teams that struggle share different ones. Understanding these patterns helps you avoid the most common failure modes before they happen.

Pattern 1: Teams That Bring Data Win

Teams that arrive with 5-10 completed switch interviews finish the workshop with a clear job statement and a prioritized roadmap. Teams that arrive without interview data spend the entire 2.5 hours debating what the job might be. The difference is not subtle. One team leaves with alignment. The other leaves with more confusion than they started with.

The insight: Data before the workshop produces alignment after it. No data before the workshop produces debate that never ends.

Pattern 2: The Job Statement Changes Minds

In nearly every workshop, at least one team member realizes that a feature they have been championing does not advance the core job. The job statement acts as a filter that makes some features obviously irrelevant. This is uncomfortable in the moment and liberating in hindsight. It is also the single biggest source of roadmap clarity the team will experience all year.

The insight: The most valuable feature cut is the one your most vocal champion wanted built. If nobody loses a feature, the job statement is too broad.

Pattern 3: Outcome Scores Resolve Conflicts

When two team members disagree about which feature to build next, the opportunity score settles the debate. The feature that advances a higher-scoring outcome wins. This replaces political arguments with arithmetic. Teams that commit to this discipline ship faster because they stop re-litigating the same decisions every sprint.

The insight: Arithmetic beats politics. Every time. The team that learns this ships twice as fast.

The workshop does not produce a roadmap. It produces a team that agrees on the job. The roadmap is just the artifact.

$75,000

Estimated wasted engineering spend per sprint when 50% of features do not advance the core job. A single JTBD workshop eliminates this waste by creating a shared prioritization framework.

| Dimension | Workshop with Data | Workshop without Data |
|---|---|---|
| Time to job statement | 15-20 minutes | 45+ minutes of debate |
| Team alignment after workshop | High: grounded in customer data | Low: based on opinions |
| Roadmap confidence | Every item ties to an outcome score | Items tied to loudest voices |
| Follow-through rate | 80%+ of prioritized items get built | 30-40%; priorities shift weekly |

These patterns are why the prerequisite of completed switch interviews is non-negotiable. The workshop is a synthesis exercise, not a discovery exercise. If you need help running the interviews, our switch interview script gives you the exact questions to ask.

Related Offer

Done-With-You JTBD Workshop Setup

Get the complete JTBD and Kano Workshop package: 6 exercises, FigJam board, outcome scoring spreadsheet, and synthesis playbook. Self-guided with everything you need to run this yourself.

What to Do Instead of Your Current Prioritization Process

Four changes. One discipline. No more guessing.

If your team is currently prioritizing features based on executive opinions, customer requests without context, or competitive feature checklists, here is the right approach.

  • Run switch interviews before every workshop. Complete 5-10 interviews with retained customers. Use the timeline reconstruction method to capture the full story of why they chose your product. This is the only valid input for a JTBD workshop.
  • Use opportunity scoring instead of voting. When the team debates which feature to build, calculate the opportunity score for the outcome it advances. The score replaces political arguments with a number everyone agreed on during the workshop.
  • Filter every feature request through the job statement. When a stakeholder requests a feature, ask which job outcome it advances. If it does not advance one, it does not make the roadmap. This is not harsh. It is disciplined.
  • Re-run the workshop at most quarterly, not monthly. The core job does not change that fast. Running the same exercise too often produces diminishing returns as the team starts manufacturing answers. Between workshops, use the job statement as a filter for every feature request.

The alternative to the JTBD workshop is not nothing. It is whatever prioritization process you are using right now, and that process is already producing a roadmap. The question is whether that roadmap advances your core job or not. The workshop gives you a way to know.

For teams that want to layer Kano analysis on top of JTBD outcomes to further categorize features as must-haves versus delighters, our Kano analysis guide walks through the combined workflow.

FAQ

How often should we run a JTBD workshop?

Every 6-12 months, or whenever you launch a major new product area. Between workshops, use the job statement as a filter for every feature request. Do not run workshops more frequently than quarterly because the job does not change that fast. Running the same exercise too often produces diminishing returns as the team starts manufacturing answers.

The insight: If you run workshops more than quarterly, you are manufacturing answers from fatigue, not discovering them from data.

Can we run this workshop remotely?

Yes. Use Miro or FigJam for the whiteboard, breakout rooms for interview sharing, and a shared spreadsheet for outcome scoring. Remote workshops often work better than in-person ones: everyone can see the board simultaneously, the output is automatically documented, and quieter team members contribute more freely through the digital interface. We have run more remote JTBD workshops than in-person ones, and the remote sessions have consistently produced better outputs.

The insight: Remote workshops produce better documentation by default. The board is the output, not a photo of a whiteboard.

What if our product serves multiple jobs?

Run a separate workshop for each job. But first validate that these are genuinely different jobs and not the same job with different contexts. Most products serve 1 core job and 2-4 secondary jobs. Start with the core job and expand to secondary jobs once the team has mastered the process for the primary one.

The insight: Most "multiple jobs" are actually one job with different contexts. Validate before you schedule extra workshops.

How do I calculate the opportunity score?

The formula is: Importance + max(Importance minus Satisfaction, 0). Rate each desired outcome 1-5 on importance and 1-5 on satisfaction. Outcomes scoring above 8 are your highest-priority gaps. They matter to customers and you are not serving them well. Outcomes below 6 are already well-served and do not need more investment.

The insight: The opportunity score tells you where to invest and where to stop. Above 8: build. Below 6: stop.

What if someone disagrees with the job statement after the workshop?

Point them to the interview data. The job statement is not an opinion. It is a synthesis of what 5-10 retained customers said they were trying to accomplish. If a team member disagrees, the remedy is not debate but another round of interviews. If the new interviews support a different job statement, update it. If they do not, the data wins.

The insight: Disagreement with the job statement is disagreement with the data. Run more interviews. The data always wins.


Jake McMahon

About the Author

Jake McMahon builds growth infrastructure for B2B SaaS companies: analytics, experimentation, and predictive modeling that turns product data into revenue decisions. He has facilitated JTBD workshops across multiple engagements, connecting the outputs directly to product roadmaps and pricing design. In one engagement, a single JTBD workshop reduced a 47-item backlog to 9 prioritized features, all tied to a single core job statement. He writes about the frameworks and tools he uses to help product teams stop guessing and start serving jobs. Read the complete JTBD guide for the full framework.

Next Step

Run Your First JTBD Workshop This Month

Get the complete JTBD and Kano Workshop package with everything you need: 6 exercises, a FigJam board, outcome scoring spreadsheet, and synthesis playbook. Self-guided, ready to run.