Case Study — Healthcare SaaS · JTBD Research

Their JTBD framework had 27 jobs. We coded 60 real sales calls. Found 85 more they didn’t know existed.

A healthcare forms platform assumed they knew what buyers hired them to do. Structured coding of their sales pipeline against the JTBD framework revealed the assumptions were off by 3x — and that the #2 priority feature had been overestimated by 45%.

60
Calls coded
85+
New jobs discovered
45%
Feature overestimation corrected
27→112+
Total jobs in revised framework
Stack: Python · JTBD · Kano · Firecrawl

Before.

The client had built a JTBD framework before we engaged — 27 jobs, defined persona hypotheses, feature priorities based on team intuition. The sales team was converting at an 18% blended win rate, below the 25% healthcare SaaS benchmark. The Essential Plan was outperforming at 38% — twice the overall rate — but nobody had asked why.

Feature roadmap decisions were being made against an unvalidated framework. Three key problems: the framework was built from memory of sales conversations, not from coded transcripts; persona distribution was estimated, not counted; and feature priority rankings assumed deals broke on integration compatibility — a claim nobody had checked against the data.

The Situation
  • 27 assumed jobs — derived from team recall, not structured analysis
  • Feature priority built on gut: Integration ranked #2, HIPAA at #3
  • Persona mix assumed: physician-led buying at 38–52% of pipeline
  • No frequency data on what actually appeared in conversations that closed

What we did.

Systematic coding of every sales call against the JTBD framework — then validated feature priorities and persona distribution against what the data actually showed.

Step 1 — Sales Call Coverage Planning
Selected 60 of 82 available calls (73% coverage). Excluded 22 calls: 8 lacked recordings, 9 were under 4 minutes (insufficient signal), 5 were duplicate contacts. Built a coding schema with 47 variables per call: job mentions (Y/N + severity), persona type, feature requests, deal outcome, segment type, and objection categories.
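The coding schema described above can be sketched as a small record type. This is a minimal illustration only — the field names are assumptions, and the real schema carried 47 variables per call:

```python
from dataclasses import dataclass, field

# Illustrative subset of the per-call coding schema (field names are assumptions).
@dataclass
class CodedCall:
    call_id: str
    persona: str                                # e.g. "office_manager", "physician"
    jobs: dict = field(default_factory=dict)    # job -> severity: "blocking" / "important" / "nice-to-have"
    feature_requests: list = field(default_factory=list)
    outcome: str = "open"                       # "won" / "lost" / "open"
    segment: str = ""
    objections: list = field(default_factory=list)

call = CodedCall(call_id="c001", persona="office_manager",
                 jobs={"manual_entry": "blocking"}, outcome="won")
```

Keeping every call in one flat record like this is what makes the later frequency and outcome cross-referencing a simple aggregation rather than a manual review.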
Step 2 — Call Coding with Kano Classification
Each call coded independently against the 27-job framework. For every job mention: presence (Y/N), severity (blocking / important / nice-to-have), persona (who raised it), deal correlation (did it appear more in wins vs losses). All 40+ features simultaneously classified using the Kano model — must-haves (threshold requirements), performance attributes (more is better), and delighters (unexpected value). Used Python to track co-occurrence patterns: which jobs appeared together in closing vs lost deals.
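The co-occurrence tracking works along these lines — a minimal sketch assuming each coded call carries its job list and outcome (the job names here are illustrative, not the client's real codes):

```python
from collections import Counter
from itertools import combinations

# Illustrative coded calls: jobs mentioned plus the deal outcome.
calls = [
    {"jobs": ["manual_entry", "hipaa"], "won": True},
    {"jobs": ["manual_entry", "integration"], "won": False},
    {"jobs": ["manual_entry", "hipaa", "form_packets"], "won": True},
]

def co_occurrence(calls, won):
    """Count job pairs appearing together in won (or lost) calls."""
    pairs = Counter()
    for call in calls:
        if call["won"] != won:
            continue
        # Sorted, deduplicated jobs so (a, b) and (b, a) count as one pair.
        for pair in combinations(sorted(set(call["jobs"])), 2):
            pairs[pair] += 1
    return pairs

won_pairs = co_occurrence(calls, won=True)
lost_pairs = co_occurrence(calls, won=False)
```

Comparing `won_pairs` against `lost_pairs` surfaces which job combinations correlate with closing — the signal behind the co-occurrence claim above.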
Step 3 — Framework Gap Analysis
After coding, ran frequency distribution across all calls. Discovered the original 27 jobs accounted for only a subset of what buyers actually discussed. New job categories emerged from patterns: paper-to-digital migration (57% frequency — not in the original framework at all), form packet bundling (22%), mobile completion anxiety (67% implicit), multi-location synchronisation (35% in Enterprise calls). Total jobs after structured coding: 112+, with 85+ new to the framework.
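The gap analysis above reduces to a frequency count over the coded calls, flagging any job that appears in transcripts but not in the original framework. A hedged sketch with made-up job codes:

```python
from collections import Counter

# Illustrative: jobs observed per call vs the original framework's job set.
original_framework = {"manual_entry", "hipaa", "integration"}
coded_calls = [
    ["manual_entry", "paper_migration"],
    ["manual_entry", "hipaa", "paper_migration"],
    ["manual_entry", "mobile_anxiety"],
]

# Count each job once per call, then convert to a frequency share.
mentions = Counter(job for call in coded_calls for job in set(call))
frequency = {job: n / len(coded_calls) for job, n in mentions.items()}

# Jobs buyers raise that the framework never named.
new_jobs = {job for job in frequency if job not in original_framework}
```

Run at scale, this is how jobs like paper-to-digital migration (57%) surfaced despite being absent from the original 27.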
Step 4 — Feature Priority Validation
Cross-referenced feature request frequency against deal outcomes. Integration compatibility (ranked #2): appeared in 52% of calls but as a deal-breaker in only 13%. The team had been treating a common discussion point as a primary close-blocker. Manual data entry elimination (ranked #1): confirmed at 100% frequency — the only job that appeared in every single recorded call. HIPAA compliance: elevated from #3 to co-#1 at 97% — nearly universal, but previously treated as assumed table stakes.
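The mention-rate versus deal-breaker distinction that demoted Integration can be computed directly from the severity codes. A minimal sketch, assuming severity is recorded per call (`None` when the feature wasn't raised):

```python
# Illustrative severity codes for one feature across four calls,
# mirroring the "52% mentioned vs 13% blocking" distinction.
calls = [
    {"integration": "blocking"},
    {"integration": "nice-to-have"},
    {"integration": None},  # not raised at all
    {"integration": "important"},
]

# Share of calls where the feature came up at all.
mention_rate = sum(c["integration"] is not None for c in calls) / len(calls)
# Share of calls where it was coded as a close-blocker.
blocking_rate = sum(c["integration"] == "blocking" for c in calls) / len(calls)
```

A large gap between the two rates is exactly the signature of a common discussion point being mistaken for a primary close condition.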
Step 5 — Persona Distribution Audit
Counted actual persona matches across all 60 calls. Office managers and practice administrators: 60% of pipeline — not the 38–52% assumed. Physician-led buyers: 17%, not the 38–52% assumed (overestimated by 2–3x). Startup practice founders: 7% — not zero as originally assumed, but also not the 52% in one prior estimate. Multi-location operators: 15% of calls.
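The persona audit is a straight tally rather than an estimate — counted shares replace assumed ranges. A sketch with an illustrative ten-call sample:

```python
from collections import Counter

# Illustrative persona codes, one per coded call.
personas = (["office_manager"] * 6 + ["physician"] * 2
            + ["founder"] + ["multi_location"])

counts = Counter(personas)
share = {p: n / len(personas) for p, n in counts.items()}
```

The same tally over the real 60 calls is what replaced the assumed 38–52% physician share with the measured 17%.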
Step 6 — Moat Analysis Against 35 Competitors
Mapped 2 unique differentiators against the full competitive set. Synced Forms (multi-location synchronisation) — zero of 35 competitors offer an equivalent. Drawing Tools (clinical annotation) — 1 of 35 competitors offers any version. Form Packets — 4 of 35 competitors offer it; the client’s implementation is more advanced. These weren’t marketing claims — they were validated against coded competitor feature lists cross-referenced with G2, Capterra, and product documentation scraped via Firecrawl.

After.

112+
Total jobs in revised JTBD framework — was 27, a 4x expansion
100%
Manual entry elimination frequency — confirmed as the universal baseline job across every call
45%
Integration feature overestimation corrected — claimed 88%, actual 43%. Demoted #2 → #4
38%
Essential Plan win rate discovered — 2x the blended rate, a structural insight the team had missed
2
Unique differentiators confirmed against 35 competitors — Synced Forms and Drawing Tools, verified not assumed
13%
Actual deal-blocking rate for Integration features — was being treated as the primary close condition

What you can do now.

Your roadmap is now sequenced against actual buyer signal, not assumed priorities. Integration dropped from #2 to #4 — not because it’s unimportant, but because while 52% of calls mentioned it, only 13% treated it as a close condition. HIPAA co-leads the roadmap because it appeared in 97% of calls across every segment.

Your sales team has a validated qualification framework. The Essential Plan converts at 38% — twice the overall rate — because the buyer profile (single-location office manager, self-serve ready) is structurally different from the mid-market buyer your sales process was built around. Those two need different motions.

Your competitive positioning is grounded in verified differentiation. Synced Forms isn’t a feature — it’s a structural moat. Zero of 35 competitors offer it. That’s not a claim. It came from systematic competitor analysis, not marketing assumptions.

Jake McMahon
ProductQuant

10 years building growth systems for B2B SaaS companies at $1M–$50M ARR. BSc Behavioural Psychology, MSc Data Science. This engagement required building a structured coding schema across 60 sales calls, resolving contradictory persona assumptions, and cross-referencing feature priorities against deal outcomes to separate signal from noise.

What this looks like for your company

The Foundation.

A six-week engagement that includes JTBD research as one layer of a full growth infrastructure build — connecting customer job data to product roadmap, analytics, and positioning.

  • Structured JTBD analysis: sales call coding, job frequency validation, persona accuracy audit
  • Full analytics audit with 5–10 biggest revenue gaps sized and implementation-ready
  • Churn prediction model trained on your data; weekly at-risk list from week one
  • Competitive intelligence: 15+ competitors mapped with ongoing monitoring system
  • Full handover documentation; your team runs everything independently from day one
$15,000–$25,000 · 6 weeks
Right for you if
  • Product roadmap built on assumed customer needs rather than validated jobs-to-be-done
  • Sales call data sitting unused — no systematic process to extract product intelligence from it
  • Want research that connects directly to roadmap decisions, not a standalone report

Built on assumptions you’ve never validated?

Most JTBD frameworks are built once, from memory. Then used for years without being tested against the data that would confirm or contradict them. A structured validation engagement typically takes 3–4 weeks. The conversation to decide if it’s relevant takes 15 minutes.