PRICING AUDIT

Jake McMahon — ProductQuant
8+ years B2B SaaS · Behavioural Psychology + Big Data (Masters)

Your pricing is either capturing the value you create — or it isn’t. This sprint tells you which, and what to change.

Know whether your pricing captures the value you create, what to change if it doesn't, and how to test it before your customers notice. You finish with a clear recommendation and a ready-to-run experiment design.

Fixed price · experiment design included · clear recommendation or full refund

WHAT YOU HAVE AT THE END

  • Tier usage picture: which features drive upgrades — and which ones customers barely touch
  • Willingness-to-pay analysis: what customers will pay more for, and what they expect regardless of tier
  • Upgrade & churn signals: behavioural patterns from Stripe — timing, downgrade triggers, trial conversion
  • Restructure scenarios: 2–3 options with directional impact and an experiment design for each
  • 90-min strategy readout: live session with your CPO and growth lead, plus session recording


We audit your pricing and tell you what to change.

You get a clear report showing where you're leaving money on the table, which tiers aren't working, and exactly what to adjust.

REVENUE LEAK

Your best customers are all on your cheapest plan.

We find that your highest-usage accounts pay the same as everyone else. You get a specific recommendation for a usage-based tier that captures the value you're already delivering.

CONVERSION DROP

Prospects visit your pricing page and leave without signing up.

We analyse where visitors drop off and which plan names or feature lists confuse them. You get a redesigned pricing structure that removes the friction.

EXPANSION REVENUE

Existing customers never upgrade because there's no reason to.

We identify the usage thresholds where customers should naturally move up. You get upgrade triggers and packaging changes that make expansion feel obvious, not pushy.

COMPETITIVE RISK

Your sales team keeps losing deals on price.

We compare your packaging against what buyers actually value. You get a positioning shift that reframes the conversation from cost to outcome — so price becomes secondary.

A CLEAR RECOMMENDATION
Change or don’t

You leave knowing whether to change your pricing, what to change, and what it’s worth to test. Not a list of options with no priority.

TEST BEFORE YOU COMMIT
Every scenario

Pricing changes affect every existing customer. Each restructure scenario comes with an experiment design so you validate direction before committing.

FIXED PRICE
No retainer

Scoped at kickoff. No retainer. Everything included: usage analysis, customer research, restructure scenarios, experiment designs, and readout.

Teams Jake has worked with

Gainify
Guardio
monday.com
Payoneer
thirdweb
Canary Mail
CircleUp

WHAT IS USUALLY GOING ON

Usage is growing but expansion revenue isn’t following

“Our heaviest users are on the mid-tier and they’re not upgrading. We don’t know if that’s because they don’t need more, or because the top tier doesn’t offer them enough to justify the jump.”

VP Product — B2B SaaS

Free plan users are active but paid conversion is lower than expected

“Free users love the product. They’re in it every day. But they’re not converting to paid at the rate we expected. We think we might have gated the wrong things, or not gated enough.”

Head of Growth — Series A SaaS

Enterprise deals always go custom — the tier structure doesn’t fit mid-market

“Every deal above a certain size ends up as a custom quote. The tiers work for SMB but nothing fits the buyers we actually want. Our pricing page is basically a starting point for negotiation.”

CEO — B2B SaaS, Series B

Pricing was set at launch and the product has changed significantly since

“We launched with a three-tier model and haven’t touched it since. The product looks nothing like what we launched. I don’t know if the pricing still makes sense and I’m not sure how to find out.”

CPO — B2B SaaS, Seed–Series A

WHY THIS IS DIFFERENT

Pricing changes based on instinct or abstract frameworks often miss what your customers actually value. This audit starts from what they actually do.

A pricing framework — good-better-best, value metric, usage-based — is rational in the abstract. But if it's not grounded in your customers' actual behaviour, it can solve for a problem they don't have or miss the pattern that drives upgrades in your specific product. Frameworks tell you how pricing should work. Your usage data tells you how it does work.

This audit starts with your feature usage by tier and your Stripe transaction history. It then runs customer research to find out what users value enough to pay more for, and what they consider baseline expectations regardless of plan. The restructure scenarios are built from those findings — and because changing pricing affects every existing customer, each scenario comes with a conservative experiment design so you validate direction before committing to a single change.

TIMELINE

From your current usage to a ready-to-run test.

WEEK 1

Usage + Research

Read-only access to your analytics tool and Stripe. Feature usage mapped by tier. Upgrade and downgrade history reviewed. Customer interviews and a feature classification survey run in parallel to surface what users value most and what they’ll pay more for.

WEEK 2

Scenarios + Designs

2–3 restructure scenarios built from the analysis, each directionally sized for impact. Experiment design written for the safest scenario to test first — segment, variant, measurement period, success metrics, rollback plan.

END OF SPRINT

Readout + Handover

90-minute live session with your CPO and growth lead. Every finding walked through with the data behind it. Scenarios reviewed. Which one to test first agreed before the call ends. Session recorded and shared.

Sprint complete: your team knows whether to change pricing, what to change, and how to test it.

WHAT YOU GET

Six deliverables. The complete picture from current usage to ready-to-run experiment design.

Week 1 · Usage
Current Pricing Performance Analysis

Where your tier structure matches how customers actually use the product — and where it doesn’t. Which features drive upgrades. Which tiers are absorbing users who probably belong elsewhere. Where the gap between usage and revenue is widest.

  • Feature usage by plan tier — where are the heaviest users concentrated?
  • Which features correlate with upgrade behaviour vs. which ones don’t
  • Plans where users are consistently over- or under-using relative to their tier
  • Stripe signals: upgrade timing, downgrade triggers, trial conversion patterns
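
To make the correlation step concrete, here is a minimal sketch of the kind of lift calculation involved. The field names and account records are hypothetical illustrations, not the audit's actual tooling:

```python
# Illustrative sketch only — field names and data are hypothetical,
# not the audit's actual tooling or data model.
from collections import defaultdict

def feature_upgrade_lift(accounts):
    """For each feature, compare the upgrade rate among accounts that
    use it against the overall upgrade rate. Lift > 1 suggests the
    feature correlates with upgrade behaviour; lift near 0 suggests
    it doesn't."""
    overall = sum(a["upgraded"] for a in accounts) / len(accounts)
    flags_by_feature = defaultdict(list)
    for a in accounts:
        for f in a["features_used"]:
            flags_by_feature[f].append(a["upgraded"])
    return {
        f: (sum(flags) / len(flags)) / overall if overall else 0.0
        for f, flags in flags_by_feature.items()
    }

# Toy data: 2 of 4 accounts upgraded (overall rate 0.5)
accounts = [
    {"features_used": {"api", "exports"}, "upgraded": True},
    {"features_used": {"exports"}, "upgraded": False},
    {"features_used": {"api"}, "upgraded": True},
    {"features_used": {"dashboards"}, "upgraded": False},
]
lift = feature_upgrade_lift(accounts)
# Every "api" user upgraded → lift 2.0; no "dashboards" user did → lift 0.0
```

Correlation of this kind is directional, not causal — which is exactly why the findings feed scenarios and experiment designs rather than immediate changes.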
Week 1 · Customer Research
Jobs-to-be-Done Mapping

What customers are actually hiring your product to do, and which features deliver the outcomes they care about most. Based on 3–5 structured interviews with current customers across your tier mix. The map often shows that the job your product is being hired for differs from what your pricing structure is built around.

  • Primary jobs: outcomes customers are trying to achieve, ranked by importance
  • Which features are essential to the primary job vs. which are peripheral
  • Where the current tier structure enables the primary job vs. creates friction
  • Upgrade motivation: what change would make a customer willing to pay more
Week 1 · Feature Classification
Willingness-to-Pay Analysis

A structured survey deployed to your user base to classify your feature set: which features customers would pay more to access, which they consider baseline expectations regardless of tier, and which scale naturally with usage. The classification determines where gating makes sense and where it creates churn risk.

  • Features customers would upgrade to access — the logical gate candidates
  • Features customers expect at every tier — gating these creates objections, not upgrades
  • Features that scale with usage — suited to usage-based tiers
  • Where the current structure mismatches what customers are willing to pay for
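
The classification tally behind a survey like this can be sketched in a few lines. The answer categories, feature names, and majority threshold below are illustrative assumptions, not the audit's actual methodology:

```python
# Hypothetical sketch of a feature-classification tally.
# Categories, feature names, and the 50% threshold are illustrative.
from collections import Counter

def classify_features(responses, threshold=0.5):
    """responses: {feature: list of answers, each one of
    "pay_more" | "expected" | "scales"}. A feature takes whichever
    label clears the threshold share of answers; otherwise it is
    flagged ambiguous and excluded from gating decisions."""
    out = {}
    for feature, answers in responses.items():
        label, n = Counter(answers).most_common(1)[0]
        out[feature] = label if n / len(answers) >= threshold else "ambiguous"
    return out

survey = {
    "sso":       ["pay_more", "pay_more", "expected"],
    "audit_log": ["expected", "expected", "expected"],
    "api_calls": ["scales", "scales", "pay_more"],
    "dark_mode": ["expected", "pay_more", "scales"],
}
labels = classify_features(survey)
# "sso" → gate candidate; "audit_log" → baseline expectation;
# "api_calls" → usage-scaled; "dark_mode" → ambiguous (no majority)
```

The "ambiguous" bucket matters: a feature with no clear majority is a poor gating candidate either way, and usually warrants a follow-up interview question rather than a pricing decision.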
Week 2 · Scenarios
2–3 Pricing Restructure Scenarios

Restructure options built from the usage analysis and customer research findings, checked against each other for risk and impact. Each is directionally sized and designed to be tested before committing. No fabricated revenue projections — direction and mechanism, grounded in your actual data.

  • What gates to move, what to surface differently, what to shift to usage-based pricing
  • Directional impact for each scenario: who upgrades, what the expansion motion looks like
  • Which scenario is safest to test first and why
  • Risks and dependencies for each scenario clearly stated
Week 2 · Testing
Experiment Designs for Each Scenario

One experiment design per scenario, so your team can run a controlled test rather than committing a pricing change to your full customer base. Every pricing change is irreversible for the customers who experience it. This gives you the test design before you open the door.

  • Which segment to test, what variant to run, what measurement period is needed
  • Success metrics: what movement in upgrade rate or MRR confirms the direction
  • Guardrail metrics: what not to break in the process
  • Rollback plan: how to revert if the test produces an unexpected result
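
To show why the measurement period has to be sized rather than guessed, here is a back-of-envelope power calculation using the standard normal approximation for comparing two proportions. The upgrade rates are hypothetical numbers for illustration, not figures from any audit:

```python
# Back-of-envelope sample-size sketch for a two-arm pricing test.
# The 4% baseline and 6% target upgrade rates are illustrative only.
import math

def n_per_arm(p_base, p_target, z_alpha=1.96, z_power=0.8416):
    """Normal-approximation sample size per arm to detect a shift
    from p_base to p_target (two-sided alpha = 0.05, power = 0.80)."""
    p_bar = (p_base + p_target) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_power * math.sqrt(p_base * (1 - p_base)
                                 + p_target * (1 - p_target))) ** 2
    return math.ceil(num / (p_target - p_base) ** 2)

# e.g. detecting a move in monthly upgrade rate from 4% to 6%
n = n_per_arm(0.04, 0.06)
```

With these illustrative rates the answer lands around 1,850–1,900 accounts per arm — which is the point: on a smaller customer base, a credible test means either a longer measurement window or a larger minimum detectable effect, and that trade-off belongs in the experiment design, not in a post-hoc justification.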
Week 2 · Readout
90-Minute Strategy Readout

A live session with your CPO and growth lead. Usage findings and customer research walked through with the data behind them. Restructure scenarios reviewed and discussed. Your team leaves knowing which scenario to test first, how to run the test, and what to measure to confirm the direction is right.

  • Usage analysis and customer research explained with full context
  • Each restructure scenario walked through with rationale and risks
  • Test priority agreed: which scenario to run first and why
  • Session recorded and shared for anyone not in the room

On pricing changes and existing customers: every restructure scenario in this audit is designed to be testable before it applies to your full customer base. Pricing changes affect every existing customer simultaneously — that’s why the experiment design comes before the decision, not after.

FIT CHECK

Who gets the most from this audit — and who it doesn’t apply to.

This is for you if
  • B2B SaaS with an existing tier structure that hasn’t been stress-tested against usage data
  • Conversion to paid or expansion revenue is lower than your feature engagement suggests it should be
  • Enterprise deals routinely go custom — the standard tiers don’t fit the buyers you want
  • You suspect pricing is leaving money on the table but don’t have the analysis to confirm it
  • You have some analytics data and Stripe history to work from
This is not for you if
  • Pre-launch — without usage data, the tier analysis has nothing to work from
  • No existing analytics data and no Stripe history — the quantitative side of the audit won’t run
  • You want projected ARR impact before the test runs — the scenarios are directional, not forecast models
  • You want the pricing change implemented for you — the audit delivers the analysis and test design; your team runs the change
Jake McMahon — ProductQuant
8+ years B2B SaaS · Behavioural Psychology + Big Data (Masters)

I run this audit myself. Pricing decisions are among the highest-stakes changes a product team makes — they affect every existing customer and every future prospect simultaneously. The analysis has to be grounded in what your customers actually do, not in what a pricing model says they should do.

The output is designed for a CPO conversation, not a finance model. The scenarios are directional, not projected. The experiment design is conservative — because the goal is to validate the direction before committing, not to justify a decision already made.

I won’t do this:
  • Project specific ARR impact from a pricing change — the variables are too specific to your customer mix to model reliably before a test
  • Recommend a restructure without an experiment design to validate it first
  • Apply a generic pricing framework without checking it against your usage data
  • Recommend wholesale pricing changes that require board approval before the direction is tested
  • Implement the pricing change in Stripe — the audit delivers the analysis and test design; your team executes


PRICING

One price. Everything your team needs to act on.

$3,997–$5,997
one-time · fixed price
Scoped at kickoff · 2–3-week sprint
  • Current pricing performance analysis (tier usage, feature correlation, Stripe signals)
  • Jobs-to-be-done mapping (3–5 structured customer interviews)
  • Willingness-to-pay analysis (feature classification survey + analysis)
  • 2–3 pricing restructure scenarios with directional impact
  • Experiment design for each scenario (segment, variant, measurement, rollback)
  • 90-minute strategy readout with CPO and growth lead + recording
  • All assets stay with your team permanently

Everything built from your usage data and customer research. No templates applied out of context.

Book a 30-minute call →
Guarantee: If we don't deliver a clear recommendation on whether to change your pricing, what to change, and what it's worth to test — backed by your usage data and customer research — you get a full refund.

Questions.

Or book a call →
What data do you need from us? +
Read-only access to your analytics tool (PostHog, Amplitude, Mixpanel, or similar) and Stripe. The analytics access covers feature usage by plan tier. Stripe provides behavioural signals: upgrade timing, downgrade triggers, and trial conversion patterns. If your instrumentation is limited, we work with what exists and note what to add. The customer research components — structured interviews and the feature classification survey — run regardless of instrumentation quality. We handle recruitment and facilitation from your existing customer base.
We don’t have usage analytics set up. Can we still run this? +
Partially. Without feature usage data, the quantitative tier analysis shifts to focus on Stripe signals (upgrade timing, trial conversion, downgrade triggers) and qualitative customer research. The jobs-to-be-done and willingness-to-pay components run as normal. The usage analysis is lighter but still produces useful findings about where the pricing gaps are. We also document exactly what instrumentation to add so the next pricing review has the full data foundation.
How is this different from hiring a pricing consultant? +
This audit starts from your actual usage data, behavioural signals from Stripe, and structured research into what your specific customers are willing to pay more for. The scenarios are built from your upgrade patterns and feature usage, not from an abstract framework. And because the stakes of a pricing change are high, every scenario comes with a conservative experiment design you run before committing.
Do you implement the pricing change? +
No. The audit delivers the analysis, the scenarios, and the experiment design. Your team implements the change in Stripe and runs the test. If you want help designing a more complex pricing test — multi-cell, with control groups and measurement infrastructure — that’s a separate engagement. The audit gives you everything you need to run the test; implementation is your team’s work.
Will the scenarios include projected ARR impact? +
No. The scenarios use directional language throughout. Pricing impact depends on your specific customer mix, churn dynamics, and competitive context — variables that can’t be modelled reliably before a test runs. The scenarios show you the direction and the mechanism. The experiment design tells you how to measure whether the direction is right. Presenting a projected ARR lift before the test would be misleading.
What’s the guarantee? +
A clear recommendation on whether to change your pricing, what to change, and what it’s worth to test — backed by your usage data and customer research — or full refund. If the data can’t support a directional recommendation, we tell you in week 1 and scope what’s possible. We don’t reach the end of the sprint and deliver something that doesn’t answer the core question.

Know whether your pricing is capturing the value you create — and what to do if it isn’t.

Your tier usage mapped. Customer research done. Restructure scenarios built from your data. An experiment design so you validate the direction before your first customer notices a change.