PRICING AUDIT
Know whether your pricing captures the value you create, what to change if it doesn't, and how to test it before your customers notice. You finish with a clear recommendation and a ready-to-run experiment design.
Fixed price · experiment design included · clear recommendation or full refund
WHAT YOU HAVE AT THE END
You get a clear report showing where you're leaving money on the table, which tiers aren't working, and exactly what to adjust.
REVENUE LEAK
Your best customers are all on your cheapest plan.
We find where your highest-usage accounts pay the same as everyone else. You get a specific recommendation for a usage-based tier that captures the value you're already delivering.
CONVERSION DROP
Prospects visit your pricing page and leave without signing up.
We analyse where visitors drop off and which plan names or feature lists confuse them. You get a redesigned pricing structure that removes the friction.
EXPANSION REVENUE
Existing customers never upgrade because there's no reason to.
We identify the usage thresholds where customers should naturally move up. You get upgrade triggers and packaging changes that make expansion feel obvious, not pushy.
COMPETITIVE RISK
Your sales team keeps losing deals on price.
We compare your packaging against what buyers actually value. You get a positioning shift that reframes the conversation from cost to outcome — so price becomes secondary.
You leave knowing whether to change your pricing, what to change, and what it’s worth to test. Not a list of options with no priority.
Pricing changes affect every existing customer. Each restructure scenario comes with an experiment design so you validate direction before committing.
Scoped at kickoff. No retainer. Everything included: usage analysis, customer research, restructure scenarios, experiment designs, and readout.
Teams Jake has worked with
WHAT IS USUALLY GOING ON
Usage is growing but expansion revenue isn’t following
“Our heaviest users are on the mid-tier and they’re not upgrading. We don’t know if that’s because they don’t need more, or because the top tier doesn’t offer them enough to justify the jump.”
VP Product — B2B SaaS
Free plan users are active but paid conversion is lower than expected
“Free users love the product. They’re in it every day. But they’re not converting to paid at the rate we expected. We think we might have gated the wrong things, or not gated enough.”
Head of Growth — Series A SaaS
Enterprise deals always go custom — the tier structure doesn’t fit mid-market
“Every deal above a certain size ends up as a custom quote. The tiers work for SMB but nothing fits the buyers we actually want. Our pricing page is basically a starting point for negotiation.”
CEO — B2B SaaS, Series B
Pricing was set at launch and the product has changed significantly since
“We launched with a three-tier model and haven’t touched it since. The product looks nothing like what we launched. I don’t know if the pricing still makes sense and I’m not sure how to find out.”
CPO — B2B SaaS, Seed–Series A
WHY THIS IS DIFFERENT
Pricing changes made on instinct, or lifted from abstract frameworks, often miss what your customers actually value. This audit starts from what they actually do.
A pricing framework — good-better-best, value metric, usage-based — is rational in the abstract. But if it's not grounded in your customers' actual behaviour, it can solve for a problem they don't have or miss the pattern that drives upgrades in your specific product. Frameworks tell you how pricing should work. Your usage data tells you how it does work.
This audit starts with your feature usage by tier and your Stripe transaction history. It then runs customer research to find out what users value enough to pay more for, and what they consider baseline expectations regardless of plan. The restructure scenarios are built from those findings. And because changing pricing affects every existing customer, each scenario comes with a conservative experiment design so you can validate the direction before committing a single change.
TIMELINE
Read-only access to your analytics tool and Stripe. Feature usage mapped by tier. Upgrade and downgrade history reviewed. Customer interviews and a feature classification survey run in parallel to surface what users value most and what they’ll pay more for.
2–3 restructure scenarios built from the analysis, each directionally sized for impact. Experiment design written for the safest scenario to test first — segment, variant, measurement period, success metrics, rollback plan.
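To make the pieces of an experiment design concrete, here is a minimal sketch of what one could look like as a structured object. All field names and values are hypothetical illustrations, not the actual deliverable:

```python
from dataclasses import dataclass

@dataclass
class PricingExperiment:
    """Illustrative container for a pricing experiment design.

    Every value below is a made-up example, not a recommendation.
    """
    segment: str            # who sees the variant
    variant: str            # the pricing change under test
    measurement_days: int   # observation window before judging
    success_metrics: list   # what confirms the direction
    rollback_trigger: str   # condition that reverts the change

example = PricingExperiment(
    segment="new signups in one region",
    variant="usage-based tier replacing the mid tier",
    measurement_days=45,
    success_metrics=["trial-to-paid conversion", "net revenue per account"],
    rollback_trigger="conversion drops more than 10% vs control",
)
```

The point of writing the design down this explicitly is that every element (segment, variant, window, metrics, rollback) is agreed before the first customer sees a change.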
90-minute live session with your CPO and growth lead. Every finding walked through with the data behind it. Scenarios reviewed. Which one to test first agreed before the call ends. Session recorded and shared.
Sprint complete: your team knows whether to change pricing, what to change, and how to test it.
WHAT YOU GET
Where your tier structure matches how customers actually use the product — and where it doesn’t. Which features drive upgrades. Which tiers are absorbing users who probably belong elsewhere. Where the gap between usage and revenue is widest.
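The usage-to-revenue gap described above can be sketched in a few lines. The tier names and numbers here are invented for demonstration; the real analysis runs on your analytics and Stripe data:

```python
# Hypothetical per-tier averages: (monthly feature events, revenue) per account.
tiers = {
    "starter": (1200, 29),
    "growth":  (9500, 99),
    "scale":   (11000, 299),
}

# Usage delivered per dollar collected; a high ratio flags a tier
# absorbing heavy users who probably belong on a higher plan.
usage_per_dollar = {
    tier: events / revenue for tier, (events, revenue) in tiers.items()
}

widest_gap = max(usage_per_dollar, key=usage_per_dollar.get)
print(widest_gap)  # prints "growth" for the sample numbers above
```

In this toy example, the mid tier delivers far more usage per dollar than either neighbour, which is exactly the "users who probably belong elsewhere" pattern the audit surfaces.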
What customers are actually hiring your product to do, and which features deliver the outcomes they care about most. Based on 3–5 structured interviews with current customers across your tier mix. The map often shows that the job your product is being hired for differs from what your pricing structure is built around.
A structured survey deployed to your user base to classify your feature set: which features customers would pay more to access, which they consider baseline expectations regardless of tier, and which scale naturally with usage. The classification determines where gating makes sense and where it creates churn risk.
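A simple way to picture the classification step: tally each respondent's bucket choice per feature and take the majority. The feature names, buckets, and votes below are hypothetical, shown only to illustrate the mechanic:

```python
from collections import Counter

# Hypothetical survey responses: (feature, bucket chosen by respondent).
responses = [
    ("advanced_reports", "pay_more"),
    ("advanced_reports", "pay_more"),
    ("advanced_reports", "baseline"),
    ("sso", "baseline"),
    ("sso", "baseline"),
    ("api_calls", "scales_with_usage"),
    ("api_calls", "scales_with_usage"),
]

def classify(responses):
    """Assign each feature the bucket most respondents chose."""
    votes = {}
    for feature, bucket in responses:
        votes.setdefault(feature, Counter())[bucket] += 1
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

buckets = classify(responses)
# e.g. advanced_reports -> pay_more, sso -> baseline
```

Features landing in "pay_more" are gating candidates; features landing in "baseline" are churn risks if gated; "scales_with_usage" features point toward a usage-based value metric.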
Restructure options built from the usage analysis and customer research findings, checked against each other for risk and impact. Each is directionally sized and designed to be tested before committing. No fabricated revenue projections — direction and mechanism, grounded in your actual data.
One experiment design per scenario, so your team can run a controlled test rather than committing a pricing change to your full customer base. Every pricing change is irreversible for the customers who experience it. This gives you the test design before you open the door.
A live session with your CPO and growth lead. Usage findings and customer research walked through with the data behind them. Restructure scenarios reviewed and discussed. Your team leaves knowing which scenario to test first, how to run the test, and what to measure to confirm the direction is right.
On pricing changes and existing customers: every restructure scenario in this audit is designed to be testable before it applies to your full customer base. Pricing changes affect every existing customer simultaneously — that’s why the experiment design comes before the decision, not after.
FIT CHECK
Jake McMahon — ProductQuant
I run this audit myself. Pricing decisions are among the highest-stakes changes a product team makes — they affect every existing customer and every future prospect simultaneously. The analysis has to be grounded in what your customers actually do, not in what a pricing model says they should do.
The output is designed for a CPO conversation, not a finance model. The scenarios are directional, not projected. The experiment design is conservative — because the goal is to validate the direction before committing, not to justify a decision already made.
PRICING
Everything built from your usage data and customer research. No templates applied out of context.
Book a 30-minute call →
Your tier usage mapped. Customer research done. Restructure scenarios built from your data. An experiment design so you validate the direction before your first customer notices a change.