GROWTH LAB — MONTHLY RETAINER

Jake McMahon — ProductQuant
8+ years B2B SaaS · Behavioural Psychology + Big Data (Masters)

Product decisions backed by evidence that accumulates, not analysis that resets.

Growth LAB is a monthly retainer where Jake runs the analysis, experimentation, and churn prediction. Your team executes. Better activation, retained accounts, and confident product decisions — every month, not just after a one-time sprint.

3-month minimum engagement · Month-to-month after · Scope confirmed before start

WHAT RUNS EVERY MONTH

Experiment engine: hypotheses formed, tests designed, results analysed every cycle
Churn prediction: at-risk accounts surfaced before they cancel, with context for CS
Activation analysis: where signups stall and what to fix next, updated monthly
Competitive monitoring: pricing and positioning shifts tracked before your customers see them
Monthly Growth report: results from the previous month + recommended next moves

We build a monthly system for smarter product decisions.

You get a clear, ongoing analysis of what your users do and why they stay or leave. This helps your team focus on what works.

USER ONBOARDING

Your PM asks, 'Why do so many new users drop off after step 3?'

We analyse exactly where people get stuck and test a simpler flow. You see a 15% increase in completed signups the next month.

CUSTOMER SUPPORT

Your support lead says, 'We're getting the same feature request over and over.'

We quantify how many paying users are asking for it and how it impacts retention. You get a clear priority list for your next development cycle.

WEEKLY REPORTING

Your CEO asks, 'Is our latest feature actually being used?'

We track adoption and tie it directly to account renewal rates. You get a simple dashboard showing what's working and what's not.

SALES CONVERSATIONS

A sales rep asks, 'Which accounts are most likely to churn next quarter?'

We identify warning signs from usage data and create a watchlist. Your team can reach out proactively to save revenue.

ENGAGEMENT
Monthly retainer

Jake runs the analysis, experimentation, and churn prediction. Your team executes. No project handoffs, no starting over.

GUARANTEE
30-day progress

If churn doesn’t reduce by [X]% AND we haven’t identified 3+ actionable revenue cohorts by day 30, we extend month 2 at no cost. You keep all analysis and models regardless.

TERMS
3-month minimum

3-month minimum, then month-to-month. Everything built stays with your team permanently.

Teams Jake has worked with

Gainify
Guardio
monday.com
Payoneer
thirdweb
Canary Mail
An ongoing growth function beats a quarterly offsite every time. The difference is measurement — knowing which experiments moved the number before the quarter ends.
                      | Growth LAB                    | Quarterly Growth Offsite | Hire a Growth PM        | Generalist Agency
Cadence               | Monthly retainer              | Quarterly                | Depends on hire         | Project-based
Analysis continuity   | Compounds month to month      | Resets each quarter      | If the hire stays       | Resets per project
Churn prediction      | Weekly at-risk list           | Not covered              | Rarely part of the role | Not included
Experiment velocity   | 3–6/month                     | 1–2/quarter              | 1–2/quarter initially   | Varies
Competitive intel     | Weekly alerts + monthly brief | Manual before offsite    | Depends on bandwidth    | One-time report
Investment            | from $6,997/mo                | $15K–$30K per offsite    | $12K–$20K/mo + equity   | $10K–$30K/mo

Ready to make growth compound?

Evidence that accumulates, not analysis that resets.

Jake runs the analysis, experimentation, and churn prediction every month. Your team executes. Better activation, retained accounts, and confident product decisions — every month.

WHY TEAMS COME TO GROWTH LAB

Experiments proposed every sprint, almost none shipped

“We talk about running experiments every planning meeting. But between figuring out the hypothesis, getting buy-in, and waiting for enough traffic, nothing ever gets called. The data keeps accumulating and we keep making decisions on gut feel.”

VP Product — B2B SaaS

Churn shows up as a surprise every time

“We see the cancellation email and then start digging through usage data trying to figure out what happened. The signals were there three months ago but nobody was watching for them. By the time we find out, the conversation is already over.”

Head of Customer Success — Series B SaaS

One growth hire can’t do everything at once

“We hired a growth PM and within a month they were triaging analytics, setting up tracking, writing experiment briefs, and trying to do competitive research on the side. Six months in they’re still not running reliable experiments. It’s not their fault. The scope is just unrealistic for one person.”

CEO — Seed-to-Series A

WHAT RUNS EVERY MONTH

Five capabilities running in parallel — each one informed by the others.

Ongoing · Experiments
Experiment Engine

Jake designs, runs, and reads out experiments against your activation and retention metrics. Each test starts with a confirmed hypothesis, a calculated sample size, and a defined success metric — so results are definitive rather than arguable.

  • Hypotheses grounded in your behavioural data, not assumptions
  • Power analysis run before each test launches
  • Results documented so each test builds on the previous one
  • Winning variants handed to your team with implementation notes
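
For illustration only, the sample-size step can be as small as the sketch below (Python with statsmodels). The baseline and target activation rates are placeholder numbers, not figures from any engagement.

    # Sample size needed per variant for a two-sided conversion test.
    # Rates below are illustrative assumptions, not client data.
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    baseline_rate = 0.30    # assumed current activation rate
    target_rate = 0.345     # smallest lift worth detecting (assumed)

    effect_size = proportion_effectsize(target_rate, baseline_rate)
    n_per_variant = NormalIndPower().solve_power(
        effect_size=effect_size,
        alpha=0.05,           # false-positive tolerance
        power=0.80,           # chance of detecting a real lift of this size
        alternative="two-sided",
    )
    print(f"Signups needed per variant: {round(n_per_variant)}")

If your signup volume can’t reach that number inside a sensible test window, the test is redesigned or the metric changed before anything launches, rather than called on thin data.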
Ongoing · Churn Prediction
Weekly At-Risk Account List

A predictive model trained on your engagement and usage data flags accounts showing early decline patterns before they reach the cancellation decision. Your CS team gets a named list each week — accounts to contact, with context on why each one surfaced.

  • Model trained on your specific product behaviour data
  • Updated and refined monthly as new behavioural data comes in
  • Each flagged account includes the signal that triggered it
  • Intervention notes from experiment wins feed back into CS playbooks
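
To make the mechanism concrete, here is a minimal sketch of how a weekly risk score like this can be produced (Python with scikit-learn). The file name, feature columns, and 90-day churn label are hypothetical stand-ins; the actual model is trained on whatever behavioural data your product already emits.

    # Minimal churn-risk sketch: score accounts, rank the riskiest for CS.
    # "account_features.csv" and its columns are hypothetical examples.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    accounts = pd.read_csv("account_features.csv")
    features = ["weekly_active_seats", "logins_last_30d", "days_since_key_action"]

    X_train, X_test, y_train, y_test = train_test_split(
        accounts[features], accounts["churned_within_90d"],
        test_size=0.2, random_state=42,
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

    # Score every current account and surface the top of the risk list.
    accounts["churn_risk"] = model.predict_proba(accounts[features])[:, 1]
    at_risk = accounts.sort_values("churn_risk", ascending=False).head(25)
    print(at_risk[["account_id", "churn_risk"] + features])

The raw score alone isn’t the deliverable; the signal that triggered each flag is what gives CS something concrete to open the conversation with.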
Ongoing · Activation
Monthly Activation Analysis

Where signups are stalling, which cohorts are activating fastest, and what the data says to prioritise next. Not a one-time map — a view that updates as your product and traffic change, so the next experiment is always pointed at the right problem.

  • Activation funnel reviewed and updated each month
  • Cohort breakdowns by plan, channel, and acquisition date
  • Time-to-activation trends tracked across months
  • Top stall point identified and connected to the experiment queue
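
A simplified version of that cohort view, sketched in Python with pandas. The export file and its column names are assumptions, standing in for whatever your analytics tool actually provides.

    # Activation rate and median time-to-activation by signup-month cohort.
    # "signups.csv" and its columns are placeholder names for an analytics export.
    import pandas as pd

    df = pd.read_csv("signups.csv", parse_dates=["signup_date", "activated_date"])
    df["cohort"] = df["signup_date"].dt.to_period("M")
    df["days_to_activate"] = (df["activated_date"] - df["signup_date"]).dt.days

    summary = df.groupby("cohort").agg(
        signups=("account_id", "size"),
        activated=("activated_date", "count"),      # non-null means activated
        median_days=("days_to_activate", "median"),
    )
    summary["activation_rate"] = summary["activated"] / summary["signups"]
    print(summary)

The real analysis layers in plan, channel, and acquisition date on top of this, but the shape is the same: one table, refreshed monthly, pointing at the current stall point.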
Ongoing · Competitive
Competitive Intelligence Monitoring

Pricing changes, messaging shifts, and product announcements from your key competitors surface before they reach your sales team or your customers. Weekly alerts in Slack. Monthly brief with positioning implications. Quarterly battle card refresh.

  • Competitor set agreed in month one, updated as the market shifts
  • Weekly Slack digest: what changed and what it might mean
  • Monthly brief: positioning gaps identified with recommended responses
  • Quarterly battle cards updated with current differentiation points
Monthly · Synthesis
Monthly Growth LAB Report

A written summary of everything that ran last month: experiment results, churn model findings, activation changes, and competitive shifts. Accompanied by the recommended priority order for the next month. Your team reads one document and knows exactly where to point engineering and CS capacity next.

  • Experiment results with statistical outcomes, not just directional reads
  • Churn model performance and accounts successfully intervened on
  • Activation funnel changes month-over-month with explanation
  • Prioritised next actions: what to ship, intervene on, or investigate

The outcome: A connected system where insights from churn prediction inform activation experiments, and winning experiments become playbooks for customer success. The intelligence compounds, so each month's decisions are sharper than the last.

HOW IT WORKS

Month one builds the foundation. Every month after sharpens it.

Month 1 — Discover and baseline
Analytics reviewed, instrumentation gaps identified, and the churn model trained on your existing data. First experiment hypotheses confirmed from your activation funnel. Competitive set agreed. By the end of the month, at-risk lists are going to CS, experiments are queued, competitive alerts are running, and your team has a clear picture of where to push next.
Months 2–3 — Experiments running, model sharpening
Experiments launched from the queue. At-risk lists reviewed for accuracy — CS feedback loops back into the model. Activation analysis updated as cohorts mature. Competitive monitoring in steady-state. Monthly report delivered with results and the next priority list.
Month 4 and beyond — Compounding
Winning experiment variants feed into playbooks. The churn model gets more accurate with more data. Activation improvements from earlier experiments become the baseline for the next ones. The evidence accumulates. Product decisions that used to require a debate now have data behind them.

FIT CHECK

Teams with execution capacity and a data foundation get the most from this.

GOOD FIT
Post-PMF B2B SaaS with product and CS capacity to execute
Analytics in place · team that can ship weekly

You have event tracking, a product team that ships regularly, and a CS team that handles existing accounts. What you don’t have is someone who connects the data across those functions monthly — running experiments, flagging at-risk accounts early, and keeping the activation analysis current. The bottleneck isn’t talent. It’s capacity and the system that connects the pieces.

  • Experiments running on a regular cadence with results your team can act on
  • Churn model producing a weekly at-risk list your CS team uses
  • Activation analysis updated monthly, pointed at the current bottleneck
  • Competitive intelligence in your team’s inbox before it matters

Product decisions backed by evidence that accumulates, not analysis that resets every engagement.

NOT A FIT
No analytics in place, no execution capacity, or still finding PMF
Wrong stage or wrong foundation

The Growth LAB runs on your event data. If your analytics tool holds less than a few months of reliable data, the churn model won’t be accurate enough to be useful, and the activation analysis won’t have enough history to trend. If your engineering team can’t ship experiment variants weekly, the experiment engine stalls. And if you’re still testing whether the product works for a specific market, the bottleneck is discovery — not the growth operation.

Jake McMahon — ProductQuant
8+ years B2B SaaS · Behavioural Psychology + Big Data (Masters)

I run the Growth LAB myself. The experiment design, the churn model, the activation analysis, the competitive monitoring — every piece. Your team ships the variants, contacts the at-risk accounts, and responds to the competitive moves. I produce the intelligence and the tests. You produce the results.

Most retainers hand you a report and disappear. The LAB is designed so that at the end of the engagement, your team owns everything: the dashboards, the model, the experiment library, the playbooks. Nothing depends on ProductQuant continuing. You’re building internal evidence, not renting external analysis.

I won’t do this:
  • Run experiments without calculating statistical power upfront
  • Deliver a churn model without telling you where it’s confident and where it isn’t
  • Produce competitive analysis without connecting it to a positioning decision your team can make
  • Keep the methodology or models proprietary — everything transfers with you
How much time does this take from my team?
A consistent weekly commitment from your product and CS teams to implement experiment variants, act on the at-risk list, and sync on results. Jake handles the analysis, design, and system work; your team handles the execution.

Teams Jake has worked with

Gainify
Guardio
monday.com
Payoneer
thirdweb
Canary Mail

PRICING

Monthly analysis, experimentation, and churn prediction — running in parallel.

from $6,997/mo
3-month minimum · month-to-month after
Scope confirmed before engagement starts
  • Experiment engine: hypotheses designed, tests run, results read out monthly
  • Churn prediction model trained on your data, updated monthly
  • Weekly at-risk account list delivered to CS with context
  • Monthly activation analysis: where signups stall and what to prioritise next
  • Competitive monitoring with weekly Slack alerts and monthly brief
  • Monthly Growth LAB report: results and next-month priority order
  • Weekly async communication on results and upcoming tests
  • All dashboards, models, and playbooks transferred to your team permanently

Exact scope — and price — confirmed after a conversation about your current data, team, and priorities.

Book a 30-minute call →

30-Day Progress Guarantee: If churn doesn’t reduce by [X]% AND we haven’t identified 3+ actionable revenue cohorts by day 30, we extend month 2 at no cost.

You keep all analysis documents, models, and playbooks either way — nothing is held back.

QUESTIONS

Or book a call →
How is this different from Growth OS?
Growth LAB is designed for teams with the execution capacity to implement. Jake runs the analytical engine — experiments, churn prediction, activation analysis, competitive monitoring — and your team handles implementation. If your situation requires a strategist to own the entire growth function, including deeper strategy and capability-building, we can discuss a different engagement model.
Can we start with a sprint and move to Growth LAB after?
Yes, and it’s a common path. Running the Activation Deep Dive or Focused Sprint first means month one of Growth LAB starts with a confirmed diagnosis rather than a baseline audit. All the work from the sprint carries over. You’re moving from one-time analysis to ongoing intelligence, not starting again.
What analytics tools do you work with?
PostHog, Amplitude, Mixpanel, and most standard event-tracking platforms. For session replays: Hotjar, FullStory, or equivalent. For the churn model: your event data plus Stripe or your billing system. Read-only access is all that’s needed — no write access, no integration work required from your engineering team.
What happens after the 3-month minimum?
The engagement converts to month-to-month. You can continue indefinitely, pause, or stop. Everything built during the engagement — the dashboards, the churn model, the experiment library, the playbooks — stays with your team. There are no licensing fees and no proprietary models that require ProductQuant to maintain.
What if our analytics instrumentation is incomplete?
Month one includes an instrumentation review. Gaps are identified and prioritised so your engineering team fixes the most important ones first. The churn model and activation analysis start with what exists — they get more precise as instrumentation improves. You don’t need perfect data to begin. You need enough to find the signal.
How is the price determined?
The starting point is $6,997/month for the core engagement: experiment engine, churn prediction, activation analysis, competitive monitoring, and monthly report. Scope — and final price — is confirmed after a conversation about your current data, team capacity, and what you want to move. Larger data sets, more complex competitive landscapes, or expanded scope push the price toward the upper end. The conversation is free.

Better activation, retained accounts, and product decisions backed by evidence — every month.

Jake runs the analysis, experimentation, and churn prediction. Your team executes. The evidence accumulates instead of resetting.