FREE TOOL

Design your pricing experiment before you run it

Most pricing tests fail because they were never properly designed: wrong hypothesis, wrong sample size, no guardrails. Work through this worksheet first. Five sections, about 10 minutes.

01 / 05

Current pricing structure

Document your current pricing before you change anything. This is your control — you need to know exactly what you're testing against.

Add each plan or pricing tier. Leave price blank for free tiers.
Tier name | Price / mo | Key features included
02 / 05

Hypothesis builder

A good pricing hypothesis specifies what you're changing, which metric you expect to move, in which direction, by how much, and why. Vague hypotheses produce unactionable results.

If we [change], then [metric] will [direction] by [amount] because [reason].

Example: "If we raise the Pro plan from $49/mo to $59/mo, then new-customer ARPU will increase by at least 15% because Pro buyers choose on features, not list price."

Be specific about what changes — price point, tier structure, trial length, included features, billing model.
What's the causal theory? Why would this change produce this outcome?
03 / 05

Experiment design

Define your control, variant, required sample size, and estimated runtime. Underpowered experiments produce results you can't act on.

The current state — what you're testing against.
The change you're testing.
Enter your current baseline and desired minimum detectable effect.
Required sample per variant
Daily signups needed
Estimated runtime (enter values above)
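The sample size and runtime fields above come from a standard two-proportion power calculation. A minimal sketch in Python (standard library only), assuming a two-sided test at α = 0.05, 80% power, and a 50/50 traffic split; the function names and the example numbers are illustrative, not part of the worksheet:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde_abs: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample needed per variant to detect an absolute lift of `mde_abs`
    over a baseline conversion rate, via a two-sided two-proportion z-test."""
    p1, p2 = baseline, baseline + mde_abs
    p_bar = (p1 + p2) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde_abs ** 2
    return math.ceil(n)

def runtime_days(n_per_variant: int, daily_signups: int) -> int:
    """Days to fill both arms, assuming a 50/50 split of daily signups."""
    return math.ceil(2 * n_per_variant / daily_signups)

# Example: 5% baseline trial-to-paid conversion, detect a 1-point absolute lift
n = sample_size_per_variant(0.05, 0.01)
days = runtime_days(n, daily_signups=400)
```

Note how quickly required runtime grows as the minimum detectable effect shrinks: halving the MDE roughly quadruples the sample you need.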
04 / 05

Risk assessment

Pricing experiments carry real downside risk. Quantify it before you run — and plan your rollback before you need it.

If conversion drops or ARPU falls, what's the worst-case monthly revenue impact?
How will you handle customers who ask why they're seeing a different price from a friend or competitor?
At what point do you halt the experiment early, before the planned end date?
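For the worst-case question above, a back-of-envelope calculation is usually enough. A hypothetical sketch; every number here (signups, traffic share, conversion, ARPU) is a placeholder to replace with your own:

```python
def worst_case_monthly_impact(monthly_signups: int, variant_share: float,
                              base_conv: float, base_arpu: float,
                              worst_conv: float, worst_arpu: float) -> float:
    """Revenue lost per month if the variant lands at its worst plausible
    conversion and ARPU, relative to leaving everyone on the control."""
    exposed = monthly_signups * variant_share
    control_rev = exposed * base_conv * base_arpu
    variant_rev = exposed * worst_conv * worst_arpu
    return control_rev - variant_rev

# Hypothetical: 10,000 signups/mo, 50% exposed to the variant,
# control converts at 5% to $40 ARPU; worst case is 4% at $36
impact = worst_case_monthly_impact(10_000, 0.5, 0.05, 40.0, 0.04, 36.0)
```

If that worst-case number is more than you can tolerate for the experiment's full runtime, reduce the variant's traffic share before launch rather than planning to eyeball it mid-flight.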
05 / 05

Success criteria

Define what winning looks like before you see the data — not after. Post-hoc success criteria are how confirmation bias enters pricing decisions.

One metric. The experiment either moves it meaningfully or it doesn't.
Metrics that must not regress — even if the primary metric improves. List each on a new line.
If the experiment is inconclusive — what's the default decision?
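The three questions above can be pre-committed as literally as a function, written before launch. A sketch, assuming one primary metric and named guardrails; the metric names, thresholds, and the "keep control" default are illustrative:

```python
def decision(primary_lift: float, min_lift: float,
             guardrails: dict[str, float],
             default: str = "keep control") -> str:
    """Return the pre-committed ship/rollback/default decision.
    Lifts are relative changes vs. control (e.g. -0.05 = a 5% drop)."""
    regressed = [m for m, change in guardrails.items() if change < 0]
    if regressed:
        # A guardrail regression loses even if the primary metric improved
        return f"rollback (guardrail regressed: {', '.join(regressed)})"
    if primary_lift >= min_lift:
        return "ship variant"
    return default  # inconclusive: fall back to the default you wrote down

# Variant lifts conversion 8%, but activation regressed 3% -> rollback
outcome = decision(0.08, 0.05, {"arpu": 0.01, "activation": -0.03})
```

Writing the rule down as code forces you to name the default for an inconclusive result, which is exactly where post-hoc rationalisation otherwise creeps in.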
COMPLETE

Your pricing experiment plan

Review the full plan below. Use this as the brief to share with your team before starting the experiment.

Save your plan

Get a PDF of this experiment brief delivered to your inbox.

COMMON MISTAKES

What goes wrong when pricing experiments aren't designed properly

Underpowered experiments

Calling a result early — before you've reached your required sample size — means any difference you observe is likely noise. You end up making a permanent pricing decision based on a coin flip.

No guardrail metrics

A variant that improves trial conversion but drops ARPU 30% is a loss disguised as a win. Without guardrails, you only see the metric you wanted to improve — not the damage elsewhere.

Post-hoc hypothesis

Writing the hypothesis after you see the results is the single most common form of bias in product experimentation. If you haven't written it down before launch, you don't have a hypothesis.

NEED A PARTNER FOR PRICING EXPERIMENTS?

We design and analyse pricing experiments for B2B SaaS

From hypothesis design to statistical analysis to rollout decisions — we've run pricing experiments across multiple product categories and know how to separate signal from noise.

Talk to us about your pricing experiment →