Marketing analytics platform for e-commerce brands: a ~30–50 person company, Series A.
The VP of Product knew activation was low — roughly 20% of new users completed the setup process. But they couldn't explain why. The team had a hypothesis that the onboarding flow was too long, but A/B testing shorter flows produced no meaningful improvement.
What they didn't know: the activation funnel had analytics coverage on the first three steps and then nothing. The team couldn't see where users went after step three because no events were being tracked beyond that point. They were trying to optimize a funnel they couldn't see.
Worse: the platform's highest-value feature, automated bid optimization across multiple marketplaces, had zero tracking; the audit later put its organic discovery rate at just 13%. Users who discovered 3+ features retained at 1.8× the rate of single-feature users, but nobody was measuring feature adoption at all. The retention multiplier was a hidden superpower with no instrumentation.
The team ran A/B tests reducing the number of setup steps, assuming users were dropping off because the flow was too long.
They tried tooltip guides and in-app messaging to push users toward key features.
Why it didn't work: both attempts addressed symptoms, not the root cause. The problem wasn't the onboarding flow; it was the complete absence of analytics coverage below the first three funnel steps and across the entire feature adoption surface. You can't optimize what you can't measure.
When we worked through their data, the real problem turned out not to be the obvious guess. The VP of Product assumed the onboarding flow itself was broken. The data told a different story.
The team had instrumented the first three activation steps (signup, account creation, data source connection) but nothing after. 40+ critical events were entirely missing from the analytics setup — including the core action that defined activation for their product. The funnel appeared to drop off at step four only because step four was never tracked. Users were completing setup; the analytics just weren't recording it.
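The core of the audit was a coverage check: compare the events each funnel step should emit against the events actually arriving in the warehouse. A minimal sketch of that check, with hypothetical step and event names (not the client's actual taxonomy):

```python
# Compare the events a funnel *should* emit against the events
# actually seen in analytics. A step whose events never appear is
# a tracking gap, not a real drop-off.

EXPECTED_FUNNEL = {            # hypothetical taxonomy
    "step_1_signup": {"signup_started", "signup_completed"},
    "step_2_account": {"account_created"},
    "step_3_data_source": {"source_connect_started", "source_connected"},
    "step_4_first_report": {"report_created"},          # never instrumented
    "step_5_activation": {"bid_optimization_enabled"},  # never instrumented
}

def coverage_gaps(expected, tracked_events):
    """Return funnel steps for which zero events were ever tracked."""
    return [step for step, events in expected.items()
            if not events & tracked_events]

# Events actually present in the warehouse (illustrative).
tracked = {"signup_started", "signup_completed", "account_created",
           "source_connect_started", "source_connected"}

print(coverage_gaps(EXPECTED_FUNNEL, tracked))
# → ['step_4_first_report', 'step_5_activation']
```

Run against the real taxonomy, a check like this is what turns "the funnel drops off at step four" into "step four was never tracked."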
The automated bid optimization feature (the company's highest-value, highest-retention capability) had an organic discovery rate of just 13%. Users who found it retained at 1.8× the rate of users who never did. But without tracking, the team had no visibility into whether users were discovering features at all. A retention multiplier this strong was being left entirely to chance.
Analysis of existing payment data revealed that users who engaged with 3+ distinct product features had a 1.8× retention rate compared to single-feature users. This was the strongest retention signal in the data — and it had never been measured because feature usage was not instrumented. The team's north star was setup completion when it should have been feature depth.
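The feature-depth cut itself is a simple cohort comparison: bucket users by how many distinct features they touched, then compare retention rates across buckets. A toy sketch with fabricated data (user IDs, features, and flags are illustrative only):

```python
from collections import defaultdict

# Bucket users by distinct-feature count, then compare retention.
# All data below is fabricated for illustration.

usage = {   # user_id -> set of distinct features used
    "u1": {"reports"}, "u2": {"reports"}, "u3": {"reports"},
    "u4": {"reports", "alerts", "bid_optimization"},
    "u5": {"reports", "alerts", "bid_optimization", "exports"},
}
retained = {"u1": False, "u2": False, "u3": True,
            "u4": True, "u5": True}

buckets = defaultdict(list)
for user, features in usage.items():
    bucket = "3+ features" if len(features) >= 3 else "1-2 features"
    buckets[bucket].append(retained[user])

for bucket, flags in sorted(buckets.items()):
    print(f"{bucket}: {sum(flags) / len(flags):.0%} retained")
```

With real event data the buckets would come from instrumented feature-usage events, which is exactly what was missing here.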
A complete rebuild of the analytics infrastructure, event taxonomy, and activation strategy — grounded in what the data actually revealed.
Pricing tiers were audited, and before-versus-after metrics were delivered with quantified revenue impact.
"We knew activation was broken, but we didn't know where. The audit showed us the exact events missing, the exact features users weren't finding, and exactly what each fix was worth. That clarity changed how we prioritize everything."
The most damaging analytics problem isn't bad data. It's missing data. This team was trying to optimize an activation funnel they couldn't see past step three. The highest-value feature had zero tracking. A 1.8× retention lever was invisible because feature adoption was never instrumented. Finding what's absent is usually more valuable than analyzing what's present — because you can't improve what you don't know exists.
The activation funnel is now tracked end to end: by seller type, by cohort, and by acquisition source. Every drop-off is visible, measurable, and attributable. No more guessing where the funnel breaks.
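Once every step emits an event, segmented drop-off is a simple ratio: of the users who reached step N, how many reached step N+1? A sketch, with hypothetical step names and illustrative counts:

```python
# Per-segment step-to-step conversion rates. Counts are
# illustrative, not the client's.

STEPS = ["signup", "account_created", "source_connected", "first_report"]

funnel_counts = {  # users reaching each step, by acquisition source
    "paid_search": [1000, 720, 610, 540],
    "organic":     [800, 650, 600, 570],
}

def step_conversion(counts):
    """Step-to-step conversion rates for one segment."""
    return [round(n2 / n1, 3) for n1, n2 in zip(counts, counts[1:])]

for source, counts in funnel_counts.items():
    for (a, b), rate in zip(zip(STEPS, STEPS[1:]), step_conversion(counts)):
        print(f"{source}: {a} -> {b}: {rate:.1%}")
```

The same computation, grouped by seller type or cohort instead of source, is what makes each drop-off attributable.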
The retention multiplier for full-feature users is in a live dashboard. Feature discovery is managed as a product experience, not left to chance.
Activation improvement, feature discovery uplift, and compounding retention gains are modeled as a prioritized experiment backlog. Every fix has a dollar value attached.
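One way to attach dollar values and rank a backlog like this is a simple expected-value model: reach × expected lift × value per activated user, discounted by confidence. A sketch with entirely illustrative numbers (none of these figures are the client's):

```python
# Rank experiments by confidence-weighted expected annual value.
# All names and numbers are hypothetical.

experiments = [
    # (name, users_reached_per_year, expected_lift, $_per_activated_user, confidence)
    ("instrument steps 4-5, fix funnel", 12000, 0.08, 400, 0.9),
    ("guided bid-optimization discovery", 9000, 0.12, 250, 0.6),
    ("multi-feature onboarding checklist", 12000, 0.05, 300, 0.7),
]

def expected_value(reach, lift, value, confidence):
    """Confidence-weighted expected annual revenue impact."""
    return reach * lift * value * confidence

ranked = sorted(experiments, key=lambda e: expected_value(*e[1:]), reverse=True)
for name, *params in ranked:
    print(f"${expected_value(*params):>10,.0f}  {name}")
```

The model is crude by design; its job is to force every proposed fix to state its assumptions and earn its place in the queue.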
10 years building analytics and growth systems for B2B SaaS at $1M–$50M ARR. BSc Behavioural Psychology, MSc Data Science. The most common analytics gap isn't bad data — it's missing data. Events never instrumented, properties never attached, funnels never connected. Finding what's absent is usually more valuable than analyzing what's present.
A structured review of your event taxonomy, activation funnel, and analytics gaps — finding what's missing, sizing the revenue impact, and delivering a roadmap for what to instrument next.
A 15-minute call is enough to know whether what we do is relevant to where you are. No pitch. Just a conversation about your specific situation.