B2B analytics data platform, ~$3.5M ARR, ~40 employees, seed stage. Activation was failing, and the root cause ran deeper than anyone suspected.
The Head of Product knew something was wrong: only 14% of new users ever reached the "aha" moment that defined product activation. But the team had no funnel data to diagnose the problem. The activation metric was a single headline rate built from a binary did-they-or-didn't-they flag, with no supporting breakdown.
What they didn't know: the 14% activation rate masked three completely different stories. For the core analytics persona, activation was about running a first query. For the reporting persona, it was about creating a dashboard. For the admin persona, it was about configuring a data pipeline. All three were measured as the same event, and none was instrumented correctly.
Worse, the average user had to complete 14 steps before seeing any meaningful value from the platform. The first five steps were signup and configuration — purely cost, no benefit. Users who dropped off between step eight and step twelve never understood what the product could do for them. The team was asking users to invest hours before delivering a single moment of value.
The team built a multi-touch email sequence designed to nudge users through setup steps over the first 30 days.
They created walkthrough videos showing users how to complete key workflows, embedded at key moments in the onboarding flow.
The team reduced the initial signup form from 8 fields to 3, hoping to reduce friction at the very start of the funnel.
Why it didn't work: Every attempt treated activation as a single problem with a single fix. The reality was that three different user personas had three different definitions of "activated" and none of them was being measured correctly. The team was optimising a funnel they had never instrumented.
When we mapped events to user behavior across all segments, the real problem emerged. It wasn't one problem. It was three.
The team defined "activation" as one binary event: "completed setup." But the analytics persona ran a query, the reporting persona created a dashboard, and the admin persona configured a data pipeline. All three counted as "activated" in the same bucket. The 14% headline number was an average of three wildly different persona completion rates: the analytics persona sat at 22%, the reporting persona at 11%, and the admin persona at 8%. The product was failing three different groups in three different ways, and the single metric hid every signal.
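To make the arithmetic concrete, here is a minimal sketch, using the rates from the audit (22%, 11%, 8%) and hypothetical per-user records, of how a blended rate lands near 14% while each persona tells a different story:

```python
# Hypothetical per-user records: (persona, activated) pairs, sized so the
# rates match the audit: analytics 22%, reporting 11%, admin 8%.
users = (
    [("analytics", True)] * 22 + [("analytics", False)] * 78
    + [("reporting", True)] * 11 + [("reporting", False)] * 89
    + [("admin", True)] * 8 + [("admin", False)] * 92
)

def activation_rate(records):
    """Fraction of users whose binary 'activated' flag is True."""
    return sum(activated for _, activated in records) / len(records)

# The single headline number: one blended rate across all personas.
blended = activation_rate(users)

# The segmented view: one rate per persona.
by_persona = {
    persona: activation_rate([u for u in users if u[0] == persona])
    for persona in {"analytics", "reporting", "admin"}
}

print(f"blended: {blended:.0%}")  # ~14%, hiding every per-persona signal
for persona, rate in sorted(by_persona.items()):
    print(f"{persona}: {rate:.0%}")
```

The segmentation itself is one dictionary comprehension; the hard part, as the team found, is defining the persona field and the per-persona activation events in the first place.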
For every persona, real activation — the moment users understood the product's value — required three connected actions, not one. The analytics persona needed to (1) connect a data source, (2) write a query, and (3) see results rendered as a chart. The single "activated" event only fired when all three were completed, so there was no visibility into where within that sequence users were dropping off. The real bottleneck was invisible because the intermediate steps were never tracked as distinct events.
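A minimal sketch of the measurement fix, with hypothetical event names and a toy event log: emit each of the three actions as its own event, then count step-level completion instead of firing one composite "activated" flag.

```python
# Illustrative funnel for the analytics persona; event names are
# assumptions, not the team's actual taxonomy.
FUNNEL = ["connected_data_source", "ran_first_query", "saw_first_chart"]

# Toy event log: each user's list of completed steps.
events = {
    "u1": ["connected_data_source", "ran_first_query", "saw_first_chart"],
    "u2": ["connected_data_source", "ran_first_query"],
    "u3": ["connected_data_source"],
    "u4": [],
}

def step_completion(events, funnel):
    """Count how many users reached each step of the funnel."""
    return [
        sum(1 for done in events.values() if step in done)
        for step in funnel
    ]

counts = step_completion(events, FUNNEL)
for step, n in zip(FUNNEL, counts):
    print(f"{step}: {n}/{len(events)} users")
```

A single composite event would fire only for u1 and report "25% activated"; the step counts (3, 2, 1 of 4 here) show exactly where in the sequence the drop-off happens.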
The complete onboarding flow contained 14 steps — account creation, email verification, workspace setup, data source connection, schema mapping, user invite flow, role assignment, API key generation, sample data import, query builder intro, first query attempt, error handling, dashboard creation, and then finally seeing a result. Users completed an average of 8.2 steps before abandoning. They never reached the value moment because they exhausted their patience before the product delivered anything useful.
A complete rebuild of the activation strategy — persona-specific, measurement-first, and designed to reduce time-to-value from weeks to days.
Persona activation paths defined
Before/after metrics with quantified revenue impact.
We had been treating activation like a single problem with a single solution for two years. The audit showed us we actually had three completely different problems, and none of them was being measured. Once we saw the persona splits, the fix was almost obvious. The hard part was admitting our single metric was hiding everything that mattered.
A single activation metric that averages across personas is worse than no metric at all. This team's 14% activation rate was hiding three different stories — a 22% analytics segment, an 11% reporting segment, and an 8% admin segment — each requiring a completely different fix. The act of segmenting the metric by persona revealed more than any A/B test or user interview had ever done. When your activation definition doesn't match how your users actually experience value, you're optimising a fictional product.
Not one number. A segmented view showing exactly which persona is struggling, at which step, and why. No more hiding behind averages.
Three connected actions tracked independently. You'll know whether users fail at step one, step two, or step three — and can target the specific friction point.
Every point of activation improvement is tied to a dollar value. The roadmap is prioritized by revenue impact, not opinion.
10 years building analytics and growth systems for B2B SaaS at $1M–$50M ARR. BSc Behavioural Psychology, MSc Data Science. The most common activation failure isn't a bad product — it's an activation definition that doesn't match how different user personas actually experience value. When one metric hides three different problems, the fix starts with segmentation.
A structured review of your activation funnel, persona definitions, and event taxonomy — finding the hidden segments, sizing the revenue impact, and delivering a roadmap for measurable improvement.
A 15-minute call is enough to know whether what we do is relevant to where you are. No pitch. Just a conversation about your specific situation.