An Amazon PPC automation platform had a statistically confirmed 1.8–1.9x retention advantage for users who activated automation within 31 days. Only 38% of signups made it that far. A structured activation audit identified 28 missing analytics events, mapped four distinct user segments, and produced a phased roadmap to unlock $2.5M+ in annual revenue.
The platform had grown past $10M ARR on the strength of a powerful automation engine. Power users were highly engaged: users who created automation rules assigned them 6.6x on average, and users who created Strategic Objectives assigned them 10.1x (10.1 assignments per objective created). For the users who got that far, the product's value proposition was confirmed in the data.
The problem was that 45% of signups never got there. They registered, connected their marketplace, and stalled. The onboarding flow had three documented failure modes: a guided tour buried in a bottom-right widget that users reported never finding; tooltips that physically covered the UI elements they were supposed to explain; and a configuration model that assumed users already knew how to build automation rules manually — no opinionated defaults, no goal-oriented prompts. The platform was designed by experts, for experts. Beginners had no path.
Compounding the problem, only 15 analytics events were instrumented, and they couldn't answer the key questions. Was activation correlated with feature discovery, or just with time-on-site? Which users were at risk, and which were progressing? Without time-to-activation data or funnel sequencing, every retention hypothesis was unverifiable.
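To make the gap concrete: funnel sequencing means ordering each user's events against the intended path and finding where the sequence stops. A minimal sketch, assuming illustrative event names (these are not the platform's actual taxonomy):

```typescript
// Minimal funnel-sequencing sketch. Event names are illustrative
// assumptions, not the platform's real analytics taxonomy.
const FUNNEL = [
  "signup",
  "marketplace_connected",
  "rule_created",
  "rule_assigned",
] as const;

// Returns the last funnel step a user completed *in order*, or null if
// they never started. Stops at the first missing step because sequence,
// not mere event occurrence, is what the existing 15 events couldn't show.
function furthestStep(userEvents: Set<string>): string | null {
  let last: string | null = null;
  for (const step of FUNNEL) {
    if (!userEvents.has(step)) break;
    last = step;
  }
  return last;
}

// Example: a user who connected a marketplace but never built a rule.
console.log(furthestStep(new Set(["signup", "marketplace_connected"])));
// -> "marketplace_connected"
```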
Baseline analysis of automation event data mapped every signup into one of four structural segments — each with a different problem and a different fix.
The engagement delivered a full activation audit: user segmentation, UX research, competitive analysis, a Jobs-to-be-Done (JTBD) framework, an analytics gap analysis, and a phased implementation roadmap.
The audit specified the missing events in priority tiers.

Priority 1 (activation funnel):
- first_automation_activated: time-to-value measurement, the single most important missing event
- rule_configuration_started/abandoned: funnel optimisation
- automate_assignment_viewed: discovery measurement
- rule_deployed/activated: deployment success
- rule_removed/disabled: churn signal

Priority 2 (advanced features, 8 events):
- Strategic Objective configuration start/completion
- Day Parting creation/application
- Mass Campaign creation

Priority 3 (engagement, 6 events):
- Scale Optimizer dashboard viewed
- recommendation card viewed/dismissed
- automation results reviewed

Without first_automation_activated, time-to-value measurement was impossible: the 31-day activation window had been defined as the critical threshold, but there was no way to measure who was hitting it.
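A sketch of what instrumenting the Priority 1 time-to-value event could look like. The event name comes from the audit; the payload shape and helper function are illustrative assumptions, not the platform's actual tracking API:

```typescript
// Illustrative payload for the time-to-value event. The event name is
// from the audit; every field here is an assumed schema, not the real API.
interface ActivationEvent {
  userId: string;
  event: "first_automation_activated";
  timestamp: string; // ISO 8601
  properties: {
    days_since_signup: number; // enables the 31-day window check directly
    rule_id: string;
  };
}

// Fire once, the first time any automation goes live for this user.
function buildFirstActivationEvent(
  userId: string,
  ruleId: string,
  signedUpAt: Date,
): ActivationEvent {
  const msPerDay = 86_400_000;
  return {
    userId,
    event: "first_automation_activated",
    timestamp: new Date().toISOString(),
    properties: {
      days_since_signup: Math.floor(
        (Date.now() - signedUpAt.getTime()) / msPerDay,
      ),
      rule_id: ruleId,
    },
  };
}
```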
Feature discovery rates as a percentage of weekly active users, seven-day window. The gap between rule creation (4.47%) and Strategic Objective assignment (6.08%) shows that the platform's most powerful feature is used more than its entry-level feature, by the users who found it.

| Feature | Discovery (% WAU) | Total Events | Avg per User | Signal |
|---|---|---|---|---|
| Rule Assignment (core automation usage) | 9.36% | 2,112 | 1.46x | Strong reuse |
| Strategic Obj. Assign (most powerful feature) | 6.08% | 1,454 | 1.51x | 10.1x multiplier |
| Scale Optimizer Fix Now (recommendation acceptance) | 3.13% | 314 | 1.65x | Underutilised |
| Rule Creation (entry-level action) | 4.47% | 320 | 1.27x | Low repeat |
| Strategic Obj. Create (feature initiation) | 1.95% | 144 | 1.27x | Hidden |
| Scale Optimizer Automate (automation from recommendation) | 0.36% | 18 | 1.50x | Rarely found |
Data from Oct 3–10, 2025; WAU = 3,289 users. Strategic Objectives all-time: 144 created, 1,454 assigned (1,454 / 144 ≈ 10.1x). The multiplier confirms power-user value; the 1.95% create rate confirms that discovery, not the feature itself, is the bottleneck.
Your retention advantage is confirmed and measurable. The 1.8–1.9x lift for users who activate within 31 days is statistically significant. The question is what fraction of your signups get there. With first_automation_activated instrumented, you know the answer in real time instead of estimating it from aggregate cohort data.
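With the event live, the real-time answer is a single pass over the cohort. A sketch, assuming simple record shapes rather than the platform's actual data model:

```typescript
interface Signup { userId: string; signedUpAt: Date; }
// One record per user's first_automation_activated event.
interface Activation { userId: string; activatedAt: Date; }

// Fraction of a signup cohort that activated within the window.
function activationRate(
  signups: Signup[],
  activations: Activation[],
  windowDays = 31,
): number {
  const firstActivation = new Map(
    activations.map((a) => [a.userId, a.activatedAt] as [string, Date]),
  );
  const msPerDay = 86_400_000;
  const activated = signups.filter((s) => {
    const t = firstActivation.get(s.userId);
    return (
      t !== undefined &&
      (t.getTime() - s.signedUpAt.getTime()) / msPerDay <= windowDays
    );
  });
  return signups.length === 0 ? 0 : activated.length / signups.length;
}
```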
The goal-oriented onboarding test is the highest-leverage experiment available. Perpetua's model (select a goal, get a configured automation) addresses the primary failure mode for the 45% of your signups who stall, without changing the underlying product. The A/B test setup is specified, and the success metric is clear: does goal-oriented onboarding move rule creation from 38% closer to 65%?
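For sizing the test, a back-of-envelope two-proportion calculation is enough. A sketch using the standard normal-approximation formula; the 10-point minimum detectable lift is an illustrative choice, not a number from the audit:

```typescript
// Sample size per arm for a two-sided two-proportion z-test,
// with alpha = 0.05 (z = 1.96) and power = 0.80 (z = 0.84).
function sampleSizePerArm(pControl: number, pTreatment: number): number {
  const zAlpha = 1.96;
  const zBeta = 0.84;
  const pBar = (pControl + pTreatment) / 2;
  const effect = Math.abs(pTreatment - pControl);
  const term =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(pControl * (1 - pControl) + pTreatment * (1 - pTreatment));
  return Math.ceil((term * term) / (effect * effect));
}

// Detecting a 10-point lift from the 38% baseline:
console.log(sampleSizePerArm(0.38, 0.48)); // ≈ 384 signups per arm
```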
Strategic Objectives is your best feature and your biggest discovery problem. 1.95% create rate. 10.1x assignment multiplier for users who find it. The gap between those two numbers is the opportunity. Surface it at day 7 for Slow Activators. The intervention is simple; the impact on the 25% of your user base in that segment is not.
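The day-7 surfacing rule reduces to a trigger condition. A sketch: the segment label comes from the audit, while the record shape and the notion of "has created an objective" as a boolean flag are assumptions:

```typescript
// Assumed per-user state; the real platform's user model will differ.
interface UserState {
  userId: string;
  segment: string; // e.g. "Slow Activator", from the four-segment mapping
  signedUpAt: Date;
  hasCreatedStrategicObjective: boolean;
}

// Show the Strategic Objectives prompt to Slow Activators at day 7
// if they have not yet created one.
function shouldSurfaceStrategicObjectives(
  u: UserState,
  now = new Date(),
): boolean {
  const daysSinceSignup =
    (now.getTime() - u.signedUpAt.getTime()) / 86_400_000;
  return (
    u.segment === "Slow Activator" &&
    daysSinceSignup >= 7 &&
    !u.hasCreatedStrategicObjective
  );
}
```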
10 years building growth systems for B2B SaaS companies at $1M–$50M ARR. BSc Behavioural Psychology, MSc Data Science. This engagement combined UX transcript analysis, competitive architecture comparison, JTBD synthesis from 3,500+ data points, analytics gap analysis, and a three-phase implementation roadmap — all anchored to the single question that mattered: what’s preventing the 45% from becoming the 15%?
Two weeks to map your activation funnel end-to-end, confirm where it breaks with data, identify your top three fixes ranked by impact, and agree on an activation definition tied to retention.
If your power users have a measurably better outcome than your average user, but most users never become power users, that’s an activation problem before it’s a product problem. An activation audit — segment analysis, event gap review, competitive architecture — typically takes 4–6 weeks. The conversation to scope it takes 15 minutes.