Case Study — Amazon PPC Platform · Activation Strategy

45% of signups were stalling before ever activating. The retention advantage was real — but invisible to the users who needed it most.

An Amazon PPC automation platform had a statistically confirmed 1.8–1.9x retention advantage for users who activated automation within 31 days. Only 38% of signups ever created a rule at all. A structured activation audit identified 28 missing analytics events, mapped four distinct user segments, and produced a phased roadmap to unlock $2.5M+ in annual revenue.

45%
Signups never activated automation
28
Critical missing analytics events identified
1.9x
Retention lift confirmed for early activators
$2.5M+
Annual revenue impact from roadmap
Stack: Amplitude · Python · JTBD

Before.

The platform had grown past $10M ARR on the strength of a powerful automation engine. Power users were highly engaged: for every automation rule created, 6.6 assignments followed on average; for every Strategic Objective, 10.1. The product's value proposition was confirmed in the data for the users who got there.

The problem was that 45% of signups never got there. They registered, connected their marketplace, and stalled. The onboarding flow had three documented failure modes: a guided tour buried in a bottom-right widget that users reported never finding; tooltips that physically covered the UI elements they were supposed to explain; and a configuration model that assumed users already knew how to build automation rules manually — no opinionated defaults, no goal-oriented prompts. The platform was designed by experts, for experts. Beginners had no path.

Compounding the problem: 15 existing analytics events were instrumented, but they couldn’t answer the key questions. Was activation correlated with feature discovery or just with time-on-site? Which users were at risk vs. progressing? Without time-to-activation data or funnel sequencing, every retention hypothesis was unverifiable.

The Situation
  • 45% of signups never activated automation — “Stalled Starters”
  • Only 13% ever discovered Automate Assignment — the platform’s highest-converting feature
  • 38% signup → rule creation rate; no data on what caused the other 62% to stall
  • 28 critical analytics events missing — time-to-activation was invisible
  • Guided tour had documented failure modes — users couldn’t find it, couldn’t launch it

Four segments hiding in the signup cohort.

Baseline analysis of automation event data mapped every signup into one of four structural segments, each with a different problem and a different fix. A sketch of the classification logic follows the four profiles below.

45%
Stalled Starters
Sign up, connect marketplace, never create a rule. The guided tour either wasn’t found or failed to launch. No automation rule = no retention advantage. This is the primary churn cohort — and the primary opportunity. Goal-oriented onboarding (choose a goal: Growth / Profitability / Cost Control) vs. configuration-first is the test to run.
25%
Slow Activators
Create a rule but take more than 31 days. Slower to activate means lower retention probability — the 1.8–1.9x advantage is time-bounded. Slow activators need a proactive intervention between day 7 and day 21: what’s the one rule most users in their segment create first? Show them that, don’t make them find it.
15%
Day-One Winners
Create a rule within 24 hours of signup. Highest retention cohort. They came with prior automation knowledge — they knew what to build. The onboarding experience serves them poorly (too much explanation they don’t need) but they push through because the product works. The design challenge: don’t break them while fixing Stalled Starters.
15%
Early Churners
Leave within 2 weeks, never activate. Different from Stalled Starters — these users left before they stalled. Primary driver: they expected a managed service or a simpler tool. The Quartile comparison is relevant here — white-glove onboarding eliminates this segment entirely; the platform’s self-serve model has a structural lower bound of early departures.
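
The segment definitions reduce to a few date comparisons. A minimal sketch, assuming each user record carries a signup date plus timestamps for first rule creation and last activity (field names are hypothetical). One caveat: the audit's four headline segments leave the 2-to-31-day activators implicit, so the sketch routes them to their own bucket.

```python
from datetime import date
from typing import Optional

def classify(signed_up: date, first_rule: Optional[date], last_seen: date) -> str:
    """Assign a signup to one of the audit's structural segments."""
    if first_rule is None:
        # Never activated: split by how quickly they disappeared.
        days_active = (last_seen - signed_up).days
        return "early_churner" if days_active < 14 else "stalled_starter"
    days_to_rule = (first_rule - signed_up).days
    if days_to_rule <= 1:
        return "day_one_winner"     # rule within 24 hours of signup
    if days_to_rule <= 31:
        return "on_time_activator"  # inside the 31-day retention window
    return "slow_activator"         # activated, but past the window

# Usage: signed up Oct 1, built the first rule Nov 20 (50 days later).
print(classify(date(2025, 10, 1), date(2025, 11, 20), date(2025, 11, 25)))
# -> slow_activator
```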

What we did.

A full activation audit: user segmentation, UX research, competitive analysis, JTBD framework, analytics gap analysis, and a phased implementation roadmap.

Step 1 — Baseline Automation Usage Analysis
Analysed 15 existing Amplitude events to establish the activation baseline. Key metrics from the analysis period (Oct 3–10): Weekly Active Users = 3,289; Rule creators = 147 (4.47% of WAU); Rule assigners = 308 (9.36% of WAU); Strategic Objective creators = 64 (1.95% of WAU — only 2% ever discover the platform's most powerful feature); Scale Optimizer Automate clicks = 18 events in 7 days, from roughly 12 users (0.36% of WAU). The 10.1x assignment-to-creation multiplier for Strategic Objectives confirmed product-market fit — but only for users who discovered it. Discovery was the problem, not the product.
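
The headline multipliers are simple ratios over the exported event counts. A minimal sketch using the audit's own figures (variable names are illustrative; Strategic Objective totals are all-time, per the footnote under the feature table later in this page):

```python
wau = 3289  # weekly active users, Oct 3-10

# feature: (unique users in window, total events)
counts = {
    "rule_created":           (147, 320),
    "rule_assigned":          (308, 2_112),
    "strategic_obj_created":  (64, 144),     # creations: all-time total
    "strategic_obj_assigned": (200, 1_454),  # assignments: all-time total
}

for feature, (users, total) in counts.items():
    print(f"{feature:>24}: {users / wau:6.2%} of WAU, {total:>5} events")

# Assignment-to-creation multipliers: how heavily a created object is reused.
print(f"rules: {counts['rule_assigned'][1] / counts['rule_created'][1]:.1f}x")  # 6.6x
print(f"strategic objectives: "
      f"{counts['strategic_obj_assigned'][1] / counts['strategic_obj_created'][1]:.1f}x")  # 10.1x
```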
Step 2 — UX Research & Transcript Analysis
Analysed a UX test recording with timestamped observations. Five friction points documented. At 19:04: guided tour hidden in bottom-right widget — user quote: “I would never have found this on my own.” At 23:39: tour found but failed to launch properly. At 28:40: second tour failure during a different flow. At 32:50: tooltip physically covering the UI element it was supposed to explain. At 39:20: Day Parting feature caused confusion — user understood the feature but not when/why to use it. The onboarding tour referenced one feature that didn’t exist in the current product version. These weren’t opinion-based critiques — they were documented failure events at specific timestamps.
Step 3 — Competitive Architecture Analysis
Compared the platform’s onboarding against two primary competitors. Quartile ($895–3K+/month): mandatory demos, 2–6 week setup, white-glove managed service. Eliminates Early Churners entirely — no self-serve path means no one leaves before understanding the product. But the price and process filter out the SMB segment that represents most of the platform’s TAM. Perpetua (self-serve, goal-oriented): Growth / Profitability / Brand Defence / Awareness — user selects a goal, platform configures automation accordingly. No manual rule-building required. Perpetua’s activation model removes the configuration complexity that was creating Stalled Starters. The strategic positioning available: “Intelligent Transparency” — match Perpetua’s simplicity on activation while exceeding Quartile’s visibility on automation decisions, preview features, and audit logs.
Step 4 — JTBD Framework
Synthesised the core job from 650 exit comments, 33 year-end surveys, and 2,914 Intercom support tickets. The functional core job: “Give me confidence that my PPC is optimised consistently so I can focus on growing my business without worrying about missed opportunities or wasted spend.” Five functional jobs identified: (1) eliminate daily manual bid adjustments; (2) maintain performance during high-volume events (Prime Day, peak seasons); (3) allocate budget intelligently across campaigns without constant review; (4) identify underperforming campaigns before they drain budget; (5) scale to more ASINs without proportionally more time. Primary emotional job: trust, control, and peace of mind — not automation for its own sake. The implication for onboarding: the user doesn’t want to build a rule, they want confidence that their PPC is covered. Goal-based prompts serve that job better than rule-configuration prompts.
Step 5 — Analytics Gap Analysis (28 Missing Events)
Mapped the 28 critical missing events across three priority tiers. Priority 1 (automation funnel, 12 events): first_automation_activated (time-to-value measurement — the single most important missing event); rule_configuration_started/abandoned (funnel optimisation); automate_assignment_viewed (discovery measurement); rule_deployed/activated (deployment success); rule_removed/disabled (churn signal). Priority 2 (advanced features, 8 events): Strategic Objective configuration start/completion; Day Parting creation/application; Mass Campaign creation. Priority 3 (engagement, 6 events): Scale Optimizer dashboard viewed; recommendation card viewed/dismissed; automation results reviewed. Without first_automation_activated, time-to-value measurement was impossible — the 31-day activation window had been defined as the critical threshold, but there was no way to measure who was hitting it.
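
As a sketch of the deliverable, the tier structure can be expressed as a tracking plan and diffed against whatever is currently instrumented. Tier 1 names below are the identifiers from the gap analysis; Tier 2 and 3 identifiers are assumed snake_case renderings of the features described, not confirmed names:

```python
# Tracking plan for the missing events, keyed by rollout priority.
# Representative events only; the full plan spans 28 events.
TRACKING_PLAN = {
    1: [  # automation funnel (12 events)
        "first_automation_activated",   # time-to-value: the single most important event
        "rule_configuration_started",
        "rule_configuration_abandoned",
        "automate_assignment_viewed",
        "rule_deployed",
        "rule_activated",
        "rule_removed",
        "rule_disabled",                # churn signal
    ],
    2: [  # advanced features (8 events) -- assumed identifiers
        "strategic_objective_config_started",
        "strategic_objective_config_completed",
        "day_parting_created",
        "day_parting_applied",
        "mass_campaign_created",
    ],
    3: [  # engagement (6 events) -- assumed identifiers
        "scale_optimizer_dashboard_viewed",
        "recommendation_card_viewed",
        "recommendation_card_dismissed",
        "automation_results_reviewed",
    ],
}

def missing_events(instrumented: set[str]) -> dict[int, list[str]]:
    """Return still-missing events per tier, given what's live today."""
    return {tier: [e for e in events if e not in instrumented]
            for tier, events in TRACKING_PLAN.items()}
```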
Step 6 — Phased Implementation Roadmap
Three-phase roadmap with investment and ROI projections. Phase 1 (4 weeks, $150K, 2.5x projected ROI): critical event implementation + goal-oriented onboarding A/B test (Growth / Profitability / Cost Control vs. current configuration-first flow). Phase 2 (8 weeks, $200K, 4x projected ROI): Automate Assignment discovery boost (surface the feature at day 7 for Slow Activators) + results attribution system (show users what their rules actually did). Phase 3 (12 weeks, $250K, 6x projected ROI): AI-suggested first rule — based on the user’s category, margin profile, and current ACOS — plus full analytics dashboard suite. Total investment: $600K. Expected annual revenue impact at completion: $2.5M+.
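
How the $2.5M+ figure reconciles: under the simple assumption that each phase's projected ROI multiple applies to its own investment, the three phases sum to roughly $2.7M, which the headline rounds down to “$2.5M+”. A sketch of the arithmetic:

```python
# Phase: (weeks, investment in $, projected ROI multiple)
phases = {
    "Phase 1: events + onboarding A/B test":        (4, 150_000, 2.5),
    "Phase 2: discovery boost + attribution":       (8, 200_000, 4.0),
    "Phase 3: AI-suggested first rule + dashboards": (12, 250_000, 6.0),
}

total_invest = sum(cost for _, cost, _ in phases.values())
total_return = sum(cost * roi for _, cost, roi in phases.values())

print(f"Total investment: ${total_invest:,}")              # $600,000
print(f"Projected return: ${total_return:,.0f}")           # $2,675,000 -> "$2.5M+"
print(f"Blended ROI: {total_return / total_invest:.2f}x")  # ~4.46x
```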

Where users were — and weren’t.

Feature discovery rates as a percentage of weekly active users, seven-day window. The gap between rule creation (4.47%) and Strategic Objective assignment (6.08%) shows the platform’s most powerful feature is more used than its entry-level feature — by users who found it.

Feature (discovery, % WAU)                                  % WAU   Total Events   Avg per User   Signal
Rule Assignment (core automation usage)                     9.36%   2,112          1.46x          Strong Reuse
Strategic Obj. Assign (most powerful feature)               6.08%   1,454          1.51x          10.1x Multiplier
Scale Optimizer Fix Now (recommendation acceptance)         3.13%   314            1.65x          Underutilised
Rule Creation (entry-level action)                          4.47%   320            1.27x          Low Repeat
Strategic Obj. Create (feature initiation)                  1.95%   144            1.27x          Hidden
Scale Optimizer Automate (automation from recommendation)   0.36%   18             1.50x          Rarely Found

Data from Oct 3–10, 2025. WAU = 3,289 users. Strategic Objectives all-time: 144 created, 1,454 assigned — 10.1x multiplier confirms power user value; 1.95% create rate confirms discovery is the bottleneck, not the feature.

After.

45%
Stalled Starter rate quantified for the first time — was a known problem, now a measured baseline to run experiments against
1.9x
Retention lift confirmed for users who activate within 31 days — the value of fixing activation is now a number, not a hypothesis
28
Missing analytics events specified across three priority tiers — time-to-value measurement now has a clear implementation path
10.1x
Strategic Objectives assignment multiplier — most valuable feature, discovered by only 2% of WAU; discovery is the entire problem
$2.5M+
Annual revenue impact from three-phase roadmap — based on activation improvement, discovery lift, and AI-assisted first-rule recommendation
$150K
Phase 1 investment — critical events + goal-oriented onboarding A/B test. Projected 2.5x ROI. Four weeks to data.

What you can do now.

Your retention advantage is confirmed and measurable. The 1.8–1.9x lift for users who activate within 31 days is statistically significant. The question is what fraction of your signups get there. With first_automation_activated instrumented, you know the answer in real time instead of estimating it from aggregate cohort data.
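
A minimal sketch of what instrumenting it looks like with Amplitude's Python SDK (the amplitude-analytics package). The event name is the one from the gap analysis; the property schema is an assumption, chosen to make time-to-value chartable:

```python
from amplitude import Amplitude, BaseEvent

client = Amplitude("YOUR_API_KEY")

def on_first_rule_activated(user_id: str, days_since_signup: int, rule_type: str) -> None:
    """Fire once, the moment a user's first automation rule goes live."""
    client.track(BaseEvent(
        event_type="first_automation_activated",
        user_id=user_id,
        event_properties={
            # Illustrative properties: enough to chart time-to-value
            # and slice activation by rule type.
            "days_since_signup": days_since_signup,
            "rule_type": rule_type,
            "within_31_day_window": days_since_signup <= 31,
        },
    ))
```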

The goal-oriented onboarding test is the highest-leverage experiment available. Perpetua’s model (select a goal, get a configured automation) addresses the primary failure mode for 45% of your signups without changing the underlying product. The A/B test setup is specified. The success metric is clear: does goal-oriented onboarding move rule creation from 38% closer to 65%?
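
Before launching, size the test. A standard two-proportion power calculation, assuming the 38% baseline and, say, a 45% minimum detectable rate in the goal-oriented arm (the 65% figure is the destination, not the test threshold); scipy is the only dependency:

```python
from scipy.stats import norm

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per arm for a two-sided two-proportion z-test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / (p2 - p1) ** 2) + 1

# Baseline 38% rule creation vs. a hypothetical 45% in the test arm.
print(n_per_arm(0.38, 0.45))  # -> 777 signups per arm at 80% power
```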

Strategic Objectives is your best feature and your biggest discovery problem. 1.95% create rate. 10.1x assignment multiplier for users who find it. The gap between those two numbers is the opportunity. Surface it at day 7 for Slow Activators. The intervention is simple; the impact on the 25% of your user base in that segment is not.
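
Operationally, the day-7 surface is a daily cohort selection. A sketch, assuming user records with hypothetical field names and a flag to avoid repeat nudges:

```python
from datetime import date, timedelta

def strategic_obj_nudge_cohort(users: list[dict], today: date) -> list[dict]:
    """Daily selection: users at least 7 days post-signup who have not
    yet created a Strategic Objective and haven't been nudged before."""
    cutoff = today - timedelta(days=7)
    return [u for u in users
            if u["signed_up_at"] <= cutoff
            and u["first_strategic_obj_at"] is None
            and not u["nudged"]]
```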

Jake McMahon
ProductQuant

10 years building growth systems for B2B SaaS companies at $1M–$50M ARR. BSc Behavioural Psychology, MSc Data Science. This engagement combined UX transcript analysis, competitive architecture comparison, JTBD synthesis from 3,500+ data points, analytics gap analysis, and a three-phase implementation roadmap — all anchored to the single question that mattered: what’s preventing the 45% from becoming the 15%?

What this looks like for your company

Activation Deep Dive.

Two weeks to map your activation funnel end-to-end, confirm where it breaks with data, identify your top three fixes ranked by impact, and agree on an activation definition tied to retention.

  • Full activation funnel mapped from signup to aha moment with completion rates at each step
  • Drop-off points confirmed with data — cohort breakdowns by plan, channel, and user type
  • Top 3 fixes ranked by revenue impact: quick wins separated from structural changes
  • Activation event defined and validated against 30-day retention; baseline established
$4,997 · 2 weeks
Right for you if
  • Activation rate below 40% or declining — users signing up but not reaching value
  • Multiple user types with radically different starting points, goals, and prior experience
  • Know people are dropping off but can’t pinpoint where in the funnel or why

You know your retention advantage is real. Can’t get users to it?

If your power users have a measurably better outcome than your average user, but most users never become power users, that’s an activation problem before it’s a product problem. An activation audit — segment analysis, event gap review, competitive architecture — typically takes 4–6 weeks. The conversation to scope it takes 15 minutes.