Activation

Activation Is Not Onboarding: How to Find Your First Activation Event

Onboarding completion is an input. Activation is an outcome. Most teams spend quarters polishing the input and wonder why the outcome never moves. Here is the framework for fixing that.

Jake McMahon · 24 min read · Published March 29, 2026

TL;DR

  • The Input/Outcome Distinction: Onboarding completion tells you whether users finished your checklist. Activation tells you whether they found real value. These are not the same metric.
  • The Four FAE Criteria: A valid First Activation Event must represent value delivered, happen early, predict retention in cohort data, and be operationalizable.
  • Finding Yours: Export retained vs. churned cohorts, compare their day-7 behavior, and find the event most overrepresented in retained users.
  • FAE by Product Type: Single-player, multi-player, marketplace, and vertical SaaS products each have distinct activation patterns and stealable FAE templates.
  • The Dashboard Problem: New users landing on a random feature screen have no orientation. A purpose-built Home dashboard is one of the highest-ROI activation investments.
  • Experiments That Move the Metric: Progressive disclosure, behavioral triggers (not time-based), and shortening the path to the FAE outperform UI polish consistently.

1. Onboarding Completion Is an Input. Activation Is an Outcome.

Here is a situation that plays out in product reviews every quarter: activation rate is flat, so the team decides to redesign onboarding. They spend six weeks rebuilding the checklist, improving the copy, adding tooltips, and shortening the flow. They ship it. Onboarding completion goes up by 18 percentage points. Activation rate moves by two.

The problem is definitional. The team was measuring — and optimizing — the wrong thing.

Onboarding completion is an input metric. It tells you how many users finished the sequence of steps you designed for them. It says nothing about whether those steps directed users toward the behavior that actually delivers value. A user can click through every tooltip, complete every checklist item, and watch your product walkthrough video from beginning to end — and still churn by day 14, having never done the one thing that makes your product worth keeping.

Activation is an outcome metric. It measures whether a user reached a moment of genuine, felt value. It is the behavioral threshold that separates users who stick from users who leave. The distinction sounds subtle. The downstream implications are not.

Most teams ship a new onboarding. The best teams design for activation.

When you optimize onboarding completion without first knowing your activation event, you are tuning a process that may have low correlation with the outcome you care about. You are making the pipe cleaner while leaving the destination undefined. The users who were going to retain will still retain. The users who were going to churn will now churn slightly faster, having been walked more efficiently toward a dead end.

This is not a problem with execution. It is a problem with the question being asked. The right question is not "how do we get more users to finish onboarding?" It is "what is the specific behavior that predicts retention, and how do we design the entire first-run experience to get users there as fast as possible?"

That specific behavior is your First Activation Event. Everything else is scaffolding.

2. Why Polishing the Onboarding UI Doesn't Move Activation

The standard response to a flat activation rate is a UX audit: make the flow shorter, improve the empty states, add progress indicators, rewrite the welcome email. These are not bad investments. But they consistently underperform against the expected return — and understanding why reveals the structural problem.

The Guided Tour Problem

Most onboarding flows are designed around the product's information architecture, not the user's job-to-be-done. They walk users through the product's features in roughly the order the product team thinks of them. This means the FAE — the action that actually delivers value — might be at step 8 of a 10-step checklist, buried behind profile setup, team invitations, notification preferences, and integration configuration.

A user who drops out at step 4 has not failed at onboarding. You have failed to put value on the path to step 4. Improving the visual design of step 4 does not change this.

Completion Optimizes for Compliance, Not Value

When you optimize for onboarding completion rate, you implicitly optimize for users who are willing to follow instructions. These users will complete your checklist. But compliance and activation are different behavioral signals. A user who ticks boxes because your progress bar is satisfying to fill is not the same as a user who has genuinely connected with what your product does.

According to Userpilot's 2024 Product Metrics Benchmark Report — drawing on data from 62 SaaS companies — the average activation rate across SaaS is 37.5%. That means roughly two thirds of users who sign up do not reach the activation threshold, regardless of how polished the onboarding experience is. The ceiling on UI-driven improvement is lower than most teams assume.

The Cohort Misread

There is a subtler problem: teams often measure onboarding completion against all signups, but the users most likely to complete onboarding are also the users most likely to retain regardless — because they are more motivated, more senior in their organization, or more directly experiencing the pain your product solves. When you improve completion rate by making the flow easier, you capture marginally less-motivated users into your completion cohort. Their retention profile is worse. Your activation-to-retention conversion drops. The metric improves; the outcome does not.

"Aha moments are great for storytelling. But you can't build a roadmap on Aha moments alone. You need one measurable event that represents value delivered, happens early, predicts retention, and can be operationalized."

— Jake McMahon, ProductQuant

This is the core case for the First Activation Event framework. Rather than designing experiences around a qualitative feeling — the "Aha moment" — you need to find a single, measurable behavioral event that you can instrument, target, and run experiments against. The feeling may be real. But you cannot put a feeling in a PostHog funnel.

3. What a First Activation Event Actually Is: The Four Criteria

Not every behavior that correlates with retention qualifies as a First Activation Event. A useful FAE must satisfy all four of the following criteria simultaneously. If it fails on any one of them, it is not yet operationalizable — you may be able to measure it, but you cannot reliably design your product to produce it.

  1. It represents value delivered — not process completed

    The FAE is not "completed profile setup" or "clicked through product tour." It is the first moment the user received something from your product, not the first moment they gave something to it. The behavioral signal is fundamentally different: value-delivery events tend to look like outputs (a report generated, a connection established, a task completed), while process events look like inputs (a field filled, a step acknowledged, a button clicked in sequence).

  2. It happens early — within the first session or first week

    An FAE that reliably predicts 90-day retention but only occurs on day 21 for the median new user is not operationalizable as an activation lever. The purpose of the FAE is to set a design target: get users to this event as fast as possible. If the event itself is structurally delayed, the design intervention becomes "shorten time to day 21," which is a much harder problem than "get this event to happen in session one."

  3. It predicts retention in your cohort data

    This is the empirical test. You must be able to show that users who performed this event in the first 7 days retained at a meaningfully higher rate than users who did not. "Meaningfully higher" is context-dependent — but if the difference is less than 15 percentage points at 90 days, the signal is too weak to build a strategy around. You need a behavioral wedge, not a marginal correlation.

  4. It can be operationalized

    The event must be instrumentable in your analytics stack, reachable through product design decisions, and possible to use as a targeting condition. If users who "had an insightful conversation with your AI" retain better — but you cannot instrument "insightful conversation" — you have identified an activation signal, not an activation event. The FAE must be atomic, measurable, and replicable.

When all four criteria are met, the FAE becomes the organizing principle for your entire new user experience. Every onboarding decision should be evaluated against one question: does this move more users to the FAE, or does it not?
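Criterion 3 — the empirical retention test — reduces to a simple lift computation over your two cohorts. A minimal sketch of that check, using a toy dataset (the event names, field names, and cohort values here are illustrative, not from any real product):

```python
def retention_lift(users, event):
    """Difference in 90-day retention (percentage points) between users
    who fired `event` in their first 7 days and users who did not."""
    did = [u for u in users if event in u["day7_events"]]
    did_not = [u for u in users if event not in u["day7_events"]]
    if not did or not did_not:
        return None  # no comparison possible

    def pct_retained(group):
        return 100.0 * sum(u["retained_90d"] for u in group) / len(group)

    return pct_retained(did) - pct_retained(did_not)

# Toy cohort of 20 users: "first_report_created" is a behavioral wedge,
# "profile_completed" is common usage with no predictive power.
users = (
    [{"day7_events": {"first_report_created", "profile_completed"}, "retained_90d": True}] * 7
    + [{"day7_events": {"profile_completed"}, "retained_90d": True}] * 1
    + [{"day7_events": {"profile_completed"}, "retained_90d": False}] * 8
    + [{"day7_events": set(), "retained_90d": True}] * 2
    + [{"day7_events": set(), "retained_90d": False}] * 2
)

lift = retention_lift(users, "first_report_created")
# A lift of at least ~15 percentage points is the bar from criterion 3;
# "profile_completed" shows no lift at all in this toy data.
```

In this toy data, report creation clears the 15-point bar comfortably while profile completion shows zero lift — exactly the input/outcome split the four criteria are designed to surface.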

37.5%

Average SaaS activation rate across 62 companies (Userpilot 2024 Benchmark Report). Rates range from 5% in FinTech to 54.8% in AI/ML — a tenfold spread driven largely by product-type and value-delivery architecture, not UI quality.

4. How to Find Your First Activation Event

The methodology for finding your FAE is grounded in behavioral cohort analysis. It does not require a data science team. It requires access to your event data and the ability to segment users by retention outcome. Here is the four-step process.

Step 1: Build Two Cohorts

Export two cohorts of users from your analytics tool (PostHog, Amplitude, Mixpanel — the platform does not matter, the segmentation logic does):

  • Retained cohort: users who were still active 90 days after signup and completed at least 3 sessions after day 7
  • Churned cohort: users who logged in at least once but did not return after day 14

Make these cohorts as clean as possible. Exclude users on enterprise trials that were managed by a sales rep. Exclude users who signed up during a promotional push that attracted low-intent signups. You want users who experienced your product under normal conditions. For statistical reliability, aim for at least 100 users in each cohort.
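In code, the two cohort definitions above reduce to predicates over a user's session timestamps. A minimal sketch, assuming a simple export shape with `signup` and `sessions` fields (these names are illustrative, not a specific analytics export format):

```python
from datetime import datetime, timedelta

def classify(user):
    """Return 'retained', 'churned', or None (excluded from both cohorts).

    Retained: still active at day 90 AND >= 3 sessions after day 7.
    Churned:  logged in at least once, but never returned after day 14.
    """
    signup = user["signup"]
    sessions = user["sessions"]
    if not sessions:
        return None  # never logged in: out of scope for both cohorts
    after_day7 = [s for s in sessions if s > signup + timedelta(days=7)]
    active_at_90 = any(s >= signup + timedelta(days=90) for s in sessions)
    if active_at_90 and len(after_day7) >= 3:
        return "retained"
    if all(s <= signup + timedelta(days=14) for s in sessions):
        return "churned"
    return None  # in-between users: excluded to keep the cohorts clean

t0 = datetime(2026, 1, 1)
retained_user = {"signup": t0, "sessions": [t0, t0 + timedelta(days=10),
                                            t0 + timedelta(days=40),
                                            t0 + timedelta(days=95)]}
churned_user = {"signup": t0, "sessions": [t0, t0 + timedelta(days=2)]}
```

Note the deliberate `None` bucket: users who fall between the two definitions are excluded rather than forced into either cohort, which is what keeps the comparison clean.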

Step 2: Map First-7-Day Behavior

For each cohort, list every event those users performed in their first 7 days. Do not filter in advance — include everything: page views, feature interactions, settings changes, integration connections, in-app messages sent, reports generated. The goal at this step is completeness, not insight. You are building the raw comparison set.

```sql
-- PostHog HogQL: day-7 event comparison by retention cohort.
-- Assumes each event row carries the user's signup_date and a boolean
-- retained flag (e.g. joined in from person properties beforehand).
SELECT
    event,
    countIf(retained = true)  AS retained_users,
    countIf(retained = false) AS churned_users,
    round(
        countIf(retained = true) * 100.0
            / (countIf(retained = true) + countIf(retained = false)),
        1
    ) AS pct_of_event_users_retained
FROM events
WHERE timestamp >= signup_date
  AND timestamp < signup_date + INTERVAL 7 DAY
GROUP BY event
ORDER BY pct_of_event_users_retained DESC
```

Step 3: Run the Correlation

Sort the event list by the ratio of retained users who performed each event versus churned users who performed it. You are looking for events where the retained cohort over-indexes significantly. An event performed by 70% of retained users but only 15% of churned users is a strong candidate. An event performed by 40% of both cohorts is not predictive — it is just common usage.

At this stage you will typically find a cluster of 3-5 candidates. These might be: "created first report," "connected first data source," "sent first message to a contact," "generated first export." The correlation tells you which behaviors retained users share. Your judgment tells you which of those behaviors is plausibly causal rather than just co-occurring with motivation.
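The sorting logic in Step 3 can be sketched as a ratio ranking over per-cohort event shares. A minimal version (the event names and the `min_ratio` cutoff of 2.0 are illustrative assumptions):

```python
def rank_candidates(retained_events, churned_events, min_ratio=2.0):
    """Rank events by how strongly the retained cohort over-indexes on them.

    retained_events / churned_events: dicts mapping event name ->
    fraction of that cohort which fired the event in days 0-7.
    """
    ranked = []
    for event, r_share in retained_events.items():
        c_share = churned_events.get(event, 0.0)
        ratio = r_share / c_share if c_share > 0 else float("inf")
        if ratio >= min_ratio:
            ranked.append((event, r_share, c_share, ratio))
    return sorted(ranked, key=lambda row: row[3], reverse=True)

candidates = rank_candidates(
    retained_events={"created_first_report": 0.70, "viewed_settings": 0.40,
                     "connected_data_source": 0.55},
    churned_events={"created_first_report": 0.15, "viewed_settings": 0.40,
                    "connected_data_source": 0.10},
)
# "viewed_settings" (0.40 vs 0.40) is just common usage, not a predictor,
# so the min_ratio filter drops it from the candidate list.
```

The filter encodes the distinction drawn above: an event both cohorts perform at similar rates is common usage, not a predictor, no matter how frequent it is.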

Step 4: Validate and Stress-Test

Before committing to an FAE, run three stress tests:

  • The selection bias check: Is this event correlated with users who were more senior, more motivated, or better-fit to begin with? If yes, you may be measuring user quality, not value delivery. Rerun the analysis controlling for ICP-fit signals.
  • The earliness test: What percentage of the retained cohort completed this event in session one? In session two? By day 3? If fewer than 40% completed it in the first week, the event may be a lagging indicator of engagement rather than a trigger of it.
  • The operationalizability test: Can you design a single experiment — a change to your product, onboarding flow, or in-app messaging — that would predictably move more users to this event in their first session? If you cannot describe that experiment clearly, the event is not yet usable as a design target.

Once an event passes all four criteria and all three stress tests, you have your FAE. Instrument it as a milestone event, build a funnel from signup to FAE completion, and treat that funnel as your primary activation diagnostic.
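The funnel diagnostic described above is a step-wise conversion computation. A minimal sketch (the step names and counts are illustrative; the FAE milestone here is a hypothetical `fae_first_report_created` event):

```python
def funnel_report(step_counts):
    """step_counts: ordered (step_name, users_reaching_step) pairs.
    Returns (name, count, conversion-from-previous-step) rows, so the
    biggest pre-FAE drop-off is easy to locate."""
    report = []
    prev = None
    for name, count in step_counts:
        conv = round(100.0 * count / prev, 1) if prev else 100.0
        report.append((name, count, conv))
        prev = count
    return report

report = funnel_report([
    ("signup", 1000),
    ("reached_home_dashboard", 920),
    ("started_first_report", 480),
    ("fae_first_report_created", 310),  # the instrumented FAE milestone
])
# The step with the lowest conversion is where activation work should focus.
```

Run against real data, the weakest step in this funnel is the first target for the experiments in section 7.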

5. FAE Patterns by Product Type

While every FAE must be discovered in your own data, there are structural patterns in what activation looks like across different product architectures. These templates are starting hypotheses — candidates to test first, not conclusions to accept.

Single-Player SaaS (one user, self-serve value)

  • Activation pattern: Value delivered to the individual within session one. No network dependency.
  • Stealable FAE candidates: First output created (report, document, analysis, export). First "useful result" from the product's core function.
  • Common Anti-FAE: Profile completion. Integration setup. Notification preferences. These precede value but are not value.

Multi-Player SaaS (collaboration, team tools)

  • Activation pattern: Network activation — value emerges when more than one person uses the product together. The individual's FAE often involves inviting or connecting with another user.
  • Stealable FAE candidates: First colleague invited AND accepted. First comment or collaboration action received. First shared artifact viewed by a second user.
  • Common Anti-FAE: Sending an invitation (without acceptance). Viewing the product solo without any collaboration action.

Marketplace SaaS (two-sided, supply and demand)

  • Activation pattern: Activation is side-dependent. Buyers activate on first matched result or first transaction. Sellers activate on first inbound inquiry or first sale.
  • Stealable FAE candidates: Buyer — first relevant result returned; first booking or purchase completed. Seller — first listing published and viewed; first inquiry received.
  • Common Anti-FAE: Account creation with no listing or search. Profile completion without a transaction action.

Vertical SaaS (industry-specific, workflow-embedded)

  • Activation pattern: Activation is tied to the industry's core workflow. Value is often felt when the product replaces a step the user was doing manually or in a different tool.
  • Stealable FAE candidates: First workflow completed end-to-end in the product. First record processed using the product's core function (e.g., first claim submitted, first patient form completed, first invoice sent).
  • Common Anti-FAE: Data import without a subsequent workflow action. Feature exploration without completing a real business task.

The Anti-FAE Problem

The Anti-FAE entries above deserve elaboration. An Anti-FAE is a step that teams commonly optimize as if it were an activation event, even though it has weak or negative predictive power for retention. The pattern is consistent across product types: Anti-FAEs are almost always inputs the product asks of the user (fill this out, connect this, configure that) rather than outputs the product delivers to the user.

This is why the single most common finding when auditing an onboarding funnel is that the highest-friction step — the one with the biggest dropoff — is also an Anti-FAE: a step that is architecturally required but delivers no value to the user. The design intervention is obvious once you see it: remove the step entirely if possible, defer it until after the FAE if not.


6. The First Dashboard Problem

There is a structural activation failure that almost every SaaS product makes at some point, and it is not in the onboarding flow. It is in the destination.

A new user completes signup. They land in the product. Where do they end up?

In most products, the answer is: somewhere. A random feature screen. The most recently active view from the last user session (which is meaningless for a new user). An empty list of records they have not yet created. Occasionally, a generic dashboard that was built for power users and is therefore full of zeros, grayed-out modules, and chart types the user does not yet have data to populate.

None of these landing experiences orient the user. They all ask the user to already know what to do.

Most products get the first dashboard wrong. New users need orientation — a purpose-built Home dashboard, not a random feature screen.

A purpose-built Home dashboard for new users is not a product tutorial. It is not a checklist or a progress bar. It is a screen whose explicit job is to answer three questions for a user who has never been in your product before:

  1. Where am I? (What is this product for, and what is it showing me right now?)
  2. What should I do first? (What is the most important action available to me in this moment?)
  3. What will happen when I do it? (What value will I get, and how quickly?)

The correct answer to "what should I do first?" is almost always: the action that leads directly to the FAE. That is the structural connection between the Home dashboard design problem and the activation problem. The Home screen is the primary surface through which new users encounter the path to activation. If that surface does not point directly at the FAE, you are relying on users to find the value path by exploring an unfamiliar environment — which most will not do.

What a New-User Home Dashboard Should Contain

The answer depends on your FAE, but the structure is consistent across product types:

  • A single primary CTA pointing at the FAE, positioned in the top-left (F-pattern reading) or center-top of the screen. Not three CTAs. One.
  • Context about what the product is doing for the user right now — even if "right now" means showing default content, an example, or a prompt to provide the data the product needs.
  • No empty states without guidance. An empty chart is not a useful experience. An empty chart with a sentence explaining what it will show, and a link to the action that will populate it, is.
  • Progressive suppression of advanced features. The Home dashboard for a new user should show fewer modules than the Home dashboard for a user who has been active for 30 days. Exposing the full product surface on day one creates cognitive overload, not opportunity.

This is the operational connection between "Aha moment" thinking and FAE thinking. The Aha moment is the experience of value. The FAE is the measurable proxy for it. The Home dashboard is the primary design surface where you can accelerate the path between them. See also: PLG Onboarding Checklist for a complete audit of new-user first-run surfaces.

7. The Experiments That Actually Move Activation

Once you have identified your FAE and instrumented the signup-to-FAE funnel, the question becomes: which experiments have the highest expected return? The answer is not the same as the question "which experiments are easiest to run." Some of the highest-impact activation interventions require meaningful product work. Some of the easiest-to-run experiments (email copy changes, tooltip rewrites) consistently produce the smallest effects.

High-Impact Experiments

Shorten the path to the FAE. Map every step between signup and FAE completion. Identify steps that are required by the product but deliver no value to the user. Remove them if possible. Defer them to after the FAE if not. The goal is to reduce the number of decisions a new user must make before they get value. Each removed decision is a removal of a potential drop-off point.

Replace time-based triggers with behavioral triggers. Most onboarding email sequences are time-based: Day 1 welcome, Day 3 tip, Day 7 check-in. These sequences are optimized for average behavior, which means they are suboptimal for almost every individual user. A user who reached the FAE in session one does not need a Day 3 nudge. A user who has not logged in since signup needs a different message on Day 3 than one who has logged in twice. Replace time-based triggers with behavioral ones: send a message when the user's last action was X days ago and they have not yet completed the FAE.
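The behavioral-trigger condition is a small predicate rather than a calendar schedule. A sketch (the 3-day inactivity threshold and field names are illustrative assumptions):

```python
from datetime import datetime, timedelta

def should_send_nudge(user, now, inactivity_days=3):
    """Behavioral trigger: nudge only users who have stalled before the FAE.
    A time-based sequence would message everyone on day 3 regardless."""
    if user["fae_completed"]:
        return False  # already activated: no onboarding nudge needed
    return now - user["last_action_at"] >= timedelta(days=inactivity_days)

now = datetime(2026, 4, 1)
activated = {"fae_completed": True,  "last_action_at": now - timedelta(days=5)}
stalled   = {"fae_completed": False, "last_action_at": now - timedelta(days=4)}
engaged   = {"fae_completed": False, "last_action_at": now - timedelta(days=1)}
```

Only the stalled, pre-FAE user qualifies — the activated user and the recently active user are both left alone, which is precisely what a fixed Day 3 email cannot do.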

Progressive disclosure. New users do not need to see all of your product's capabilities. They need to see the path to value. Suppressing advanced features — showing them only once the user has completed the FAE — reduces the cognitive surface area of the first-run experience and increases the proportion of attention that lands on the primary activation path. This is one of the most consistently effective activation interventions across product types.
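Progressive disclosure is ultimately a visibility rule keyed on the FAE milestone. A minimal sketch (the module names and the `on_fae_path` flag are illustrative, not a specific framework's API):

```python
def visible_modules(user, all_modules):
    """Show the full product surface only after activation; before the
    FAE, expose only the modules on the path to it."""
    if user["fae_completed"]:
        return all_modules
    return [m for m in all_modules if m["on_fae_path"]]

modules = [
    {"name": "create_report",  "on_fae_path": True},   # leads to the FAE
    {"name": "connect_source", "on_fae_path": True},
    {"name": "api_keys",       "on_fae_path": False},  # advanced: suppressed pre-FAE
    {"name": "webhooks",       "on_fae_path": False},
]

new_user = {"fae_completed": False}
# A new user sees only the two modules on the activation path.
```

The same rule implements the "progressive suppression" point from the Home dashboard discussion: the pre-FAE surface is a strict subset of the post-FAE surface.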

In-app contextual prompts at the moment of highest confusion. Using session replay data or heatmaps, identify the points in the pre-FAE flow where users pause longest, click most erratically, or exit. These are orientation failures — moments where the user has lost the thread. A targeted in-app prompt at the point of confusion (not a generic tooltip tour) can recover users who would otherwise drop.

40%

Higher 30-day retention for products that deliver the activation moment within the first 5 minutes compared to products requiring 15+ minutes — a finding that underscores path-to-FAE length as a primary retention lever. (Source: product analytics benchmark research cited by multiple activation practitioners)

Lower-Impact Experiments (But Still Worth Running)

Onboarding copy rewrites. Clearer language about what a step achieves (not just what it is) improves completion. But copy operates at the margins — it helps compliant users understand faster; it rarely converts non-compliant users into compliant ones.

Social proof in the activation funnel. Showing "8,400 teams have connected their first data source" near a high-friction step provides normative pressure. Effect sizes are typically small but consistent.

Pre-filled templates and sample data. For products where the FAE involves creating something (a report, a workflow, a project), shipping with a pre-populated example lowers the blank-page barrier. Users can see what good looks like before they build their own version. This works especially well in single-player SaaS where the FAE is output creation.

Experiments to Stop Running

Adding steps to onboarding to "educate" users. Every step added to the pre-FAE flow is a potential exit. If users are leaving before reaching the FAE, the answer is rarely "add more context" — it is "remove more obstacles."

Gamification of onboarding completion. Progress bars, confetti, completion badges. These optimize for compliance, not value. Users who complete the checklist to earn the badge do not have the same behavioral profile as users who completed it because each step moved them toward something real.

For a full experimental framework, see The First 10 A/B Tests Every SaaS Team Should Run and Behavioural Psychology in Product Onboarding.

8. FAQ

What is a First Activation Event in SaaS?

A First Activation Event is the specific user behavior — a single, measurable action — that most reliably predicts whether a new user will become a retained customer. It must satisfy four criteria: represents value delivered (not just process completed), happens early (within the first session or first week), predicts long-term retention in your behavioral cohort data, and can be operationalized into product experiments and success metrics. It is distinct from the "Aha moment" — which describes a qualitative experience — because it is instrumentable, targetable, and experimentable.

What is the average SaaS activation rate?

According to Userpilot's 2024 Product Metrics Benchmark Report (data from 62 SaaS companies), the average user activation rate across SaaS is 37.5%. Rates vary significantly by industry: AI and Machine Learning products average 54.8%, CRM tools average 42.6%, and FinTech products average as low as 5.0%. The spread is driven primarily by how quickly a product type can deliver observable value in a new user session — not by UI quality or onboarding flow design.

What is the difference between onboarding completion and activation?

Onboarding completion measures whether a user finished a guided sequence of steps — it is a process input. Activation measures whether a user reached the moment of genuine value delivery — it is an outcome. A user can complete 100% of an onboarding checklist and still churn if that checklist did not direct them to the behavior that actually predicts retention. Optimizing onboarding completion without first identifying your activation event means optimizing an input that may have low correlation with the output that determines revenue.

Can a product have more than one First Activation Event?

Different user segments or personas within the same product can have different FAEs — and this is worth investigating. But for each segment, you should commit to one primary FAE for design and experiment targeting purposes. Multiple FAEs across the same funnel create measurement ambiguity: when activation improves, you cannot attribute it to a specific behavioral change. Start with one FAE per segment, validate it in data, and add segment-specific FAEs only when your instrumentation and team bandwidth support that complexity. See also: Persona Alignment Audit.

What should I do if I can't find a clear behavioral signal in my cohort data?

Three possible explanations: your analytics instrumentation is incomplete (events are missing or mislabeled), your retained and churned cohorts are too small for statistical reliability, or your product genuinely delivers value through a behavior you have not yet thought to track. Start with the instrumentation audit — Product Analytics Implementation Checklist provides a systematic framework. If instrumentation is complete and cohorts are large enough, the answer is usually the third option: run a qualitative session with 10-15 recently retained users and ask them to walk you through what they did in their first session. The behavior they describe is your FAE candidate to instrument and validate.


About the Author

Jake McMahon is a Product-Led Growth consultant with 8+ years in B2B SaaS. He holds a Master's in Behavioural Psychology and Big Data, and specializes in activation architecture, product analytics, and building growth operating systems for Series A–C companies. He has audited the activation funnels of healthcare, fintech, and vertical SaaS platforms across Australia, Europe, and MENA.