TL;DR

  • Most activation metrics are completion events, not value events — and completion does not predict retention. A user who finishes your onboarding checklist without experiencing the core product value will churn at the same rate as a user who skipped it entirely.
  • The Key Value Moment (KVM) is the specific behavioral event where a user first experiences your product's core value in a context that matches their job-to-be-done. This is the only event worth calling "activated."
  • Average B2B SaaS activation sits at 37.5% (Userpilot 2025), but the range runs from 5% in FinTech to 54.8% in AI/ML tools. The gap is almost entirely explained by how well the team has identified their KVM.
  • A 25% improvement in activation drives a 34% MRR increase within 12 months. No other single product lever produces that return at equivalent investment.
  • Identifying the correct KVM requires behavioral cohort analysis — not stakeholder opinion, not copy from the sales deck, not whatever the CEO thinks the Aha Moment is. The data tells you. You have to ask it.

Why Your Activation Rate Is a Vanity Metric

Most product teams can tell you their activation rate within seconds. Very few can tell you what that number actually measures.

Ask them what counts as "activated" and you get answers like: completed the onboarding checklist, uploaded their first file, invited a teammate, or connected an integration. These are reasonable-sounding events. They are also almost certainly the wrong events to measure.

The defining flaw in how most teams track activation: they measure completion of a process designed by the product team, not arrival at value experienced by the user.

The onboarding checklist was built to make users take actions the team wanted them to take. It was not built around the moment users stop evaluating the product and start depending on it. Those are different things. Sometimes they overlap. Often they do not.

An activation metric that does not predict retention is not a product metric. It is a UX metric dressed up as a growth metric.

Here is what exposes the problem: if your activation rate is 37.5% and your 30-day retention rate is not significantly higher for activated users than unactivated users, your activation definition is wrong. Full stop.

The activation metric should be the single strongest predictor of whether a user will still be paying in 90 days. If it is not, something else in the product is doing that predictive work — and you are measuring the wrong thing.
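The predictive test described above can be run directly on an analytics export. The sketch below is a minimal, stdlib-only illustration — the field names (`activated`, `retained_30d`) and the `min_lift` threshold are hypothetical and should be adapted to your own data model, not treated as a standard.

```python
# Sketch: does the current activation definition actually predict retention?
# Field names and the lift threshold are illustrative assumptions.

def validate_activation(users, min_lift=1.5):
    """Compare 30-day retention for activated vs. non-activated users.
    A working activation definition should show a large retention lift."""
    activated = [u for u in users if u["activated"]]
    not_activated = [u for u in users if not u["activated"]]
    r_act = sum(u["retained_30d"] for u in activated) / max(len(activated), 1)
    r_not = sum(u["retained_30d"] for u in not_activated) / max(len(not_activated), 1)
    lift = r_act / r_not if r_not else float("inf")
    return {
        "retained_if_activated": r_act,
        "retained_if_not": r_not,
        "lift": lift,
        "definition_ok": lift >= min_lift,
    }

# Tiny synthetic cohort for illustration only.
users = [
    {"activated": True,  "retained_30d": True},
    {"activated": True,  "retained_30d": True},
    {"activated": True,  "retained_30d": False},
    {"activated": False, "retained_30d": False},
    {"activated": False, "retained_30d": True},
    {"activated": False, "retained_30d": False},
    {"activated": False, "retained_30d": False},
]
result = validate_activation(users)
```

If `definition_ok` comes back false on real data, the definition — not the onboarding flow — is the thing to fix first.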

The Checklist Trap

The onboarding checklist became standard practice because it works for guiding new users through a product. It does not work as a proxy for value delivery.

A user can complete every item on your onboarding checklist — upload a file, set up their profile, connect their CRM, invite a colleague — and still have no idea what problem your product solves for them specifically. They completed a process. They did not experience value.

Completing an onboarding checklist is evidence that a user followed instructions. It is not evidence that the product solved anything for them.

This distinction sounds obvious when stated directly. It is systematically ignored in practice because checklists are easy to instrument, easy to track, and produce activation numbers that look like progress.

The Feature Milestone Confusion

The second common mistake is treating a feature milestone as activation. "Created their first report." "Generated their first export." "Ran their first analysis."

Feature milestones are better than checklist completion — they at least involve the product doing something. But they still miss the critical question: did the user experience value from that action in a context that matches why they bought the product?

A user who creates a report that no one reads has not activated. A user who creates a report that changes a business decision has activated.

The difference between a feature milestone and a Key Value Moment is whether value was experienced, not whether an action was performed.

That distinction is harder to instrument. It is also the only distinction that matters for predicting retention.

The Cost of Getting This Wrong

When activation is defined incorrectly, everything downstream is corrupted.

Your activation rate looks acceptable. Retention is underperforming. You spend time optimizing onboarding flows that do not change the fundamental problem. You A/B test copy changes on checklist items. You debate whether to add or remove a step. None of it moves retention because the problem is not the process — it is the definition of what you are trying to achieve.

The gap between a 37.5% average activation rate and the 54.8% rate seen in AI/ML tools is not explained by better onboarding checklists. It is explained by better product-value alignment and clearer KVM identification.

The teams performing at the top of the benchmark curve have, almost without exception, done the work to identify their actual Key Value Moment. The teams in the middle and bottom have not.

The KVM Framework: Defining the Right Activation Event

The Key Value Moment (KVM) is a specific, observable behavioral event that satisfies three conditions simultaneously. Get all three right and you have your activation metric. Miss any one and you are back to measuring the wrong thing.

Condition 1: Core Value Delivery

The event must represent the core value of your product — not a peripheral feature, not a setup step, not an admin action.

Core value is the thing your product does that no other combination of tools does as well. For a project management tool, that is probably not "created a project." It might be "completed a task that was blocked on a dependency — automatically notified and resolved." For a sales intelligence tool, that is not "uploaded their CRM contacts." It is probably "identified a contact previously unknown to the team and received an engagement signal within 24 hours."

If you cannot name the core value in one sentence without using the word "management," "analytics," or "platform," you have not found it yet.

Condition 2: Job-to-Be-Done Match

The event must occur in a context that matches the user's actual job-to-be-done — not a demo context, not a tutorial workflow, not a sandboxed environment.

This is where ICP segmentation matters enormously. A CFO using your expense software has a different job-to-be-done than an employee submitting expenses. The same product action — "submitted an expense report" — is not the KVM for both users. For the employee, maybe. For the CFO, the KVM is probably something like "reviewed and approved a batch of expense reports with one policy exception flagged automatically."

A single product can have multiple KVMs for different user segments. The mistake is assuming one universal activation event applies to all users.

The KVM is not what the user does in your product. It is what they accomplish in their job because of your product.

Condition 3: Predictive Validity

The event must demonstrably predict 30-day retention at a meaningfully higher rate than non-occurrence of the event. This is not optional. This is the test.

If users who hit your proposed KVM retain at 72% and users who do not retain at 68%, you have a weak signal. If the gap is 72% vs 34%, you have your KVM.

Predictive validity is what separates the KVM from correlation-as-coincidence. Every behavioral event correlates with retention at some level because active users do more things and are less likely to churn. The KVM is the event whose occurrence is strongly causally linked to value delivery — and whose absence predicts churn even for otherwise active users.

KVM vs. Aha Moment: A Critical Distinction

The Aha Moment concept — popularized by growth teams at Facebook, Slack, and Twitter — is often confused with the KVM. They are related but different.

Aha Moment
  • Definition: The moment a user realizes your product is valuable — a psychological event.
  • Where it lives: Inside the user's head.
  • How to find it: User research, qualitative interviews, session recordings.

Key Value Moment (KVM)
  • Definition: The observable behavioral event that correlates with the Aha Moment — a measurable event.
  • Where it lives: In your product analytics.
  • How to find it: Cohort analysis — which events separate retained users from churned users.
The Aha Moment is what you are trying to engineer. The KVM is how you measure whether you succeeded. You need both — the qualitative understanding of what "value felt like" and the quantitative behavioral proxy that tells you when it happened at scale.

The Aha Moment without the KVM is a design principle. The KVM without the Aha Moment is a metric without a theory. You need the theory to find the metric and the metric to track the theory at scale.

"Activation is the moment a user first experiences the core value of the product. Not the moment they complete setup. Not the moment they log in for the first time. The moment they experience what the product actually does for them."

— Reforge, "Activation Is the Most Misunderstood Metric in Product Growth"

How to Find Your KVM: The Behavioral Analysis Method

There is one reliable method for identifying the correct KVM: behavioral cohort analysis comparing users who retained against users who churned. Everything else — stakeholder opinion, analogy to other products, copying a competitor's onboarding flow — is guesswork.

The analysis follows four steps.

Step 1: Define your retention baseline. Choose a retention window that is meaningful for your product. For a monthly subscription B2B tool, 30-day and 90-day retention are standard. Pull two cohorts: users who retained through that window and users who did not.

Step 2: Extract all behavioral events for both cohorts in the first 7 days. Every event. Every feature touch. Every workflow. This is your raw signal pool. You are looking for events that are significantly more common in the retained cohort than the churned cohort — not just correlation, but meaningful separation.

Step 3: Test for predictive validity. For your top 5–10 candidate events, calculate retention rates by occurrence vs. non-occurrence. The strongest predictor with a clear theoretical link to core value is your KVM candidate.

Step 4: Validate with qualitative research. Interview 8–12 retained users. Ask them when they first felt the product was worth keeping. Map their answers to your KVM candidate. If the stories align — if users describe an experience that corresponds to your identified behavioral event — you have your KVM.

The KVM is never obvious before you run the analysis. Teams that skip the analysis and go with their best guess are measuring the wrong thing with high confidence.
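Steps 2 and 3 amount to ranking first-week events by how cleanly they separate the retained cohort from the churned one. A minimal stdlib sketch, assuming each user's first-week activity has already been reduced to a set of distinct event names (the event names below are hypothetical):

```python
# Sketch of Steps 2-3: rank candidate events by the gap between the share of
# retained users and the share of churned users who performed them.
from collections import Counter

def event_separation(retained_events, churned_events):
    """Inputs: one set of distinct first-week event names per user.
    Output: events sorted by retained-minus-churned occurrence gap."""
    def occurrence(user_event_sets):
        counts = Counter()
        for events in user_event_sets:
            counts.update(events)  # each user counted at most once per event
        n = len(user_event_sets)
        return {event: c / n for event, c in counts.items()}

    r_rate = occurrence(retained_events)
    c_rate = occurrence(churned_events)
    rows = []
    for event in set(r_rate) | set(c_rate):
        r, c = r_rate.get(event, 0.0), c_rate.get(event, 0.0)
        rows.append({"event": event, "retained": r, "churned": c, "gap": r - c})
    return sorted(rows, key=lambda row: row["gap"], reverse=True)

# Synthetic cohorts for illustration only.
retained = [{"login", "report_shared"}, {"login", "report_shared"},
            {"login", "report_shared", "export"}]
churned = [{"login"}, {"login", "export"}, {"login"}]
ranked = event_separation(retained, churned)
```

The top-ranked events are only KVM candidates; the occurrence-vs-non-occurrence retention test in Step 3 and the interviews in Step 4 still decide.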

The Measurement System: What to Track and What to Ignore

Once you have identified your KVM, the measurement system has three components.

Component 1: KVM Conversion Rate. The percentage of new users who reach the KVM within a defined window. The window matters — average time-to-value for B2B SaaS is 1 day 12 hours overall, but HR software averages 3 days 19 hours (Userpilot 2025). Your window should be calibrated to your product's complexity and typical workflow, not copied from a benchmark.

Component 2: Time-to-KVM. The median time from first login to KVM occurrence, segmented by acquisition source, user role, and company size. This tells you where friction lives — not just that friction exists, but which user segments are most affected and at what stage.

Component 3: KVM-to-Retention correlation. Updated monthly. Track whether the KVM's predictive power is holding as the user base evolves. Products change, user expectations change, and the event that predicted retention in Q1 may need refinement by Q3.
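Components 1 and 2 reduce to two numbers per cohort: conversion within the window and median time-to-KVM. A stdlib sketch, assuming a hypothetical `kvm_at` field holding hours from first login to first KVM occurrence (`None` if it never happened) and an illustrative 36-hour window:

```python
# Sketch of Components 1-2: KVM conversion rate within a window, and median
# time-to-KVM. Field names and the window are illustrative assumptions.
from statistics import median

def kvm_metrics(users, window_hours=36):
    """users: dicts with 'kvm_at' = hours after first login, or None."""
    hit = [u["kvm_at"] for u in users
           if u["kvm_at"] is not None and u["kvm_at"] <= window_hours]
    conversion = len(hit) / len(users) if users else 0.0
    time_to_kvm = median(hit) if hit else None
    return {"kvm_conversion": conversion, "median_time_to_kvm_h": time_to_kvm}

# Synthetic users: two inside the window, one too late, one never.
users = [{"kvm_at": 4}, {"kvm_at": 20}, {"kvm_at": 50}, {"kvm_at": None}]
metrics = kvm_metrics(users)
```

In practice you would run this per segment (see Component 2) rather than once over the whole user base.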

37.5%

Average B2B SaaS activation rate across 547 companies (Userpilot 2025). The range is 5% (FinTech) to 54.8% (AI/ML tools). That 50-point spread is almost entirely explained by KVM identification quality — not product quality.

What to ignore: everything that does not connect to value delivery. Session count. Page views. Feature breadth in the abstract. These are engagement metrics, not activation metrics. Activation is a binary: the user has experienced the core value of the product, or they have not.

DISCOVER Workshop

Find Your Product's Key Value Moment in One Day

The DISCOVER Workshop is a $2,500 one-time diagnostic that identifies your KVM through behavioral cohort analysis, maps the friction between first login and value delivery, and produces a prioritized intervention plan. Most teams leave with a redefined activation metric and a 90-day roadmap to move it.

What the Data Actually Shows About Activation

The activation benchmark data from Userpilot's 2025 study of 547 B2B SaaS companies is the most comprehensive public dataset on activation rates by vertical. The numbers tell a clear story.

The Benchmark Gap Is Not Random

AI/ML tools achieve 54.8% activation. FinTech products hit 5%. HR software averages 8.3%. That is not a product complexity story — FinTech and HR products are not inherently harder to use than AI tools.

It is a KVM identification story.

AI/ML tools by design have a very specific, demonstrable moment of value delivery: the first time the model produces an output the user could not have produced manually. That event is concrete, observable, and emotionally significant. Teams building AI tools tend to engineer directly toward that moment.

HR software, by contrast, tends to be built around process compliance — attendance tracking, payroll processing, benefits administration. The value is diffuse and organizational rather than immediate and individual. Teams building HR software rarely ask "when does a user first experience that this product is worth keeping?" They ask "when has the user completed setup?"

The 8.3% HR activation average is not evidence that HR software users are hard to activate. It is evidence that most HR SaaS teams have not identified a working KVM.

The MRR Consequence

A 25% improvement in activation rate drives a 34% MRR increase within 12 months, per Userpilot's Fairmarkit case study. That is the financial return on KVM identification at scale.

The mechanism is not complicated: higher activation means more users experience core value in the trial or early subscription period, means more users convert from trial to paid, means more users retain beyond month 3, means less replacement CAC required to hold ARR flat.

At $5M ARR, a 25% activation improvement compounds through reduced churn, higher expansion revenue, and improved trial-to-paid conversion. The 34% MRR figure is conservative for products where activation is currently measured incorrectly — because incorrectly-measured activation tends to cluster around 25–40% regardless of the actual rate, masking far more improvement potential.

The Time-to-Value Problem

Average time-to-value for B2B SaaS is 1 day 12 hours. HR software: 3 days 19 hours. This is a measurement proxy — the time between first login and the first occurrence of whatever the team has defined as the activation event.

The problem with time-to-value as a metric is that it inherits whatever is wrong with the activation definition. If activation is defined as "completed onboarding checklist," time-to-value is the time to finish the checklist. That number can be optimized by making the checklist shorter. It does not tell you anything about whether users experienced value faster.

Reducing time-to-checklist-completion is not the same as reducing time-to-value. Optimizing the wrong metric produces the wrong interventions.

+34%

MRR increase within 12 months from a 25% improvement in activation rate (Userpilot, Fairmarkit case study). No other single product lever produces this return at equivalent investment and timeline.

DISCOVER Workshop

What Does Your Actual Activation Rate Look Like?

Most teams do not know their real activation rate — they know their checklist completion rate. The DISCOVER Workshop identifies the gap, redefines the metric, and produces the behavioral analysis that shows where users are dropping off between first login and core value. One day. $2,500.

Common Mistakes Teams Make After Reading This

Mistake 1: Asking Stakeholders to Define the KVM

The most common response to "we need to redefine our activation metric" is to schedule a meeting where product, marketing, and sales debate what the activation event should be. This produces the consensus activation event — the event that everyone can agree sounds right. It does not produce the correct activation event.

The KVM is found in data, not in meetings. Stakeholders have opinions about what the product is supposed to do. Users have data about what the product actually does for them.

The correct activation event will sometimes surprise you. If it does not surprise at least someone in the room, you probably did not run the analysis rigorously enough.

Mistake 2: Using an Overly Broad Definition

"Activated" should not mean "logged in more than once." Or "used the product for more than 5 minutes." Or "returned after the first session." These definitions conflate engagement with activation. Active users are not the same as activated users.

An activated user has experienced the specific core value of the product. An active user has simply returned. The distinction is everything when it comes to retention prediction.

Broad definitions produce high activation rates and explain nothing about why some users retain and others do not.

Mistake 3: Ignoring Segment Differences

A single activation event defined across all users obscures the most important signal: which user segments are activating and which are not.

If your overall activation rate is 37% but your enterprise segment activates at 68% and your SMB segment at 14%, you do not have an activation problem. You have an SMB onboarding problem. The intervention is completely different.

Always segment your KVM conversion rate by company size, acquisition channel, user role, and time-to-first-login. The aggregate number hides the actionable signal.
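Segmenting is a one-liner in most analytics tools; the stdlib sketch below shows the shape of the computation. The segment key and values are hypothetical — swap in company size, acquisition channel, or role from your own schema:

```python
# Sketch: per-segment KVM conversion, so the aggregate cannot hide the signal.
# Segment names and the 'hit_kvm' field are illustrative assumptions.
from collections import defaultdict

def kvm_by_segment(users, key):
    buckets = defaultdict(lambda: [0, 0])  # segment -> [hit_kvm_count, total]
    for u in users:
        buckets[u[key]][1] += 1
        buckets[u[key]][0] += 1 if u["hit_kvm"] else 0
    return {seg: hits / total for seg, (hits, total) in buckets.items()}

# Synthetic users for illustration only.
users = [
    {"segment": "enterprise", "hit_kvm": True},
    {"segment": "enterprise", "hit_kvm": True},
    {"segment": "smb", "hit_kvm": False},
    {"segment": "smb", "hit_kvm": False},
    {"segment": "smb", "hit_kvm": True},
]
rates = kvm_by_segment(users, "segment")
```

A wide spread between segments points the intervention at a specific onboarding path rather than at "activation" in the abstract.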

Mistake 4: Tracking the Wrong Event With High Conviction

The worst version of this problem is not ignorance but false certainty: a clear, well-instrumented activation event that everyone believes in, but that does not actually predict retention.

If you have an activation metric and you have not validated its predictive relationship to 30-day and 90-day retention in the last 6 months, you do not know if it is working. Run the validation. It takes 2–3 hours with any product analytics tool. The cost of not running it is months of optimizing the wrong thing.

Mistake 5: Optimizing the Onboarding Flow Before Fixing the Definition

Onboarding optimization is the right intervention — after you have the right activation definition.

Running onboarding experiments against a broken activation metric produces confident conclusions about what does and does not work, all of which are based on the wrong signal. The experiment shows that a shorter checklist increases checklist completion. It tells you nothing about whether value delivery improved.

Fix the definition first. Then the onboarding experiments have something real to measure against.

FAQ

What is the difference between an activation metric and a retention metric?

Retention measures whether a user stays. Activation measures whether a user reached the point where staying was likely — the Key Value Moment. Activation is a leading indicator of retention. The relationship should be measurable: users who activate should retain at significantly higher rates than users who do not. If that relationship does not hold for your current activation definition, the definition is wrong.

How long does it take to identify the correct KVM?

With adequate behavioral data — at least 3–6 months of event history and at least 200 churned and 200 retained users in comparable cohorts — the quantitative analysis takes 2–3 days. Validation interviews add another week. Most teams do not have adequate data the first time they try this — which is itself diagnostic information about their analytics setup.

Can a product have multiple KVMs?

Yes, and most B2B products with multiple user roles should expect this. The KVM for an admin user setting up permissions is not the same as the KVM for a frontline user completing daily work. Define KVMs at the segment level — by role, by company size, by use case — not at the aggregate product level. Track each separately. Aggregate activation rate is a summary; segmented KVM conversion rates are the actionable inputs.

What if my product does not have enough users to run cohort analysis?

Under 100 retained users and 100 churned users, quantitative cohort analysis produces unreliable results. In that case, qualitative research dominates: interview every churned user you can reach and every retained user willing to talk. Ask directly when they first felt the product was worth keeping. Map the answers to behavioral events. You are building a hypothesis, not validating one — and that is appropriate for your stage.

How often should we revalidate our KVM?

At minimum, every 6 months — or whenever there is a significant product change, a new acquisition channel, or a new pricing model. The KVM is not a permanent feature of a product; it is a hypothesis about where value currently lives. Products evolve. User expectations evolve. The analysis that produced a strong KVM in year one may need to be rerun entirely by year three.

Is the DISCOVER Workshop right for early-stage products?

The workshop is best suited to products with at least 6 months of user data and enough volume for cohort analysis to be reliable. Earlier-stage products benefit more from structured qualitative research — which is a different engagement format. If you are unsure, the right first step is a 30-minute conversation about your current data state before committing to either format.

Sources

  • Userpilot, 2025 activation benchmark study (547 B2B SaaS companies)
  • Userpilot, Fairmarkit case study
  • Reforge, "Activation Is the Most Misunderstood Metric in Product Growth"

About the Author

Jake McMahon is the founder of ProductQuant, based in Tbilisi, Georgia. He works with mid-market B2B SaaS teams on activation architecture, product analytics design, and PLG diagnostics. He holds a Master's in Behavioural Psychology and Big Data, and brings 8+ years of B2B SaaS experience to the question of why users do and do not find value in software products. His work focuses on the operational side of product growth — not the theoretical, but the measurable.

DISCOVER Workshop

Redefine Your Activation Metric in One Day

The DISCOVER Workshop is a $2,500 one-time diagnostic for B2B SaaS product teams. We identify your Key Value Moment through behavioral cohort analysis, map the gap between first login and value delivery, and produce a prioritized 90-day intervention plan. The output is a redefined activation metric you can trust — and a roadmap to move it.