TL;DR

  • The Sean Ellis test asks one question: "How would you feel if you could no longer use this product?" with three options: "Very disappointed," "Somewhat disappointed," "Not disappointed." If 40%+ of active users say "very disappointed," it's a positive PMF indicator.
  • The test is useful as one signal among many. It's simple, scalable, and correlates with retention. But it measures declared preference (what users say) not revealed behavior (what users do). Users who say they'd be very disappointed still churn.
  • The 40% threshold is a heuristic, not a law. 39% doesn't mean "no PMF" and 41% doesn't mean "PMF." The threshold is a guideline that works across many products, but your specific segment may have a different threshold.
  • Who you survey matters more than the question itself. Survey active users (people who used the product in the last 30 days). Don't survey churned users (they've already decided). Don't survey trial users who never activated (they never experienced the value).
  • Segment the results. A 40% "very disappointed" rate across all users could mean 80% for your ICP and 10% for everyone else. The average hides the signal. Segment by plan, company size, signup source, and role.
  • Combine with retention curves. Trust the revealed behavior. The Sean Ellis test tells you what users say. Retention curves tell you what they do. When both signals align, you have confidence. When they diverge, trust the retention curve.

The Survey: One Question, Three Answers

The survey is deceptively simple:

Question: "How would you feel if you could no longer use [product]?"

Options: Very disappointed · Somewhat disappointed · Not disappointed

The rule: If 40%+ of respondents select "Very disappointed," you have a positive PMF indicator. This threshold was derived from Sean Ellis's analysis of hundreds of startups — companies that went on to scale successfully had 40%+ of active users saying "very disappointed," while companies that struggled had less.

That's it. One question. Three options. One threshold. The simplicity is its strength — you can deploy it in minutes with any survey tool and get a result the same day.

For the complete 4-signal PMF framework that puts this test in context, see our full PMF guide.

How to Run the Survey

The 5-step process for running a reliable PMF survey.

Step 1: Define Your Population

Who to survey: Active users — people who have used your product in the last 30 days. Specifically, users who have completed at least one core workflow and have had meaningful engagement with the product.

Who NOT to survey:

  • Churned users (they've already decided your product isn't for them)
  • Trial users who never activated (they never experienced the value)
  • Users who signed up more than 90 days ago and haven't used the product in the last 60 days (they've silently churned)
  • Internal team members and beta testers (their context is different from real users)

Sample size: Aim for 40 to 100 responses minimum. Below 40, a few outliers can swing your PMF score by 10+ percentage points. If your user base is small, survey everyone who qualifies. For larger bases, a random sample of active users works as long as you hit the minimum.
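The population rules above can be sketched in a few lines. A minimal sketch in Python; the user fields (`last_active`, `completed_core_workflow`, `is_internal`) are hypothetical names, not from any particular analytics tool:

```python
from datetime import datetime, timedelta
import random

def eligible_for_survey(user, now):
    """True if the user belongs in the PMF survey population."""
    if user["is_internal"]:
        return False  # internal team members and beta testers
    if not user["completed_core_workflow"]:
        return False  # never activated, never experienced the value
    if (now - user["last_active"]).days > 30:
        return False  # not an active user
    return True

def sample_population(users, now, maximum=100):
    """Survey everyone if the eligible pool is small; otherwise sample randomly."""
    pool = [u for u in users if eligible_for_survey(u, now)]
    if len(pool) <= maximum:
        return pool
    return random.sample(pool, maximum)
```

If the eligible pool is under the 40-response floor, survey everyone who qualifies and treat the result as directional rather than conclusive.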

Step 2: Choose Your Survey Tool

Any survey tool works: Typeform, SurveyMonkey, PostHog Surveys, Intercom, or a simple email with reply options. The tool doesn't matter; the population and timing do. Deliver the survey in-context to active users, not buried in a quarterly NPS survey.

Step 3: Set the Timing

Send the survey at a moment when the user has recently experienced value:

  • After they complete a core workflow
  • After they generate a report
  • After they've used the product for 14+ days

Don't send it: Immediately after signup (they haven't experienced the value). After an error or outage (they're frustrated). During onboarding (they're still learning).
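These timing rules reduce to a simple eligibility check. A sketch with hypothetical user fields (`signup_date`, `completed_core_workflow_at`, `last_error_at`, `already_surveyed`); adjust the windows to fit your product's rhythm:

```python
from datetime import datetime, timedelta

def should_send_survey(user, now):
    """Decide whether this is a good moment to show the PMF survey."""
    if user["already_surveyed"]:
        return False
    if now - user["signup_date"] < timedelta(days=14):
        return False  # still onboarding, hasn't experienced the value
    if user["last_error_at"] and now - user["last_error_at"] < timedelta(days=2):
        return False  # recently frustrated by an error or outage
    # Send shortly after they last experienced value (completed a core workflow)
    finished = user["completed_core_workflow_at"]
    return finished is not None and now - finished < timedelta(days=3)
```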

Step 4: Segment the Results

This is the step most teams skip. Break down the results by:

  • Plan type: Do paid users say "very disappointed" at a higher rate than free users? (They should.)
  • Company size: Do enterprise users say it at a higher rate than SMB users? (This tells you which segment has PMF.)
  • Signup source: Do organic signups say it at a higher rate than paid ad signups? (If yes, your ICP is organic, not paid.)
  • Tenure: Do 6-month users say it at a higher rate than 1-month users? (If yes, the value compounds over time.)
  • Role: Does fit vary by role? Your product might resonate with product managers but not engineers. Segment by role to find your true audience.

Segmentation is critical. Do not just calculate one overall PMF score. You may discover that your overall PMF score is 30%, but it's 55% among product managers at mid-market SaaS companies. That segment is your ICP.
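Once responses are structured, per-segment scores take only a few lines. A stdlib-only sketch; the response fields (`answer`, `role`) are illustrative:

```python
from collections import defaultdict

def pmf_score_by_segment(responses, segment_key):
    """Per-segment 'very disappointed' rate from survey responses.

    `responses` is a list of dicts, e.g. {"answer": "very", "role": "pm"}.
    Returns {segment: fraction answering 'very disappointed'}.
    """
    totals = defaultdict(int)
    very = defaultdict(int)
    for r in responses:
        seg = r[segment_key]
        totals[seg] += 1
        if r["answer"] == "very":
            very[seg] += 1
    return {seg: very[seg] / totals[seg] for seg in totals}
```

Run it once per dimension (plan, company size, signup source, tenure, role) and compare segments against the 40% guideline rather than the blended average.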

Step 5: Compare Over Time

Run the survey quarterly, not once. The trend matters more than the single number:

  • 25% → 35% → 42%: You're approaching PMF. Keep building.
  • 45% → 40% → 35%: You're losing PMF. Something changed.
  • 38% → 42% → 41%: You have PMF. It's stable.

A company at 35% that was at 25% 6 months ago is healthier than a company at 42% that was at 50% 6 months ago. The trend tells you whether you're building something customers increasingly need.
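The trend patterns above can be classified mechanically. A rough sketch; the 5-point delta cutoff is an assumption for illustration, not part of the framework:

```python
def pmf_trend(scores, threshold=0.40):
    """Classify a quarterly series of PMF scores (fractions, oldest first)."""
    if len(scores) < 2:
        return "insufficient data"
    delta = scores[-1] - scores[0]
    if delta <= -0.05:
        return "losing PMF"
    if delta >= 0.05:
        return "improving"
    return "stable PMF" if scores[-1] >= threshold else "stable, below threshold"
```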

What the 40% Threshold Actually Means

The 40% rule comes from Sean Ellis's analysis of hundreds of startups. He found that companies that went on to scale successfully had 40%+ of active users saying "very disappointed." Companies that struggled had less than 40%.

It's correlation, not causation. Hitting 40% doesn't cause growth. It correlates with the underlying pattern of users who find your product essential.

The threshold is a heuristic. 39% doesn't mean "no PMF" and 41% doesn't mean "PMF." The 40% threshold works across many products as a rough guide. Your specific segment may have a different threshold.
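One way to see why 39% vs. 41% is meaningless: at typical response counts, sampling error dwarfs the difference. A stdlib sketch using the Wilson score interval; with 12 of 30 respondents saying "very disappointed" (40%), the 95% interval spans roughly 25% to 58%:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for the 'very disappointed' proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half, center + half)
```

More responses narrow the interval, which is why the 40-response floor matters: a score a point or two either side of 40% is noise, not a verdict.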

The Sean Ellis question measures dependency, not satisfaction. That distinction matters. Satisfied users may still leave if a better option appears. Dependent users cannot, because your product has become essential to how they work. This is why the question asks about loss ("if you could no longer use") rather than preference ("do you like"). Loss aversion is a stronger emotional signal than preference, and it produces more honest answers.

42% of startups fail because they build something nobody wants, according to CB Insights research. The product-market fit survey is the simplest way to find out before it's too late. If 40%+ of active users say "very disappointed," you have a strong PMF signal. Below that threshold, you are building on hope instead of evidence.

Why the Sean Ellis Test Is Not Enough

Critical constraints: why the Sean Ellis test cannot be your only PMF signal.

The Sean Ellis test is Signal 2 of 4 in the complete PMF framework. Here's why it can't stand alone:

It Measures Declared Preference, Not Revealed Behavior

Users say they'd be very disappointed. Then their renewal comes up and they churn. The gap between what users say and what they do is the PMF gap.

The fix: Combine the survey with retention data. If 40% say "very disappointed" AND your retention curve is flat, you have PMF. If 40% say "very disappointed" but your retention curve decays to zero, users are lying to you (or to themselves).
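The cross-check can be made explicit. A sketch; the plateau test here (last three months of the cohort curve within 3 points of each other, above zero) is an illustrative heuristic, not a standard definition:

```python
def pmf_verdict(very_disappointed_rate, retention_curve):
    """Cross-check declared preference against revealed behavior.

    `retention_curve` is a list of cohort retention fractions by month.
    """
    survey_positive = very_disappointed_rate >= 0.40
    tail = retention_curve[-3:]
    plateaued = min(tail) > 0 and (max(tail) - min(tail)) < 0.03
    if survey_positive and plateaued:
        return "PMF: both signals align"
    if survey_positive:
        return "PMF gap: users say they depend on you but behavior decays"
    if plateaued:
        return "retained but not loved: investigate the survey result"
    return "no PMF signal"
```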

It Averages Across Segments

Your overall rate is 38%. Sounds like you're close to PMF. But Segment A is at 72% and Segment B is at 12%. You have strong PMF for Segment A and no PMF for Segment B. The average tells you nothing.

The fix: Segment everything. Run the survey separately for each meaningful segment. Act on the segment-level data, not the aggregate. For how to distinguish real PMF signs from false signals, see our signs guide.

It's Static

One survey tells you the current state. It doesn't tell you whether you're improving or declining. A single data point is a snapshot; a trend is a story.

The fix: Run the survey quarterly. Track the trend. A product moving from 25% to 35% to 42% is healthier than a product stuck at 40% for four quarters.

The 4-Signal Framework

The Sean Ellis test is one signal of four. Here's the complete framework:

Signal           What It Measures        How to Measure                     Threshold
1. Retention     Revealed behavior       Cohort retention curves            Flat at 20%+
2. Sean Ellis    Declared preference     "Very disappointed" rate           40%+
3. Expansion     Financial commitment    NRR                                110%+
4. Pricing       Value confidence        Churn rate after price increase    <5%

When all 4 signals align, you have PMF. When 2 or fewer align, you don't — regardless of what the Sean Ellis test says. For the complete PMF framework with all 4 signals, see our full guide.
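The table reduces to four threshold checks. A sketch, with all inputs as fractions (e.g. NRR of 110% = 1.10):

```python
def evaluate_pmf(retention_plateau, very_disappointed_rate, nrr, price_increase_churn):
    """Check the 4 PMF signals against their thresholds; returns (signals, count aligned)."""
    signals = {
        "retention": retention_plateau >= 0.20,        # curve flat at 20%+
        "sean_ellis": very_disappointed_rate >= 0.40,  # 40%+ "very disappointed"
        "expansion": nrr >= 1.10,                      # NRR 110%+
        "pricing": price_increase_churn < 0.05,        # <5% churn after a price increase
    }
    return signals, sum(signals.values())
```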

Beyond the Core Question: Follow-Up Questions That Reveal More

The Sean Ellis question tells you whether you have PMF. The questions below tell you why, for whom, and what to do next. Each question reveals a different dimension of your product-market fit.

Value Proposition Questions

"What is the primary benefit you get from [product]?" This is the most important open-ended follow-up. The words your "very disappointed" users choose to describe your benefit should become your marketing copy. If they consistently say "saves me 3 hours a week on reporting," that is your value proposition.

"What would you use instead if [product] did not exist?" The most direct competitive intelligence question. Pay close attention to "Nothing" or "I would go back to doing it manually." These responses indicate you have created a new category.

Target User Questions

"What is your role?" PMF often varies dramatically by role. Your product might have strong fit with product managers but weak fit with engineers. Segment your PMF score by role to find your true audience.

"How long have you been using [product]?" PMF typically strengthens with usage duration. If it doesn't, your product may have an engagement ceiling. Track how the PMF score changes across tenure cohorts.

Improvement Questions

"What is the main thing you would improve about [product]?" The "main thing" constraint forces prioritization. General "what would you improve?" questions produce scattered wish lists. This question produces a ranked priority list when you aggregate responses.

"What does [product] do better than the alternative?" Your competitive advantages as perceived by users, not as assumed by your team. These are the reasons people switched to you and the reasons they stay. Protect these advantages aggressively.

FAQ

How many responses do I need?

Minimum 30 responses per segment. Below 30, the margin of error is too large to trust the 40% threshold. If you have fewer than 30 active users in a segment, you need more users before the survey is meaningful. Aim for 40–100 total responses for a reliable overall score.

Should I ask follow-up questions?

Yes. After the main question, ask: "What would you use as an alternative?" This reveals your competitive landscape from the customer's perspective. If they say "nothing" or "spreadsheet," you have PMF. If they name 3 competitors, you're one of many options. Add "What is the primary benefit you get?" to capture language for your positioning.

How often should I run the survey?

Quarterly. Monthly is too frequent (results don't change that fast). Annually is too infrequent (you'll miss trends). Quarterly gives you 4 data points per year — enough to see a trend without survey fatigue.

Can I use the Sean Ellis test for a new product with no users?

No. You need at least 30 active users per segment to get a meaningful result. Before that, use qualitative interviews (switch interviews) to understand the job your product serves.

What if my Sean Ellis score is high but retention is low?

This is the PMF gap — users say they'd be disappointed but their behavior says otherwise. Trust the retention curve. The survey measures declared preference; retention measures revealed behavior. When they diverge, behavior wins. Focus on understanding why users who claim dependency still leave.

About the Author

Jake McMahon builds growth infrastructure for B2B SaaS companies — analytics, experimentation, and predictive modeling that turns product data into revenue decisions. He has combined Sean Ellis surveys with retention analysis across multiple engagements to build comprehensive PMF evidence briefs for investor conversations. Book a diagnostic call to discuss your PMF trajectory.

Next Step

Get the PMF Validation Program

We build your PMF evidence brief from your own data: Sean Ellis surveys, retention cohorts, JTBD documentation, and expansion signals. Structured for investor conversations.