TL;DR
- Claiming PMF and evidencing it are structurally different activities. Investors at Series A are looking for specificity that can be checked — not assertions that cannot be falsified.
- A PMF Evidence Brief has five components: segment definition, retention evidence, jobs-to-be-done (JTBD) documentation, competitive differentiation evidence, and expansion signal.
- Imperfect data is acceptable if you annotate it honestly. A cohort of 20–30 ideal customer profile (ICP)-fit customers is more useful than an aggregate number from a mixed base.
- The process of building the brief surfaces gaps — research you have not done, cohorts you have not built — which is itself a forcing function for the product and research work that comes next.
The difference between claiming PMF and evidencing it
Most founding teams believe they have PMF. The belief is usually sincere. It is based on customers who are enthusiastic, a rising NPS, a sales cycle that feels shorter than it used to, and a set of anecdotes from customer calls that confirm the value proposition is landing. None of this is fabricated. All of it is real. And none of it is PMF evidence.
Claiming PMF means citing signals that are consistent with fit but do not prove it. An NPS of 55 is consistent with PMF. So is 25% annual growth. So is "our customers love us." These claims are unfalsifiable — there is no data that could disprove them, because they are not specific enough to be wrong.
Evidencing PMF means showing data that is specific enough to be checked. Which segment retains at what rate? What is the distribution of jobs-to-be-done across your coded call corpus? Where do you win and lose competitively, evidenced from deal data rather than team recall? Evidence has a defined scope. It names the segment. It shows the cohort. It reports the frequency count, not the anecdote.
The difference matters because investors who fund on claims — rather than evidence — are making a bet on the team's self-assessment rather than on observable data. The ones who have been doing this longest have learned that self-assessment is the least reliable input in the diligence process, and they structure their conversations accordingly.
A complete PMF Evidence Brief has five components: segment definition, retention evidence, JTBD documentation, competitive differentiation evidence, and expansion signal. Each is specific enough to be checked by an investor during due diligence.
What investors are actually evaluating
PMF is not a binary state. It does not switch from absent to present at a specific revenue threshold or NPS score. It is a spectrum, and investors at different stages are evaluating different positions on that spectrum.
At Series A, the evaluation typically focuses on three things:
- A cohort that retains at a threshold you can build a company on. The specific threshold varies by investor and category, but the principle is consistent: is there a segment of customers that stays long enough for the unit economics to work? Not the average customer — the best cohort.
- A segment definition narrow enough to be actionable for the next 18 months. A definition that encompasses "mid-market B2B SaaS" is not actionable. A definition that describes which mid-market SaaS companies, at which stage, with which team structure, solving which specific operational problem, is actionable. "Narrow enough" means specific enough to build a targeted acquisition motion around.
- Evidence that the value proposition is defensible. Not that you have features — every product has features. That customers chose you over specific alternatives for reasons that are replicable, and that those reasons are not trivially copyable by a well-funded competitor.
What investors do not expect at Series A is perfection. They expect honesty about where fit is strong and where it is weak. A brief that says "our ICP cohort retains at X% but our non-ICP customers churn at twice that rate and we are actively narrowing the ICP" is more credible than a brief that presents aggregate numbers and asserts that everything is working.
The five components of a PMF Evidence Brief
Each component addresses a different dimension of PMF. Together they form a picture that is specific enough to evaluate and honest enough to survive due diligence.
Segment definition
The segment definition characterises your ICP along two axes: firmographics (company size, industry, role of the primary user and economic buyer) and behavioural characteristics (how they use the product, what job-to-be-done they are primarily solving, what trigger prompted them to look for a solution). The definition must be narrow enough to be falsifiable — meaning you can say, for any given prospect, whether they are in or out of the segment. "B2B SaaS companies" is not a segment definition. "B2B SaaS companies between 20 and 150 employees, post-product-market-fit, with a dedicated sales team and a CS-to-AE ratio above 0.5, whose primary job is reducing time-to-value in the first 30 days of onboarding" is a definition you can test a prospect against.
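To make the in-or-out test concrete, here is a minimal sketch of the example definition above expressed as a predicate. The `Prospect` fields and thresholds are hypothetical illustrations of that definition, not a prescribed schema; the point is only that a falsifiable segment definition can be written as a function that returns true or false for any prospect.

```python
from dataclasses import dataclass

@dataclass
class Prospect:
    # Firmographic fields (hypothetical; substitute your own criteria)
    employees: int
    post_pmf: bool
    has_dedicated_sales_team: bool
    cs_headcount: int
    ae_headcount: int
    # Behavioural field: the primary job-to-be-done this prospect is solving
    primary_jtbd: str

def in_icp(p: Prospect) -> bool:
    """True if the prospect matches the example ICP definition above.
    The point of a falsifiable definition is that this function can be
    written at all: every prospect is decidably in or out."""
    return (
        20 <= p.employees <= 150
        and p.post_pmf
        and p.has_dedicated_sales_team
        and p.ae_headcount > 0  # guard the ratio below
        and p.cs_headcount / p.ae_headcount > 0.5
        and p.primary_jtbd == "reduce time-to-value in first 30 days of onboarding"
    )

prospect = Prospect(employees=80, post_pmf=True, has_dedicated_sales_team=True,
                    cs_headcount=4, ae_headcount=6,
                    primary_jtbd="reduce time-to-value in first 30 days of onboarding")
print(in_icp(prospect))  # True: in segment
```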
Retention evidence
Retention evidence means cohort data at 30, 60, and 90 days, segmented by ICP fit. The overall churn rate is less useful than the churn rate for your best segment, because the overall number is a mix of your ICP customers (who may retain very well) and off-ICP customers (who may churn at a high rate). If you do not yet have segment-level retention data, explain the proxy you are using — for example, a manual classification of your existing customer base into strong-fit and weak-fit cohorts, even if done retrospectively.
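A minimal sketch of what segment-level cohort retention can look like in code, assuming a simple export of signup and last-active dates per customer. The records and the "active at N days" simplification are illustrative; a real analysis would also exclude customers whose Nth day has not yet arrived.

```python
from datetime import date

# Illustrative records: (customer_id, icp_fit, signup_date, last_active_date).
# In practice these come from a billing or product-analytics export.
customers = [
    ("c1", True,  date(2024, 1, 10), date(2024, 6, 1)),
    ("c2", True,  date(2024, 2, 3),  date(2024, 2, 20)),
    ("c3", False, date(2024, 1, 15), date(2024, 2, 1)),
]

def retention_by_segment(customers, windows=(30, 60, 90)):
    """Share of each segment still active N days after signup.
    'Still active' is simplified here to last_active >= signup + N days."""
    results = {}
    for icp_fit in (True, False):
        cohort = [c for c in customers if c[1] == icp_fit]
        if not cohort:
            continue
        results["ICP" if icp_fit else "non-ICP"] = {
            n: sum((last - signup).days >= n
                   for _, _, signup, last in cohort) / len(cohort)
            for n in windows
        }
    return results

for segment, rates in retention_by_segment(customers).items():
    print(segment, {f"{n}d": f"{rate:.0%}" for n, rate in rates.items()})
```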
JTBD documentation
The three to five dominant jobs-to-be-done, sourced from coded sales calls or structured customer research rather than team recall. Include frequency counts or the percentage of customers for whom each job is primary. The goal is to show that the JTBD mapping is empirical — derived from a systematically coded corpus rather than from what the product team believes customers care about. The frequency distribution matters as much as the list itself: for example, a JTBD that appears in 65% of coded calls is a different planning input than one that appears in 18% — these are illustrative figures; the actual distribution depends on your call sample.
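Once the calls are coded, the frequency map itself is a small piece of analysis. A minimal sketch, assuming one primary JTBD code per call; the codes and counts below are made up, and yours come from your own coding pass.

```python
from collections import Counter

# Illustrative coded corpus: one primary JTBD code per call.
coded_calls = [
    "reduce-onboarding-time", "reduce-onboarding-time", "consolidate-tooling",
    "reduce-onboarding-time", "audit-readiness", "consolidate-tooling",
]

counts = Counter(coded_calls)
total = len(coded_calls)
for jtbd, n in counts.most_common():
    print(f"{jtbd}: {n} calls ({n / total:.0%} of corpus)")
```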
Competitive differentiation evidence
Where you win and why, evidenced from deal data rather than positioning documents. This means win/loss analysis: the rate at which specific competitors appeared in sales calls, which features were cited in competitive wins, which objections were raised when you lost. A competitive differentiation section that says "we win on ease of use and customer support" without data to support it is an assertion. One that says "in the deals where Competitor X appeared in the evaluation, we won at a majority rate, and the primary reason cited in win calls was the configuration-free setup" is evidence.
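A sketch of that computation from raw deal records, assuming a simple per-deal log of which competitor surfaced in the evaluation, the outcome, and the reason cited. All names and reasons below are placeholders.

```python
from collections import defaultdict

# Illustrative deal records: (competitor_in_evaluation, won, reason_cited).
deals = [
    ("CompetitorX", True,  "configuration-free setup"),
    ("CompetitorX", True,  "configuration-free setup"),
    ("CompetitorX", False, "price"),
    ("CompetitorY", False, "missing integration"),
]

def win_loss_by_competitor(deals):
    """Win rate and most-cited win reason per competitor that appeared."""
    by_comp = defaultdict(list)
    for comp, won, reason in deals:
        by_comp[comp].append((won, reason))
    summary = {}
    for comp, outcomes in by_comp.items():
        win_reasons = [reason for won, reason in outcomes if won]
        summary[comp] = {
            "deals": len(outcomes),
            "win_rate": len(win_reasons) / len(outcomes),
            "top_win_reason": (max(set(win_reasons), key=win_reasons.count)
                               if win_reasons else None),
        }
    return summary

print(win_loss_by_competitor(deals))
```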
Expansion signal
Any evidence that value compounds over time: seat expansion, feature upsell, referrals from retained customers, NPS promoters who have actively referred new accounts. Even one of these at a small scale signals that the value proposition is working well enough to generate word of mouth or expand within existing accounts — both of which are early indicators of a repeatable growth motion. The absence of expansion signal does not disqualify a PMF claim, but its presence strengthens it considerably.
How to build each component with imperfect data
Most pre-Series A companies do not have clean segment-level retention data, a systematically coded call corpus, or a formal win/loss programme. The PMF Evidence Brief does not require any of these to be perfect. It requires honesty about their limitations and specificity about what the data actually shows.
Acceptable proxies for each component:
- Retention with a small sample. A cohort of 20–30 customers in the target segment is typically enough to show a retention pattern that is directionally meaningful. Label the sample size explicitly. If the cohort is small, note it and explain what you would expect the pattern to look like with a larger sample.
- JTBD from a small call corpus. Coding 20+ sales calls provides enough data to identify which jobs appear in a majority of conversations and which are rare. Fewer than 20 calls is directional at best — present the findings as preliminary hypotheses rather than confirmed frequency distributions.
- Competitive data from mention frequency. If you do not have a formal win/loss analysis, explicit competitor mentions in 10+ sales calls constitute evidence of a competitive pattern. Tally the mentions, note the context, and present it as the available evidence rather than as a comprehensive competitive analysis; a minimal tally sketch follows this list.
- Expansion signal at small scale. A handful of referrals or a seat expansion in the past quarter does not confirm a growth motion — but it is worth reporting as an early indicator, with appropriate framing about sample size and the causal interpretation.
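For the mention-frequency proxy, the tally can be as simple as a keyword pass over call notes or transcripts. A minimal sketch, with placeholder competitor names and call notes; keyword matching misses paraphrases ("the other tool"), so treat the counts as a floor, not a census.

```python
# Placeholder names and notes; in practice, iterate over your transcript
# or CRM note export.
competitors = ["CompetitorX", "CompetitorY"]

calls = {
    "call-001": "They asked how we compare to CompetitorX on setup time.",
    "call-002": "No alternative mentioned; inbound from a referral.",
    "call-003": "Currently evaluating CompetitorX and CompetitorY side by side.",
}

mentions = {
    comp: [cid for cid, text in calls.items() if comp.lower() in text.lower()]
    for comp in competitors
}
for comp, hits in mentions.items():
    print(f"{comp}: mentioned in {len(hits)} of {len(calls)} calls {hits}")
```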
The principle throughout is specificity over completeness. A brief that presents small but specific data — "17 of our 22 ICP-fit customers have been active for more than 90 days, compared to 8 of the 18 customers we now consider off-ICP" — is more useful to an investor than a brief that presents aggregate numbers at scale but provides no segmentation.
Common mistakes that signal weak PMF to investors
Some presentation choices signal, to an experienced investor, that the team either does not have PMF evidence or does not understand the difference between evidence and assertion. These are not fatal in isolation, but they change the tone of the conversation that follows.
| Mistake | What it signals to investors | Better alternative |
|---|---|---|
| Citing overall churn rate without segmentation | The team does not distinguish between ICP and non-ICP customers, or the ICP cohort does not retain better than the aggregate | Present churn for the ICP segment specifically, with the non-ICP segment shown separately for context |
| Citing aggregate NPS as primary PMF evidence | The team is using a sentiment measure as a substitute for a retention measure | Use NPS as a supplementary signal; lead with cohort retention data for the ICP segment |
| "Our customers love us" without supporting data | The claim is unfalsifiable and signals that the team is working from anecdote rather than evidence | Cite the specific signals: expansion rate, referral rate, retention cohort, NPS by segment — and let the data speak |
| Growth rate without retention cohort | Revenue growth can mask a leaky bucket — investors will ask about retention, and not having a cohort view ready signals the team has not looked at it | Present growth and retention together; explain how the cohort view reinforces (or contextualises) the revenue trajectory |
| ICP defined as a broad vertical | The ICP is not specific enough to be actionable, which suggests the team has not done the segmentation work to identify where fit is strongest | Define the ICP with both firmographic and behavioural characteristics, narrow enough that any given prospect can be assessed as in or out |
Data-Driven PMF Validation
In the PMF Validation cohort, you build the evidence base for your product's fit against real sales call data and usage data. You leave with a coded call corpus, a JTBD frequency map, and a PMF Evidence Brief your team can put in front of investors.
The brief as a living document
A PMF Evidence Brief is not a static fundraising artefact. It should be updated quarterly as the underlying data changes — new retention cohorts complete their 90-day window, additional calls are coded, win/loss data accumulates, and the segment definition is refined based on which customers are expanding and which are churning.
The process of building the brief has a secondary value that is separate from its use in fundraising. Trying to evidence PMF reveals gaps. You discover that you have never run a segmented cohort analysis. That your JTBD documentation is based on what the founding team believes rather than what a coded call corpus shows. That your competitive win/loss data exists only in the memory of the sales team. These gaps are not embarrassments — they are the research agenda for the next quarter. The brief is a forcing function for the work that closes them.
Teams that update the brief quarterly also find that it becomes a useful internal alignment tool. Disagreements about ICP definition, about which JTBD should anchor the product roadmap, and about where the competitive moat actually sits — all of these become more tractable when there is a shared evidence base to refer to, rather than competing intuitions from different functions.
The goal is not a document that makes a persuasive case. It is a document that makes an accurate one — and is updated often enough that it stays accurate as the company evolves.
Frequently asked questions
How much retention data do I need before building a PMF Evidence Brief?
There is no minimum that makes a brief valid or invalid — the brief is built with whatever data you have, with honest annotation of sample sizes and confidence levels. A cohort of 20 to 30 customers in your target segment is typically enough to show a retention pattern that is directionally meaningful, even if not statistically conclusive. The key is to segment by ICP fit before reading the retention data. An aggregate churn number from 100 mixed customers is less useful than a cohort retention curve from 25 customers who match your ICP definition.
What's the minimum sample size for the JTBD frequency analysis?
Patterns in jobs-to-be-done data typically begin to stabilise after coding 20 to 30 calls. With fewer than 20 coded calls, the frequency counts are directional at best — useful for forming hypotheses but not for making confident claims about which JTBD is dominant. At 40 to 50 coded calls, the top two or three jobs are usually clear enough to present with confidence. If you have fewer than 20 calls available, acknowledge the sample size explicitly and present the JTBD findings as preliminary rather than as established frequency distributions.
Is a strong NPS score sufficient evidence of PMF?
NPS alone is not sufficient evidence of PMF, for two reasons. First, NPS is a sentiment measure, not a retention measure — a customer can score you a 9 and still churn if the value they expected does not materialise. Second, aggregate NPS scores mask segment-level variation. A score of 45 across all customers can conceal a situation where your ICP segment scores 70 while a poor-fit segment scores 20. Use NPS as a supplementary signal alongside retention cohort data, not as a primary PMF indicator.
How do I present PMF evidence if my retention data is mixed?
Mixed retention data is best presented as a segmented picture rather than an aggregate. If your overall retention is moderate but your ICP segment retains strongly, lead with the ICP cohort data and be explicit that the aggregate is lower due to off-ICP customers in the base. Investors who fund at Series A understand that mixed retention often reflects ICP definition work in progress rather than a broken product — the question is whether the strong cohort is large enough and clear enough to build a company on. What damages credibility is presenting aggregate numbers without segmentation, which looks like either ignorance of the problem or an attempt to obscure it.
Build the PMF evidence base your investors are actually asking for.
The PMF Validation cohort applies the sales call coding methodology to your real call corpus and produces a JTBD frequency map, segmented retention analysis, and a PMF Evidence Brief your team can act on.