B2B CRM and sales engagement platform — ~$4M ARR, ~80–120 employees, Series B. The VP of Customer Success knew churn was too high. What they didn't know: the data to predict it was already there — it just wasn't being used.
The VP of Customer Success knew churn was a problem — roughly 18% of revenue churned every month. But the team had no way to see it coming: a churned account was discovered only when its cancellation email arrived, and by then the decision was already made.
What they didn't know: the product was generating usage data on every login, every feature interaction, every workflow run. That data contained clear leading indicators of churn — declining login frequency, skipped workflow steps, shrinking team size — but nobody had connected it to retention risk. The signal was there. The pipeline to interpret it was missing.
Worse: the support team spent 60% of their time on reactive churn management — scrambling to save accounts that were already in the cancellation queue. Without a risk scoring system, they couldn't prioritize. $2.1M in annual recurring revenue was walking out the door every year, and nobody knew which accounts to call next.
The team analyzed support ticket volume by account, assuming that more tickets correlated with higher churn risk. They built a dashboard showing ticket counts per customer.
The team deployed quarterly NPS surveys to measure customer sentiment, tracked score changes across accounts, and attempted outreach when scores dropped.
The VP of CS assigned team members to manually review the top 50 accounts each week, scoring health on a 1–5 scale in a shared spreadsheet.
Why it didn't work: All three approaches used the wrong data on the wrong time horizon. Support tickets and NPS are reactive signals — they arrive after the decision to cancel. Manual reviews can't scale beyond a tiny fraction of accounts. The product was generating real leading indicators every day, but nobody was listening to them.
Digging into the data revealed that the real problem was not the obvious one. The VP of Customer Success assumed they needed a better CRM workflow; the data told a different story.
PostHog was tracking millions of events per month — page views, feature interactions, workflow executions, API calls — but none of these events were aggregated into a health score or churn risk signal. The data was being collected and stored, but it was never analyzed for patterns that precede cancellation. The team had a rich dataset and no way to extract signal from it.
A historical analysis of 24 months of churned accounts revealed clear behavioral patterns: declining login frequency starting 45+ days before cancellation, shrinking active seat count, and reduced feature breadth. These patterns were consistent across 80% of churned accounts — and nobody had ever looked for them. The cancellation data lived in Stripe. The behavioral data lived in PostHog. They had never been connected.
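The Stripe-to-PostHog join described above can be sketched in a few lines. Everything here is an illustrative stand-in — the account IDs, dates, and the `login_trend` feature are hypothetical, not the team's actual schemas or pipeline:

```python
from datetime import date

# Illustrative join of Stripe churn labels with PostHog usage aggregates.
# All names and numbers are hypothetical stand-ins for the real data.

# Stripe side: account_id -> cancellation date (None = still active)
churn_labels = {
    "acct_1": date(2024, 3, 1),
    "acct_2": None,
}

# PostHog side: per-account login counts keyed by day
logins = {
    "acct_1": {date(2024, 1, 15): 9, date(2024, 2, 10): 3, date(2024, 2, 25): 1},
    "acct_2": {date(2024, 2, 1): 8, date(2024, 2, 20): 7},
}

def login_trend(account_id, as_of, window=30):
    """Logins in the last `window` days divided by the prior window.
    A ratio well below 1.0 is the declining-login-frequency signal."""
    recent = prior = 0
    for day, n in logins.get(account_id, {}).items():
        age = (as_of - day).days
        if 0 <= age < window:
            recent += n
        elif window <= age < 2 * window:
            prior += n
    return recent / prior if prior else None

def build_rows(as_of):
    """One labeled feature row per account: behavior from PostHog,
    churn label from Stripe -- the join that had never been made."""
    return [{
        "account_id": acct,
        "login_trend": login_trend(acct, as_of),
        "churned": cancelled is not None and cancelled <= as_of,
    } for acct, cancelled in churn_labels.items()]

rows = build_rows(date(2024, 3, 5))
```

In a real pipeline this aggregation would run over the full event stream, with seat count and feature breadth as additional columns, and feed the model-training step.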
The CS team's weekly spreadsheet reviews covered fewer than 50 of the 800+ active accounts. The scoring criteria were subjective — there was no repeatable definition of a healthy account, and two CS reps scoring the same account would regularly assign different scores. And by the time a score was entered, the usage data behind it was already 5–10 days stale.
A four-phase intervention that transformed raw usage data into a daily churn prediction pipeline with automated retention playbooks.
Three-Tier Intervention Playbook
Before vs After metrics with quantified revenue impact.
We were tracking millions of data points every month and using none of them to predict churn. The model showed us which usage patterns led to cancellation 45 days before it happened. That window changed everything about how we allocate CS resources.
The most valuable churn data is already in your product. You just aren't reading it. This team had millions of PostHog events, a Stripe subscription history with clear churn labels, and two years of data — but the signals were never connected. 12 behavioral features predicted churn with 87% accuracy. All of them came from product usage data they were already collecting. The tooling was not the gap. The pipeline from data to decision was.
No more manual spreadsheets or subjective scoring. Every account gets a data-driven risk score based on real usage patterns. The top 20 highest-risk accounts arrive in Slack every morning.
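As a rough sketch of that daily ranking step — the scores, account IDs, and message format below are assumptions, not the production system:

```python
import json

# Hypothetical sketch of the morning digest: rank accounts by model
# score, keep the top N, and build a Slack webhook payload.

def top_risk_digest(scores, n=20):
    """scores: {account_id: churn risk in [0, 1]} -> Slack message dict."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]
    lines = [f"{i + 1}. {acct}: risk {risk:.0%}"
             for i, (acct, risk) in enumerate(ranked)]
    return {"text": "Top churn-risk accounts today:\n" + "\n".join(lines)}

payload = top_risk_digest({"acct_9": 0.91, "acct_4": 0.35, "acct_7": 0.78}, n=2)
body = json.dumps(payload)  # what would be POSTed to a Slack incoming webhook
```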
Generic churn indicators from blog posts won't match your product. We find the specific usage patterns that lead to cancellation in your dataset — because every product has different churn signals.
Low-risk accounts get automated education. Medium-risk accounts get CS outreach. High-risk accounts get executive attention. Every tier has playbooks, triggers, and timelines.
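A minimal sketch of that tier routing, with illustrative thresholds and playbook labels rather than the actual production values:

```python
# Hypothetical tier routing: thresholds and playbook names are
# illustrative assumptions, not the real configuration.
def route(risk_score):
    """Map a churn-risk score in [0, 1] to a retention playbook tier."""
    if risk_score >= 0.7:
        return "high", "executive outreach within 24h"
    if risk_score >= 0.4:
        return "medium", "CS rep outreach this week"
    return "low", "automated education sequence"

tier, playbook = route(0.82)
```

Keeping the thresholds in one place like this makes the triggers auditable — every account's tier can be traced back to a single score and a single rule.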
10 years building analytics and growth systems for B2B SaaS at $1M–$50M ARR. BSc Behavioural Psychology, MSc Data Science. The most common analytics gap isn't bad data — it's missing data: events never instrumented, properties never attached, funnels never connected. Finding what's absent is usually more valuable than analyzing what's present.
A structured review of your product usage data, churn signals, and retention infrastructure — finding what's predictive, sizing the revenue at risk, and delivering a model that flags risk weeks before cancellation.
A 15-minute call is enough to know whether what we do is relevant to where you are. No pitch. Just a conversation about your specific situation.