TL;DR
- A churn analysis in PostHog has 4 steps: (1) Define who churned and when. (2) Compare their behavior to retained users. (3) Identify the behavioral signals that differ. (4) Build a health score dashboard from those signals.
- Step 1: Create a cohort of churned users — users who were active, then stopped using the product for 30+ days (or cancelled their subscription). Create a second cohort of retained users — users who are still active after the same tenure.
- Step 2: Compare the two cohorts across key behaviors: activation event completion, feature adoption depth, login frequency trend, support ticket patterns, and engagement velocity. A 40% drop in weekly logins predicts churn with 78% accuracy.
- Step 3: Build a health score from the top 3–5 signals. Each signal gets a weight based on how strongly it correlates with churn. The health score updates weekly and flags accounts that drop below a threshold.
- Step 4: Build a dashboard with (a) the health score distribution, (b) the at-risk account list, (c) the signal trend over time, and (d) a retrospective validation.
- This entire analysis takes 2–4 hours in PostHog if your event tracking is clean. If your tracking is messy, spend 1–2 weeks cleaning it first — otherwise the analysis is garbage in, garbage out. If you need the instrumentation cleaned up and the health score system built together, that is the scope of a structured PostHog consulting engagement.
Step 1: Define Churned vs. Retained Users
Create the Churned Cohort
In PostHog, click Cohorts → New cohort:
- Filter: Users who were active (completed any event) in the period 60–90 days ago AND have NOT been active in the last 30 days.
- Name: "Churned — 30+ day inactive"
If you have subscription data tracked as events or properties, refine this:
- Additional filter: `subscription_status = 'cancelled' OR subscription_status = 'past_due'`
Or use HogQL for more precision:
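A sketch of such a query, assuming subscription status is stored as a person property named `subscription_status` and that "active" means sending any event (both are assumptions; adjust to your own tracking plan):

```sql
-- Hypothetical HogQL sketch: users whose subscription is cancelled or
-- past due AND who have sent no events in the last 30 days.
-- The property name subscription_status is an assumption.
SELECT person_id
FROM events
WHERE person.properties.subscription_status IN ('cancelled', 'past_due')
GROUP BY person_id
HAVING max(timestamp) < now() - INTERVAL 30 DAY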
Create the Retained Cohort
- Filter: Users who were active in the period 60–90 days ago AND have been active in the last 7 days.
- Name: "Retained — Active in last 7 days"
Sanity Check
- Churned cohort size: Expect 15–30% of total users in a healthy B2B SaaS product. This is larger than monthly logo churn benchmarks (SMB 3–7%, Mid-Market 1–2%, Enterprise <1%) because a 30-day-inactive cohort accumulates inactive users across the whole window, not just one month's cancellations.
- Retained cohort size: Should be 70–85%.
- If churned is above 40%, your product has a retention problem, not a churn prediction problem. No dashboard will fix that — you need to fix activation and value delivery first.
Step 2: Compare Behaviors
For each of these behaviors, create an insight that compares the churned cohort to the retained cohort. The behaviors with the largest gaps are your churn signals.
Behavior 1: Activation Event Completion
Insight: Funnel from signup_completed → activation_completed
- Filter by cohort: Churned vs. Retained
- Expected result: Churned users complete activation at a significantly lower rate.
Real example: In our analysis of a healthcare SaaS client, churned users completed activation at 22% vs. 67% for retained users — a 45 percentage point gap. Fixing that gap through guided onboarding was the single biggest lever in our 23% churn reduction in 90 days.
Behavior 2: Feature Adoption Depth
Insight: Trends insight showing the number of distinct features used per user in the first 30 days.
- Filter by cohort: Churned vs. Retained
- Expected result: Churned users use fewer distinct features in their first 30 days.
Real example: In an HR platform engagement, we found that retained users used an average of 8.3 distinct features in their first 30 days, while churned users used 2.1. The gap wasn't just in volume — it was in which features. Churned users clustered on reporting features but never configured integrations, which is where the product's real value lives.
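A rough HogQL sketch of the feature-depth metric, assuming each feature fires a distinctly named event and that a user's first event approximates their signup date (both are assumptions about your tracking plan):

```sql
-- Hypothetical HogQL sketch: distinct features (event names) used
-- per user in their first 30 days after their first-seen timestamp.
WITH firsts AS (
    SELECT person_id, min(timestamp) AS first_seen
    FROM events
    GROUP BY person_id
)
SELECT
    e.person_id,
    count(DISTINCT e.event) AS distinct_features
FROM events e
JOIN firsts f ON e.person_id = f.person_id
WHERE e.timestamp <= f.first_seen + INTERVAL 30 DAY
GROUP BY e.person_id
```

Filter the result by the Churned and Retained cohorts to reproduce the comparison above.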
Behavior 3: Login Frequency Trend
Insight: Trends insight showing login events per user per week, from signup to churn.
- Filter by cohort: Churned vs. Retained
- Expected result: Churned users show a declining login frequency trend starting 2–4 weeks before churn. Retained users show stable or increasing frequency.
A 40% drop in weekly logins predicts churn with 78% accuracy. This is the single strongest individual signal you'll find.
HogQL query to calculate login trend:
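The following is a sketch, assuming logins are tracked as a `login` event (substitute your own event name):

```sql
-- Hypothetical sketch: compare each user's logins in the most recent
-- week against the week four weeks earlier, and flag drops over 40%.
SELECT
    person_id,
    countIf(timestamp >= now() - INTERVAL 7 DAY) AS this_week,
    countIf(timestamp >= now() - INTERVAL 35 DAY
        AND timestamp <  now() - INTERVAL 28 DAY) AS four_weeks_ago
FROM events
WHERE event = 'login'
  AND timestamp >= now() - INTERVAL 35 DAY
GROUP BY person_id
HAVING four_weeks_ago > 0
   AND this_week < four_weeks_ago * 0.6  -- more than a 40% drop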
This query finds every user whose login frequency dropped more than 40% compared to 4 weeks prior. These are your "quiet quitters" — the churn archetype where engagement decays gradually without any obvious trigger.
Behavior 4: Support Ticket Patterns
Insight: Trends insight showing support_ticket_created events per user per week.
- Filter by cohort: Churned vs. Retained
- Expected result: Two distinct patterns:
- Frustration spike: some churned users create 3+ support tickets in their last 2 weeks before churning. They're trying to make the product work and failing.
- Silent disengagement: other churned users create zero support tickets in their last 30 days. They've given up.
Top churn drivers include poor onboarding (23% of churn), weak relationship management (16%), and bad customer service (14%). The support ticket pattern tells you which driver is in play.
Behavior 5: Engagement Velocity
Insight: Trends insight showing the week-over-week change in total events per user.
- Filter by cohort: Churned vs. Retained
- Expected result: Churned users show a negative engagement velocity (declining activity) starting 2–6 weeks before churn. Retained users show stable or increasing velocity.
Engagement velocity is the difference between "using the product occasionally" and "stopping using the product." The timing of the decline tells you how much warning you'll have.
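Outside of PostHog, the metric itself is simple: engagement velocity is the week-over-week fractional change in a user's event counts. A minimal sketch, with the input shape (a list of weekly totals) as an assumption:

```python
def engagement_velocity(weekly_counts):
    """Week-over-week fractional change in total events per user.

    weekly_counts: list of weekly event totals, oldest first.
    Returns one change per consecutive pair; negative = declining.
    """
    return [
        (curr - prev) / prev if prev else 0.0
        for prev, curr in zip(weekly_counts, weekly_counts[1:])
    ]


def weeks_of_decline(weekly_counts):
    """Consecutive declining weeks at the end of the series --
    a rough measure of how much warning you have before churn."""
    n = 0
    for change in reversed(engagement_velocity(weekly_counts)):
        if change < 0:
            n += 1
        else:
            break
    return n
```

A user logging [10, 12, 9, 6] events over four weeks has two consecutive weeks of decline, which under the 2–6 week pattern above is already worth a flag.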
Step 3: Build the Health Score
Take the top 3–5 behaviors with the largest gaps between churned and retained users. Build a composite health score from those signals.
The Scoring Model
For each user or account, calculate a weekly score by summing the signal components. Each component is worth up to 25 points (four components on a 100-point scale), and a component can go negative when its signal is trending badly. That's intentional: a deeply unhealthy account should score below zero.
Score ranges and actions:
| Score | Status | Action |
|---|---|---|
| 75–100 | Healthy | No action needed |
| 50–74 | Watch | Monitor weekly, flag to CSM |
| 25–49 | At-risk | CSM outreach within 48 hours |
| 0–24 | Critical | Immediate intervention, executive escalation |
| <0 | Terminal | Post-mortem analysis, win-back campaign |
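A minimal sketch of such a scoring model, with four components worth up to 25 points each. The signal names and normalization ranges (e.g. 8 distinct features as healthy vs. 2 as unhealthy, 0.6 as the 40%-login-drop threshold) are illustrative assumptions drawn from the examples above, not a prescribed model:

```python
def component_score(value, healthy, unhealthy, max_points=25):
    """Linearly map a signal onto [-max_points, +max_points].

    `healthy` earns the full +max_points, `unhealthy` earns
    -max_points; values in between are interpolated, extremes clamped.
    """
    ratio = (value - unhealthy) / (healthy - unhealthy)
    scaled = (2 * ratio - 1) * max_points
    return max(-max_points, min(max_points, scaled))


def health_score(signals):
    """Composite weekly score from four behavioral signals.

    Signal names and ranges are illustrative assumptions:
      activation_rate      -- 0..1, share of activation steps completed
      distinct_features    -- features used (8 healthy, 2 unhealthy)
      login_trend          -- this week's logins / logins 4 weeks ago
      engagement_velocity  -- week-over-week change in total events
    """
    return round(
        component_score(signals["activation_rate"], healthy=1.0, unhealthy=0.0)
        + component_score(signals["distinct_features"], healthy=8, unhealthy=2)
        + component_score(signals["login_trend"], healthy=1.0, unhealthy=0.6)
        + component_score(signals["engagement_velocity"], healthy=0.0, unhealthy=-0.5)
    )


def status(score):
    """Map a score onto the action buckets from the table above."""
    if score >= 75:
        return "Healthy"
    if score >= 50:
        return "Watch"
    if score >= 25:
        return "At-risk"
    if score >= 0:
        return "Critical"
    return "Terminal"
```

An account with full activation, 8 features, stable logins, and flat velocity scores 100; an account failing on all four signals scores -100 and lands in the Terminal bucket.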
Building This in PostHog with HogQL
- Create a SQL insight (HogQL) that calculates each component per user or group.
- Sum the components into a `health_score` property.
- Save the output as a dashboard table or export to your CS tool.
- Set up a scheduled query to update weekly.
The health score should connect to your churn archetypes. When a user drops below 50, your system should identify which archetype they match:
| Archetype | Signal Pattern | Intervention |
|---|---|---|
| The Unactivated | Low activation, low features | Guided walkthrough + 30-min onboarding call |
| The Narrowing | Declining feature breadth | Value review + feature discovery |
| The Frustrated | Support ticket spike | Escalation to senior CSM |
| The Quiet Quitter | Engagement velocity decline | Re-engagement campaign |
| The Budget Squeeze | Seat reduction + downgrade signals | ROI analysis |
| The Competitor Switch | Billing signal + engagement drop | Competitive briefing |
This mapping turns a generic "at-risk" alert into a specific playbook. The CSM doesn't need to diagnose — they just execute the matched intervention.
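The mapping can be sketched as a small rules function. Every field name and threshold below is a hypothetical placeholder (the source defines the archetypes, not the exact signals), and the check order encodes which archetype wins when several patterns apply:

```python
def match_archetype(s):
    """Map an at-risk account's signal pattern (score < 50) to one of
    the archetypes in the table above. All field names and thresholds
    are illustrative assumptions.
    """
    if s["activation_rate"] < 0.5 and s["distinct_features"] <= 2:
        return "The Unactivated"
    if s["tickets_last_2_weeks"] >= 3:        # frustration spike
        return "The Frustrated"
    if s["seat_change"] < 0 or s["downgraded"]:
        return "The Budget Squeeze"
    if s["billing_signal"] and s["engagement_velocity"] < 0:
        return "The Competitor Switch"
    if s["feature_breadth_trend"] < 0:        # using fewer features than before
        return "The Narrowing"
    return "The Quiet Quitter"                # gradual, trigger-free decay
```

The matched string then keys directly into the intervention playbook, so the alert the CSM receives already names the play to run.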
Step 4: Build the Churn Diagnosis Dashboard
Your dashboard should have 4 panels:
Panel 1: Health Score Distribution
A histogram showing the distribution of health scores across all active users. How many are in each bucket (Healthy, Watch, At-risk, Critical)?
What to look for: If the At-risk and Critical buckets are growing week over week, your product is losing product-market fit for at least one segment. This is a leading indicator — it tells you churn is coming before it happens.
Panel 2: At-Risk Account List
A table of accounts with health scores below 50, sorted by score (lowest first). Columns: account name, health score, top signal driving the score, CSM owner, churn archetype.
What to look for: The account list should be actionable. If a CSM looks at it and doesn't immediately know what to do, the score is a vanity metric. Every score should connect to a specific intervention playbook.
Panel 3: Signal Trend Over Time
A line chart showing how many accounts are in each health bucket each week. Is the "At-risk" bucket growing or shrinking?
What to look for: A growing At-risk bucket that isn't matched by growing intervention activity means your system is detecting but not acting. Detection without intervention is dashboard decoration.
Panel 4: Churned User Retrospective
For users who churned last month, what was their health score 30 days before churn? This validates whether the health score actually predicted churn.
What to look for: If 70%+ of churned users had a health score below 50 at least 30 days before churning, your score is predictive. If fewer than 50% were flagged, your signals are wrong — go back to Step 2 and find better differentiators.
The difference between a predictive health score (70%+ of churned users flagged 30+ days early) and a vanity score (<50% flagged) is the difference between a churn prevention system and dashboard decoration. The retrospective validation is the only test that matters.
Validation: Does Your Health Score Actually Work?
The most common mistake in health score design is building a score and never checking whether it predicts anything. Here's the validation workflow:
- Pick a historical period: Users who churned 3–6 months ago.
- Calculate their health score 30 days before churn.
- Measure: What percentage scored below 50? (Target: 70%+)
- Measure: What percentage of non-churned users scored below 50 in the same period? (Target: <20% — this is your false positive rate.)
- If false positive rate >30%, your CS team will stop trusting the score. Tighten the thresholds.
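The workflow above amounts to computing recall on churned users and a false positive rate on retained ones. A sketch, assuming you can export per-user scores from 30 days before the period (the data shape here is an assumption):

```python
def validate_health_score(scores, churned, threshold=50):
    """scores: {user_id: health score 30 days before the period end}
    churned: set of user_ids that churned during that period.
    Returns (recall on churned users, false positive rate on retained).
    """
    flagged = {uid for uid, score in scores.items() if score < threshold}
    churned_users = churned & scores.keys()
    retained_users = scores.keys() - churned
    recall = len(flagged & churned_users) / len(churned_users)
    false_positive_rate = len(flagged & retained_users) / len(retained_users)
    return recall, false_positive_rate
```

Against the targets above: recall below 0.7 means the signals need rework (back to Step 2); a false positive rate above 0.3 means the threshold needs tightening before the CS team stops trusting the score.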
A health score that can't trigger an intervention within 24 hours is a vanity score. The score exists to drive action, not to look pretty on a dashboard.
FAQ
How often should I run the churn analysis?
Weekly for the health score update. Monthly for the full cohort comparison (Step 2). The cohort comparison tells you whether your churn signals have changed over time — new features, new segments, or new pricing can all shift the behavioral patterns that predict churn.
What if my churned and retained cohorts look the same?
Your event tracking is too generic. "Page viewed" and "button clicked" don't differentiate between retained and churned users. You need business-specific events: `report_generated`, `integration_configured`, `team_member_invited`, `value_metric_reached`. These events capture actual product value, not just activity. 10–20 events tied to time-to-value milestones are enough to build a meaningful model.
Can I automate the health score update?
Yes. Set up a scheduled HogQL query that runs weekly and updates a `health_score` property on each person or group. PostHog's scheduled insights feature handles this. Alternatively, pipe the output to your CS tool (ChurnZero, Gainsight, Vitally) via their API.
Do I need machine learning for this?
No. The composite health score described above is a rules-based model that captures 80% of what a machine learning model would produce. ML adds incremental accuracy but significant complexity. Start rules-based. Add ML when you have 10K+ customers and a dedicated data scientist. Academic research shows that Random Forest models achieve AUC scores of 0.90 on clean B2B SaaS datasets — but those scores require clean data, feature engineering expertise, and quarterly retraining. The rules-based score gets you 80% of the way there with zero ML overhead.
How does this compare to Mixpanel or Amplitude?
PostHog's advantage is HogQL — you can write custom SQL queries that Mixpanel's UI can't handle. Mixpanel's cohort analysis requires clicking through their interface; PostHog lets you write the exact query you need. Amplitude offers behavioral cohorts but doesn't give you raw SQL access. For churn analysis specifically, the ability to calculate custom metrics like engagement velocity (week-over-week change) is easier in PostHog's SQL layer than in either competitor's UI.
Sources
- BuildMVPFast — Leading Indicators of Churn — 40% login drop = 78% churn accuracy.
- PipelineRoad — SaaS Churn Guide — Segment-specific churn benchmarks (SMB 3–7%, Mid-Market 1–2%, Enterprise <1%).
- Velaris — Churn Prediction Models — Top churn drivers: onboarding 23%, relationship 16%, service 14%.
- ProductQuant — PostHog Cohort Analysis Guide — How to build cohorts that actually drive decisions.
Get Your Churn Diagnosis Built in 2 Weeks
Behavioral health scoring, at-risk account list, and intervention playbook — deployed inside your existing PostHog setup. No ML required.