TL;DR

  • Most analytics implementations produce decoration, not decisions. Dashboards get built, checked briefly, and ignored — because the events tracked answer the wrong questions.
  • A JTBD-focused event taxonomy changes the foundation: every tracked event is designed to answer a job question, not record a click. 114 events specified, prioritised across P0/P1/P2, each connected to a downstream decision.
  • Connecting PostHog to Stripe and Zendesk in one view surfaces what no single source can show alone — product usage alongside billing events and support ticket frequency, revealing at-risk customers weeks before cancellation.
  • Once Stripe data was integrated, the data showed 2.7% monthly churn, 17.3% trial conversion, and 36% of cancellations happening in the first 90 days — a specific, measurable problem that instrumentation alone would not have found.
  • In-app interventions triggered by specific behavioural conditions outperform generic email sequences because they fire at the moment the at-risk behaviour appears — not on a calendar schedule.

Your team has an analytics tool. You built dashboards. You reviewed them in the weekly product meeting for a few weeks, then — quietly — stopped checking.

This is not a discipline problem. It is an architecture problem.

When analytics is built as a reporting system, it competes for attention against every other priority in the building. When it is built as an action system, it doesn't need to compete — because it triggers actions automatically, fires in the right context, and surfaces signals only when they require a response.

The difference between the two is not the tool. It is the design philosophy behind the instrumentation.

"Most analytics implementations end at a dashboard nobody opens. The better ones end at a saved customer."

In a recent engagement with a HIPAA-compliant healthcare SaaS platform, we built the full chain: a JTBD-focused event taxonomy, PostHog dashboards connected to Stripe and Zendesk data, behavioural churn signal definitions, and an in-app intervention system that fires when users demonstrate at-risk patterns. The result was 13 production dashboards and 118+ charts — all decision-ready rather than decorative, and all connected to a downstream action.

This article explains the architecture of that system, how each layer connects to the next, and what it requires to work.

Why Most Analytics Implementations Fail

Analytics implementations typically fail for one of three reasons — and often for all three simultaneously.

  1. The events tracked answer questions nobody is asking. Teams copy event names from other products, add analytics because investors expect it, or implement what is technically convenient. Nobody mapped which business question each event is supposed to answer. The data fills up. The insights don't arrive.
  2. The dashboards are siloed from the data that would make them useful. Product behaviour without revenue context is interesting. Product behaviour alongside subscription events and support ticket frequency is actionable. Most implementations stop before making those connections.
  3. There is no downstream trigger. Even a well-designed dashboard requires a human to notice a signal, decide it is significant, and take action. That chain breaks constantly. Systems that trigger automatically — at the moment the signal fires, in the right channel — don't depend on the chain holding.

The fix is not better dashboards. It is rethinking what analytics is for.

What Is the Analytics-to-Action Pipeline?

The analytics-to-action pipeline is an instrumentation and intervention architecture built around one constraint: every tracked event must have a downstream use.

If a tracked event cannot be connected to a decision, a trigger condition, or a metric someone is accountable for — it should not be tracked. This constraint forces clarity at the design stage rather than producing hundreds of events that nobody knows how to interpret.

The full pipeline has six layers. Each one depends on the quality of the one before it.

The 6-Layer Pipeline
  1. JTBD-Focused Event Taxonomy: Events organised around the jobs customers are trying to accomplish, not UI interactions. Every event answers: "Does this tell us if users are getting value?"
  2. Decision-Ready Dashboards: Each dashboard built around a specific business question, not a feature area. Checked when the question changes, not out of Monday-morning habit.
  3. Revenue + Support Data Integration: Stripe subscription events and Zendesk support data surfaced alongside product behaviour — the combination that reveals at-risk customers before cancellation happens.
  4. Behavioural Churn Signal Definitions: Specific event patterns that correlate with cancellation risk, with enough lead time to intervene. Not "low engagement" — precise, measurable conditions.
  5. Intervention Trigger Matrix: A documented map of which signal fires which intervention, in which channel, at what timing, with which message variant. Built on B.J. Fogg's Behavior Model (B = Motivation × Ability × Prompt).
  6. In-App + Email Intervention Deployment: Modals, banners, and email sequences that fire automatically when a user enters a trigger condition — at the moment the behavioural signal appears, not on a schedule.

Step 1 — Design the Event Taxonomy Around Jobs, Not Clicks

The most consequential decision in any analytics implementation is what to track. Most teams default to tracking everything that fires — every button click, every page view, every API call — and try to extract signal from the noise afterward.

A jobs-to-be-done event taxonomy works differently. It starts with "what is the customer trying to accomplish?" and works backward to the events that tell you whether they accomplished it.

In practice, this means organising events by job rather than by UI component. Instead of form_builder_save_button_clicked, the event is form_published with properties that answer: how many fields, how long did it take, did they use a template or build from scratch? The event name reflects the outcome. The properties enable the segmentation.

In the engagement described here, we specified 114 events across three priority levels:

  • P0 — 24 Core Activation events: the events that tell you whether a new user has reached the first value moment. These are tracked first because they determine whether anyone converts from trial to paid.
  • P1 — 38 Feature Adoption events: the events that tell you which jobs users are completing and how often. These determine retention forecasting and roadmap priority.
  • P2 — 15 Reporting and Settings events: secondary feature usage patterns. Important for expansion and tier fit, but not a blocker for activation tracking.

For each event, properties were specified alongside the event name — not as an afterthought. An event without properties is a signal without context: you know something happened, but you cannot segment by plan tier, user role, or organisation size. Properties are what make events analytically useful.
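One way to enforce that discipline is to treat the tracking plan itself as code and validate every event against it before it ships. A minimal sketch in Python (the event name, property keys, and job question here are illustrative, not the engagement's actual plan):

```python
# A tracking plan entry pairs each event with the job question it answers
# and the properties it must carry. All names below are illustrative.
TRACKING_PLAN = {
    "form_published": {
        "priority": "P0",
        "job_question": "Did the user reach the first value moment?",
        "required_properties": ["field_count", "build_seconds", "used_template"],
    },
}

def validate_event(name, properties):
    """Return a list of problems; an empty list means the event is plan-compliant."""
    spec = TRACKING_PLAN.get(name)
    if spec is None:
        return [f"'{name}' is not in the tracking plan"]
    missing = [p for p in spec["required_properties"] if p not in properties]
    return [f"missing property '{p}'" for p in missing]

problems = validate_event("form_published", {"field_count": 12, "used_template": True})
print(problems)  # flags that 'build_seconds' was not captured
```

A check like this can run in CI or in the analytics QA step, so an event without its properties never reaches production silently.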

Step 2 — Build Dashboards That Answer Real Questions

A dashboard is a decision support tool. If it does not support a specific recurring decision, it is decoration.

Before building any dashboard, the design question is: what decision will someone make differently because of this dashboard, and how often? If the answer is vague ("to understand activation better"), the dashboard will be checked once and forgotten. If the answer is specific ("to decide which onboarding cohort needs a success intervention this week"), it will be checked every time that decision is made.

The 13 production dashboards built in this engagement were each anchored to a specific decision context: activation funnel (where does the first-90-days drop-off happen?), feature adoption by persona (which jobs are power users completing that average users aren't?), revenue health (what is current MRR, churn rate, and failed payment trend?), and churn leading indicators (which accounts show the usage patterns that precede cancellation?).

Step 3 — Connect Revenue and Support Data to Product Behaviour

Product usage data alone tells you what users are doing. It does not tell you whether they are paying, at risk of cancelling, or generating support tickets that predict churn.

Connecting Stripe subscription data to PostHog's Data Warehouse changes the questions the platform can answer. Instead of "how many users completed this workflow," you can ask "how does workflow completion rate differ between users who cancelled within 90 days and users still active 12 months later?"

In the Stripe analytics built for this engagement — 33 live charts across 6 dashboards — the integrated data revealed a specific structural finding: 2.7% monthly churn (~28% annualised), with 36% of those cancellations happening in the first 90 days. At that churn rate, set against the platform's acquisition volume, roughly 86 out of every 100 new subscribers were simply replacing someone who had left.

2.7%: Monthly churn rate identified once Stripe data was connected to product analytics. At this rate, ~28% of subscribers cancel annually. The Stripe integration also surfaced that 36% of cancellations happen in the first 90 days — pointing directly to the onboarding funnel as the highest-leverage intervention target.
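The annualisation behind that figure is simple survival arithmetic:

```python
# Monthly churn compounds: the share of subscribers surviving 12 months
# is (1 - monthly_churn) ** 12, so annual churn is one minus that.
monthly_churn = 0.027
annual_churn = 1 - (1 - monthly_churn) ** 12
print(f"{annual_churn:.1%}")  # 28.0%
```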

The Zendesk layer adds a third dimension: support ticket frequency and type as a friction signal. Users who open multiple tickets about the same feature area in their first 30 days are encountering a problem the product hasn't resolved. That friction, if unaddressed, precedes cancellation — but it is invisible unless support data is in the same analytical view as product behaviour.

Step 4 — Define Behavioural Churn Signals

Churn prevention fails when it is reactive — when the intervention happens after cancellation intent is clear, or after the subscription is already cancelled. To be useful, churn signals need lead time.

Behavioural churn signals are specific event patterns that precede cancellation with enough runway to allow intervention. They are not generic "low engagement" flags — they are precise conditions defined from the data: visited the cancellation page, deleted 3+ items in a session, used only 1 of the available features for 7 days with no expansion, logged in fewer than 3 times in a 14-day window.
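Conditions this precise translate directly into code. A sketch in Python (the field names and thresholds are illustrative, not the engagement's production definitions):

```python
from dataclasses import dataclass

@dataclass
class UsageWindow:
    # Illustrative per-account rollup; field names and thresholds are assumptions.
    logins_14d: int = 0
    deletes_in_session: int = 0
    active_features_7d: int = 0
    visited_cancel_page: bool = False

def churn_signals(u):
    """Return the precise signal names that fired (no fuzzy 'low engagement' score)."""
    fired = set()
    if u.visited_cancel_page:
        fired.add("cancel_page_visit")   # intent: visited the cancellation page
    if u.deletes_in_session >= 3:
        fired.add("bulk_deletion")       # 3+ delete actions in a session
    if u.active_features_7d == 1:
        fired.add("single_feature_rut")  # only 1 feature used for 7 days
    if u.logins_14d < 3:
        fired.add("login_dropoff")       # fewer than 3 logins in 14 days
    return fired

print(sorted(churn_signals(UsageWindow(logins_14d=1, active_features_7d=1))))
# ['login_dropoff', 'single_feature_rut']
```

Because each signal is a named boolean condition rather than a score, the downstream trigger matrix can map each one to a distinct intervention.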

The distinction between signal types matters for intervention design. B.J. Fogg's Behavior Model provides the framework: a user with low ability (stuck, confused, not reaching value) needs a different intervention than a user with low motivation (getting value but not feeling it is worth the cost). A single generic intervention addresses neither well.

Six behavioural states were defined in the trigger architecture built for this engagement:

| State | Definition | Fogg State | Intervention Type |
| --- | --- | --- | --- |
| Power User | 2+ features active, daily logins, high volume | High Motivation, High Ability | Spark — expand to next feature |
| Active User | 1 feature, weekly logins, moderate volume | Moderate Motivation, Moderate Ability | Facilitator — activate second feature |
| Stalled User | Set up but <5 actions in 14 days | Moderate Motivation, Low Ability | Facilitator — reduce friction |
| Dormant User | 14+ days no login, no key actions | Low Motivation, Low Ability | Signal — reactivate value awareness |
| Trial User | Days 0–14, exploring the platform | High Motivation, Low Ability | Facilitator — guide to first value moment |
| Churning User | Visited cancel page, deleted items, or downgrade signals present | Low Motivation, High Ability | Signal — loss aversion, personalised offer |
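The six states translate directly into a classification function. A sketch in Python (all thresholds are illustrative; the ordering encodes the design decision that intent signals outrank activity signals):

```python
def behavioural_state(features_active, logins_per_week, days_since_login,
                      trial_day=None, cancel_intent=False):
    # Ordering is the design decision: intent signals outrank activity signals.
    # Thresholds are illustrative, not the engagement's production values.
    if cancel_intent:
        return "Churning User"   # Low Motivation, High Ability -> Signal
    if trial_day is not None and trial_day <= 14:
        return "Trial User"      # High Motivation, Low Ability -> Facilitator
    if days_since_login >= 14:
        return "Dormant User"    # Low Motivation, Low Ability -> Signal
    if features_active >= 2 and logins_per_week >= 5:
        return "Power User"      # High Motivation, High Ability -> Spark
    if features_active >= 1 and logins_per_week >= 1:
        return "Active User"     # Moderate Motivation/Ability -> Facilitator
    return "Stalled User"        # Moderate Motivation, Low Ability -> Facilitator

print(behavioural_state(2, 7, days_since_login=0))  # Power User
```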

Step 5 — Build the Intervention Trigger Matrix

Once churn signals are defined, the trigger matrix documents which signal fires which intervention, in which channel, at what timing, and with what message. This is the connective tissue between the analytics layer and the customer-facing intervention layer.

Without a trigger matrix, interventions are improvised — a CS manager notices something in the dashboard and decides to send an email, inconsistently, depending on their availability. With a trigger matrix, the system fires the right message automatically when the behavioural pattern appears.

Examples from the trigger matrix built in this engagement:

  • Trigger: visited cancellation page → In-app modal fires within 5 minutes → Personalised retention offer based on the cancellation reason selected → Target: 50% retention of users who engage with the modal
  • Trigger: 7 days with only 1 feature active → Email at Day 7, 10 AM → Second feature activation with usage-based proof point → Target: 35% activate second feature
  • Trigger: 3+ delete actions in a session → In-app modal + SMS → Concerned outreach with loss aversion framing → Target: 25% stop the deletion behaviour

Twenty trigger scenarios were designed for this engagement. The top 12 were built and deployed in the first implementation phase.
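A trigger matrix earns its keep when it lives as data the intervention system reads, not as a document a CS manager remembers. A sketch mirroring the examples above (the channel, timing, and target values are illustrative, not production configuration):

```python
# The trigger matrix as data: signal name -> channel, timing, message, target.
TRIGGER_MATRIX = {
    "cancel_page_visit": {
        "channel": "in_app_modal",
        "timing": "within 5 minutes",
        "message": "personalised retention offer keyed to cancellation reason",
        "target": "50% retention of users who engage with the modal",
    },
    "single_feature_rut": {
        "channel": "email",
        "timing": "day 7, 10:00",
        "message": "second-feature activation with usage-based proof point",
        "target": "35% activate a second feature",
    },
    "bulk_deletion": {
        "channel": "in_app_modal + sms",
        "timing": "immediate",
        "message": "concerned outreach with loss-aversion framing",
        "target": "25% stop the deletion behaviour",
    },
}

def interventions_for(fired_signals):
    """Resolve fired signals to documented interventions; unmapped signals are skipped."""
    return {s: TRIGGER_MATRIX[s] for s in fired_signals if s in TRIGGER_MATRIX}
```

Keeping the matrix as data means every deployed intervention is traceable back to a documented signal, and coverage gaps (signals with no intervention) are easy to audit.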

Step 6 — Design and Deploy In-App Interventions

Email is the default intervention channel because it is easy to deploy. It is also the weakest channel for users actively demonstrating churn behaviour — because by the time the email arrives, the context has passed.

In-app interventions fire in context: when the user is on the cancellation page, when they have just deleted several items, when they have crossed the 80% usage threshold. The intervention appears at the moment of highest relevance.

The key design principle is specificity. A modal that says "Are you sure you want to cancel?" is generic. A modal that shows the user their own usage data — "You sent 340 forms last month. Downgrading to the starter plan would save you only $3/month but add $47/month in overage charges" — uses the user's actual behaviour as the argument.

Effective intervention design also branches by cancellation reason — different retention offers for users who cancel due to cost versus users who cancel because they found a competitor. A single script that doesn't branch addresses neither well.

"The strongest churn intervention uses the user's own behaviour data as the argument. Not 'here's why our product is good' — but 'here's what you've actually built with it, and here's what you'd lose.' The data is already there. The intervention just needs to surface it at the right moment."

Jake McMahon, ProductQuant

What This Looked Like in Practice

The engagement described here was with a HIPAA-compliant healthcare SaaS platform — a form-builder and patient communication tool used by multi-location medical practices across the US. The starting point was fragmented instrumentation: events firing from a legacy tool, no systematic taxonomy, no revenue data in the analytics view, no defined churn signals.

Over four months, the full pipeline was built:

  • Platform selection: PostHog Cloud selected as the analytics platform. Cost: ~$2,000–4,000/yr at standard pricing, versus $20,000–50,000+/yr for the enterprise contracts required at Mixpanel or Amplitude to include both HIPAA Business Associate Agreement coverage and group analytics. Ongoing savings: $16,000–46,000/yr.
  • Event taxonomy: 114 events specified across P0/P1/P2 priority levels, with HIPAA-compliant architecture (staff IDs, not patient IDs; no protected health information in any event property). A 17-page technical implementation guide delivered to the development team.
  • Dashboard delivery: 13 production dashboards, 118+ charts — each built around a specific decision context rather than a feature area.
  • Stripe revenue intelligence: 33 live charts across 6 dashboards connecting subscription data to product behaviour. Key findings: 2.7% monthly churn, 17.3% trial conversion, $131 average revenue per user, 63–74% 12-month cohort retention depending on cohort.
  • Churn signal architecture: 6 behavioural states defined, 20 trigger scenarios designed, 12 of top 20 implemented in the first phase.
  • Intervention system: In-app modals and email sequences mapped to specific trigger conditions, using the user's own usage data as the intervention content.

The Stripe analysis surfaced a finding that reframed the entire product priority conversation: 36% of cancellations were happening in the first 90 days, and the trial conversion rate was 17.3%. More than 4 in 5 trial users were leaving. The most expensive customer acquisition problem in the business was not top-of-funnel — it was the first three months after sign-up.

That finding was only possible because product behaviour data and billing data were in the same analytical view. Neither source alone would have surfaced it.

What This Approach Requires

The analytics-to-action pipeline is not a weekend project. It has real dependencies — on data access, development capacity, and the organisational willingness to act on what the signals surface.

On the data side:

  • Product event data via a platform that supports group analytics and data warehouse integration (PostHog, Amplitude, or equivalent)
  • Billing data (Stripe or equivalent) — required for churn signal definition and cohort retention analysis
  • Support data (Zendesk or equivalent) — optional but adds friction signal identification

On the instrumentation side:

  • A developer who can implement the event tracking plan — and enough implementation time to do it properly rather than reverting to the easiest events to fire
  • Event properties specified alongside events — because events without properties cannot be segmented
  • A QA process to validate that events are firing correctly and properties are being captured
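That QA step can be partially automated by checking a sample of captured events against the tracking plan. A sketch (the event and property names are illustrative):

```python
# QA sketch: report plan events that never fired, and captured events that
# arrived without their required properties. All names are illustrative.
PLAN = {
    "form_published": {"field_count", "used_template"},
    "first_submission_received": {"form_id"},
}

captured = [
    {"event": "form_published", "properties": {"field_count": 8, "used_template": False}},
    {"event": "form_published", "properties": {"field_count": 3}},  # missing a property
]

never_fired = set(PLAN) - {c["event"] for c in captured}
bad_rows = [c for c in captured
            if set(PLAN.get(c["event"], set())) - set(c["properties"])]
print(never_fired)   # {'first_submission_received'}
print(len(bad_rows)) # 1
```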

On the intervention side:

  • An in-app messaging tool that can target by user property and event behaviour (Chameleon, Intercom, or equivalent)
  • Someone responsible for monitoring signal accuracy and adjusting logic when false-positive rates change

This approach makes the most sense for SaaS products with a meaningful subscriber base where a measurable reduction in monthly churn translates into significant retained revenue. At 2.7% monthly churn, reducing churn by 1 percentage point per month is often worth more than any individual feature launch.

Analytics Audit

Is your analytics setup producing decisions or decoration?

The Analytics Audit maps your existing instrumentation against the questions your team actually needs to answer — and identifies the highest-leverage gaps before they become expensive to fix.

Frequently Asked Questions

What is a JTBD-focused event taxonomy?

A jobs-to-be-done (JTBD) event taxonomy organises tracked events around what users are trying to accomplish — their jobs — rather than which UI elements they interact with. Instead of tracking button_clicked or page_viewed, JTBD-focused events track outcomes: form_published, first_submission_received, template_deployed. Each event is designed to answer "did the user make progress on the job they hired the product to do?" This produces cleaner data because every event has a clear business meaning — and it makes retention analysis tractable because you can directly compare event completion rates between retained and churned cohorts. The companion article The Compound Research Stack covers how JTBD research informs which jobs to build the taxonomy around.

How do you connect Stripe data to PostHog?

PostHog's Data Warehouse feature includes a native Stripe connector that syncs subscription, customer, invoice, and charge data directly into the PostHog environment. Once synced, this data is queryable alongside product event data using HogQL (PostHog's SQL-compatible query language). This enables queries that cross the product/billing boundary — filtering activation funnel analysis to users who subsequently cancelled, or comparing feature adoption rates between annual and monthly subscribers. The connector requires a PostHog Cloud account and Stripe API credentials; setup takes a few hours. Note: one implementation issue encountered in the engagement described here was charge deduplication — PostHog's Stripe connector syncs each charge row twice, so revenue queries require a DISTINCT or deduplication filter to produce accurate totals.
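The deduplication fix is mechanical once the duplicate rows are understood: aggregate over distinct charge IDs, never over raw rows. A sketch (the row shape is illustrative):

```python
# If each Stripe charge row is synced twice, summing raw rows double-counts
# revenue. Deduplicate on the charge id before aggregating.
charges = [
    {"id": "ch_1", "amount": 4900},
    {"id": "ch_1", "amount": 4900},  # duplicate sync of the same charge
    {"id": "ch_2", "amount": 13100},
]

naive_total = sum(c["amount"] for c in charges)         # double-counted
deduped = {c["id"]: c for c in charges}.values()        # one row per charge id
true_total = sum(c["amount"] for c in deduped)          # correct
print(naive_total, true_total)  # 22900 18000
```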

How early can behavioural churn signals fire?

Lead time depends on how precisely the signal is defined and how early in the customer lifecycle the warning behaviour first appears. Signals built around session frequency patterns (fewer than 3 logins in 14 days, no key actions in a 7-day window) can fire weeks before a user reaches the cancellation page. Signals built around specific intent actions (visited cancel page, deleted multiple items) fire with less lead time but higher precision. The most effective churn programmes combine both: early-warning signals for intervention while there is time to address friction, and late-stage signals for higher-urgency retention offers. In the engagement described above, 36% of cancellations happened in the first 90 days — pointing to onboarding signals as the highest-leverage target.

What does an in-app intervention look like?

An in-app intervention is a modal, banner, or tooltip that fires when a user enters a trigger condition — typically implemented via a tool like Chameleon, which can target by PostHog user properties and event behaviour. Effective interventions are personalised to the user's specific situation rather than generic. Instead of "Don't cancel," a modal that shows the user their own usage data ("You've sent 340 forms this month — here's what you'd lose at the lower tier") addresses the specific decision in progress. The intervention branches based on the cancellation reason selected, so each path presents a relevant retention offer. The combination of contextual timing (firing at the moment of at-risk behaviour) and personalised content (using the user's own data) is what differentiates in-app intervention from standalone email sequences.

Does this pipeline work without PostHog specifically?

Yes. The architecture works with any analytics platform that supports event tracking with properties, group analytics for organisation-level analysis, and data warehouse integration or API access for revenue data. PostHog was selected in this engagement because it is the only platform that includes HIPAA BAA coverage and group analytics at standard pricing (~$2,000–4,000/yr) — whereas Mixpanel and Amplitude require enterprise contracts ($20,000–50,000+/yr) for equivalent compliance and feature coverage. For teams outside regulated industries where HIPAA compliance is not required, Amplitude, Mixpanel, or June.so are viable alternatives depending on team size and event volume.

Sources

  • Fogg, B.J. (2009). A Behavior Model for Persuasive Design. Proceedings of the 4th International Conference on Persuasive Technology. bjfogg.com
  • Ulwick, A.W. (2016). Jobs to be Done: Theory to Practice. IDEA Bite Press.
  • PostHog. HIPAA Compliance Documentation. posthog.com/docs/privacy/hipaa-compliance
  • PostHog. Data Warehouse — Stripe connector setup. posthog.com/docs/data-warehouse
  • Chameleon. In-app experience platform for user onboarding and engagement. chameleon.io
  • All engagement statistics (churn rates, dashboard counts, cost figures, event counts, cohort retention percentages, trial conversion rates) are from ProductQuant's work with a HIPAA-compliant healthcare SaaS platform. Platform and client names are anonymised.
Jake McMahon

Jake is a product analytics and retention specialist with 8+ years building growth systems for B2B SaaS. He designs JTBD-focused event taxonomies, builds PostHog implementations from scratch, and develops behavioural churn intervention architectures across healthcare, HR, and fitness verticals. Recent engagement work includes 114 events specified and prioritised, 13 production dashboards delivered, and Stripe revenue intelligence connected to product behaviour to surface leading churn indicators. He leads analytics and research engagements at ProductQuant.

See the Analytics Audit →
