TL;DR
- A metric hierarchy has three layers: North Star metric, leading indicators (3–5), and diagnostic metrics (used ad hoc to explain movement).
- Revenue is not a valid North Star for most B2B SaaS products — it measures extraction, not value delivered. A North Star must be a leading indicator of retention.
- Three dashboards serve three audiences: Company health (weekly, execs), Product health (daily, PMs), and Funnel diagnostic (ad hoc, investigation).
- The most common failures: too many metrics at the top layer, no metric owner, and conflicting numbers from different tools reporting the same event differently.
Having metrics is not the same as having a hierarchy
The standard response to "we need better analytics" is more dashboards. A dashboard for marketing, a dashboard for product, a dashboard for customer success — each built independently, each reporting different definitions of the same events, each measuring something that matters to a function without connecting to what matters to the business.
The result is a company where everyone has data and nobody agrees on what the data means. The growth team celebrates DAU growth while the finance team flags flat revenue. The product team points to feature adoption while customer success flags rising churn. These are not contradictions — they are symptoms of a metrics architecture that was built sideways rather than vertically.
A metric hierarchy solves this by establishing a causal structure: one metric at the top that represents the core value exchange, a small set of leading indicators that predict it, and a deeper layer of diagnostic metrics that explain why the leading indicators moved. Every number in the business lives at one of these three levels, serves one of these three purposes, and connects upward to the layer above it.
North Star → leading indicators → diagnostic metrics. The discipline of a metric hierarchy is not in defining what to measure — it is in deciding what sits at each level and refusing to promote a diagnostic metric to executive visibility.
Layer 1: The North Star metric
The North Star metric is a single number that represents the volume of value your product delivers to customers. Not the revenue you extracted. Not the number of accounts you signed. The value the customer received — measured in units that are meaningful to the customer's problem, not to your P&L.
A North Star metric must satisfy three conditions. First, it must be a leading indicator of revenue: when it goes up, paid retention and expansion should follow. Second, it must be actionable — it must be something your product team can influence through decisions about features, onboarding, and product design. Third, it must be a number the whole company can understand without a data dictionary.
Why revenue alone is wrong
Revenue as a North Star is a measurement of past transactions, not future health. It answers the question "how much did customers pay us?" rather than "how much value did we deliver?" A product that is churning heavily but acquiring fast can show growing revenue for several quarters before the lagging signal catches up. By the time revenue reflects the churn problem, the churn problem is structural.
A better North Star for a B2B SaaS product is typically a measure of core workflow completion per active account: reports generated, workflows automated, integrations active, seats actively using the core feature set. These are leading indicators of account health and therefore leading indicators of renewal.
Choosing your North Star
Start with the core job the product does. What is the moment when a customer has unambiguously received value? Not signed up. Not logged in. The moment they completed the thing they came to do. That completion event — or a volume metric built on it — is typically the right starting point for a North Star.
| Product type | Candidate North Star | Why it works |
|---|---|---|
| Analytics / BI | Reports published per active account per week | Correlates with team-level adoption; passive logins don't inflate it |
| Workflow automation | Automations run per active account per month | Measures value delivered, not product opened |
| Collaboration / project management | Active projects with activity in last 14 days | Indicates ongoing organisational reliance |
| Data pipeline / integration | Successful syncs per connected integration per week | Infrastructure products: reliability is the value |
| Sales / CRM | Deals updated or advanced per active user per week | Reflects whether reps are working in the tool, not around it |
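To make the table concrete, here is a minimal sketch of computing a North Star like "reports published per active account per week" from raw events. The event log schema and event names (`report_published`, `login`) are illustrative assumptions, not a prescribed tracking plan:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event log: (account_id, event_name, day). In practice these
# rows come from your analytics store; this schema is an assumption.
events = [
    ("acct-1", "report_published", date(2024, 3, 4)),
    ("acct-1", "report_published", date(2024, 3, 5)),
    ("acct-1", "login",            date(2024, 3, 6)),
    ("acct-2", "login",            date(2024, 3, 4)),  # active, but no value event
    ("acct-3", "report_published", date(2024, 3, 7)),
]

def north_star_for_week(events, week):
    """Reports published per active account for one ISO (year, week).

    'Active' means the account fired any event that week. Keeping such
    accounts in the denominator means passive logins drag the ratio down
    instead of being ignored -- the property the table calls out.
    """
    active = set()
    published = defaultdict(int)
    for account, name, day in events:
        if day.isocalendar()[:2] != week:
            continue
        active.add(account)
        if name == "report_published":
            published[account] += 1
    if not active:
        return 0.0
    return sum(published.values()) / len(active)

print(north_star_for_week(events, (2024, 10)))  # 3 reports / 3 active accounts = 1.0
```

Note that `acct-2` lowers the metric despite logging in: a login is activity, not value delivered.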
Layer 2: Leading indicators
Leading indicators are the 3–5 metrics that predict movement in the North Star before it moves. They are the levers your product and growth teams pull. When a leading indicator moves, the North Star should follow — with a lag of days or weeks. When the North Star drops without explanation, you look to the leading indicators first.
Leading indicators typically fall into one of five categories for B2B SaaS products:
- Activation rate: the percentage of new accounts that reach the activation event within a defined window (commonly 7 or 14 days). Activation is the prerequisite for the North Star. Accounts that do not activate do not reach the value that drives retention.
- Feature adoption breadth: the average number of distinct features or modules used per active account. Narrow adoption is a retention risk; broad adoption correlates with lower churn and higher expansion revenue.
- Weekly active accounts: accounts with at least one meaningful event in the last 7 days. This is a stickiness signal at the account level rather than the individual user level — more useful for B2B products where team-level adoption determines renewal decisions.
- Time-to-value: the median time from account creation to first completion of the activation event. Shortening this number is one of the highest-leverage interventions available to a product team.
- Expansion signal: accounts that added seats, upgraded tier, or activated an additional module in the last 30 days. Leading indicator of net revenue retention.
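Two of these definitions, activation rate and time-to-value, can be sketched directly. The account records below are a hypothetical shape (a signup date plus the date of first activation, `None` if never activated); the field names are assumptions:

```python
from datetime import date
from statistics import median

# Hypothetical account records; names and dates are illustrative.
accounts = [
    {"id": "a1", "signed_up": date(2024, 3, 1), "first_activated": date(2024, 3, 3)},
    {"id": "a2", "signed_up": date(2024, 3, 1), "first_activated": date(2024, 3, 12)},
    {"id": "a3", "signed_up": date(2024, 3, 2), "first_activated": None},
    {"id": "a4", "signed_up": date(2024, 3, 2), "first_activated": date(2024, 3, 4)},
]

def activation_rate(accounts, window_days=14):
    """Share of accounts reaching the activation event within the window."""
    activated = sum(
        1 for a in accounts
        if a["first_activated"] is not None
        and (a["first_activated"] - a["signed_up"]).days <= window_days
    )
    return activated / len(accounts)

def time_to_value_days(accounts):
    """Median days from signup to first activation, activated accounts only."""
    deltas = [
        (a["first_activated"] - a["signed_up"]).days
        for a in accounts
        if a["first_activated"] is not None
    ]
    return median(deltas)

print(activation_rate(accounts))     # 3 of 4 within 14 days -> 0.75
print(time_to_value_days(accounts))  # median of [2, 11, 2] -> 2
```

The window parameter is the "defined window" from the definition above; shortening the median is the time-to-value lever.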
Layer 3: Diagnostic metrics
Diagnostic metrics explain why a leading indicator moved. They are not monitored continuously — they are investigated when something in layer 2 requires explanation. A drop in activation rate might be explained by a specific onboarding step where users drop off. A decline in weekly active accounts might be explained by a specific user segment that has stopped logging in.
The critical error teams make with diagnostic metrics is promoting them to the executive dashboard. A team that surfaces onboarding step completion rates to the board is showing data that requires context the board does not have. The board's job is to understand whether the North Star is moving and why, at the leading indicator level. The product team's job is to understand why the leading indicators moved, at the diagnostic level.
Diagnostic metrics live in the funnel diagnostic dashboard, are used ad hoc, and are owned by the specific team member responsible for the area they describe.
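As an example of the diagnostic layer, here is a sketch of the per-step drop-off calculation behind an onboarding funnel investigation. The step names and counts are invented for illustration, not a prescribed funnel:

```python
# Hypothetical onboarding funnel: ordered (step_name, accounts_reaching_step).
funnel = [
    ("signed_up",        1000),
    ("connected_source",  640),
    ("invited_teammate",  410),
    ("published_report",  380),  # the activation event
]

def step_drop_rates(funnel):
    """Fraction of accounts lost between each consecutive pair of steps."""
    drops = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        drops.append((f"{prev_name} -> {name}", 1 - n / prev_n))
    return drops

for transition, drop in step_drop_rates(funnel):
    print(f"{transition}: {drop:.0%} drop")
```

Run ad hoc, segmented by signup source or cohort, this is the kind of number that belongs in the funnel diagnostic dashboard and nowhere higher.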
The three dashboards
Each layer of the hierarchy corresponds to a dashboard with a specific audience, cadence, and decision type. Building three separate dashboards — rather than one dashboard with filters — is not an organisational formality. It is the structural mechanism that prevents diagnostic metrics from polluting executive conversations and prevents executive metrics from distracting product investigation.
Company health dashboard
Audience: founders, execs, board. Cadence: reviewed weekly. Decision type: strategic — is the business growing, holding, or declining, and is the trajectory sustainable?
This dashboard contains: the North Star metric (trended weekly, segmented by cohort or plan tier), 3–5 leading indicators, MRR or ARR with net revenue retention, and gross logo retention rate. It should not contain onboarding funnel steps, feature-level adoption breakdowns, or any metric that requires product context to interpret.
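The two retention figures on this dashboard reduce to simple arithmetic. A worked sketch, with illustrative figures:

```python
def net_revenue_retention(start_mrr, expansion, contraction, churned_mrr):
    """NRR = (starting MRR + expansion - contraction - churned MRR) / starting MRR."""
    return (start_mrr + expansion - contraction - churned_mrr) / start_mrr

def gross_logo_retention(logos_at_start, logos_churned):
    """Share of accounts (logos) at period start still customers at period end."""
    return (logos_at_start - logos_churned) / logos_at_start

# Illustrative period: $100k starting MRR, $12k expansion,
# $3k contraction, $5k churned; 14 of 200 logos lost.
print(net_revenue_retention(100_000, 12_000, 3_000, 5_000))  # -> 1.04
print(gross_logo_retention(200, 14))                         # -> 0.93
```

NRR above 1.0 means expansion outpaces churn and contraction, which is why it pairs with the North Star: one measures value delivered, the other whether that value is converting into durable revenue.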
Product health dashboard
Audience: product managers, product designers, growth. Cadence: reviewed daily. Decision type: operational — which leading indicators moved, and does the movement require investigation or action today?
This dashboard contains: all layer 2 leading indicators trended daily, activation funnel completion rates by step, at-risk cohorts (accounts past day 7 without activation, accounts with declining login frequency), and recent experiment results. The product team uses this to decide what requires a deeper dive in the diagnostic dashboard.
Funnel diagnostic dashboard
Audience: whoever owns the specific problem being investigated. Cadence: ad hoc — opened when a leading indicator moves unexpectedly. Decision type: investigative — what is the specific cause of the movement, which segment is affected, and what is the hypothesis for the intervention?
This dashboard is assembled on demand from diagnostic metrics. It might contain a specific onboarding step's drop rate for a specific signup source, the feature adoption pattern of accounts that downgraded last month, or the support ticket themes from accounts that churned in a specific cohort. It is a workbench, not a monitoring surface.
Common mistakes in metric hierarchy design
Too many metrics at the top layer
A company health dashboard with 20+ metrics is not a dashboard — it is a spreadsheet. When everything is tracked at the executive level, the implicit message is that everything is equally important. The discipline of a metric hierarchy is exercised most visibly in what is excluded from the top layer, not what is included.
Metrics with no owner
A metric that nobody is accountable for will not be acted on. Every leading indicator in layer 2 and every diagnostic metric in layer 3 should have a named owner — not a team, a person — whose job includes monitoring the metric and responding when it moves. Ownerless metrics decay: they are not checked when they should be, they are not investigated when they drop, and they are not updated when the underlying definition changes.
Conflicting metrics from different tools
When a product analytics tool, a CRM, and a data warehouse all report slightly different numbers for the same event — because they have slightly different definitions of what constitutes an "active user" or what triggers an "activation event" — the organisation stops trusting any of them. The hierarchy requires a single source of truth for each metric at each layer, with the definition documented and versioned.
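One way to implement "documented and versioned" is a central metric registry that every tool and dashboard reads from. The schema below (field names, the idea of a `source_of_truth` field) is an assumption, a sketch of the pattern rather than a standard:

```python
# Hypothetical central registry: one versioned definition per metric.
METRIC_DEFINITIONS = {
    "weekly_active_account": {
        "version": 3,
        "layer": 2,
        "owner": "jane.doe",             # a person, not a team
        "source_of_truth": "warehouse",  # the one canonical tool for this metric
        "definition": (
            "Account with >=1 event from 'meaningful_events' in the "
            "trailing 7 days; excludes internal and test accounts."
        ),
        "meaningful_events": ["report_published", "automation_run"],
    },
}

def describe(metric):
    """Human-readable summary used wherever the metric is displayed."""
    d = METRIC_DEFINITIONS[metric]
    return f"{metric} v{d['version']} (owner: {d['owner']}, source: {d['source_of_truth']})"

print(describe("weekly_active_account"))
```

Bumping `version` whenever the definition changes makes it visible that week-over-week comparisons across a definition change are invalid.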
| Mistake | Symptom | Fix |
|---|---|---|
| Too many top-layer metrics | Executive meetings discuss data rather than decisions | Limit layer 1 to the North Star plus gross retention; move the rest to layer 2 |
| No metric owner | Metrics drift, are not investigated when they drop | Assign a named owner to every leading indicator; review ownership quarterly |
| Conflicting tool definitions | "The numbers don't match" — trust in data collapses | Define terms centrally; choose one tool as the source of truth per metric |
| Diagnostic metrics in exec dashboard | Board asks about onboarding step 3 drop rate | Separate dashboards with documented access; execs see layer 1, PMs see layer 2 and 3 |
Product Analytics for B2B SaaS
In the Product Analytics for B2B SaaS cohort, you build the metric hierarchy and three dashboards for your own product — against your real event data. You leave with the hierarchy documented, the dashboards built, and the ownership map assigned.
Frequently asked questions
What is a North Star metric and how do you choose one?
A North Star metric is a single number that best represents the value your product delivers to customers — and that correlates with long-term revenue health. It must be a leading indicator of retention and expansion, not a lagging indicator like revenue. Good North Star candidates measure customer value delivered: active projects completed, workflows automated, reports generated per active account. Revenue alone is not a valid North Star because it measures what you extracted from customers, not what you delivered to them.
What is the difference between a leading indicator and a diagnostic metric?
A leading indicator is a metric that predicts future movement in the North Star — it changes before the North Star changes, which gives you time to act. A diagnostic metric explains why a leading indicator moved. Leading indicators sit in the product health dashboard and are monitored daily. Diagnostic metrics sit in the funnel diagnostic dashboard and are investigated ad hoc, when a leading indicator moves in a direction that requires explanation.
How many metrics should be on the company health dashboard?
The company health dashboard should typically contain between six and eight metrics: the North Star, 3–5 leading indicators, MRR or ARR, and gross logo retention. More than that at the executive level creates the same problem a hierarchy is designed to solve: everything looks important, nothing drives a decision. The discipline is in what you exclude, not what you include.
What makes a metric hierarchy different from a metrics framework?
A metrics framework is a classification system — it groups metrics into categories like acquisition, activation, retention. A metric hierarchy is a causal structure — it maps which metrics predict which other metrics, and at which level of the organisation each metric is relevant. A framework helps you catalogue what to measure. A hierarchy tells you how the measures relate to each other and who should be looking at what, when.
Build the hierarchy before you build the dashboards.
The Product Analytics for B2B SaaS cohort walks through the metric hierarchy design process against your own product data — and produces the three dashboards you will actually use to run your product.