PRODUCTQUANT WORKBOOK
Growth Execution Toolkit
Operating cadence system for SaaS teams — quarterly planning, monthly monitoring
ProductQuant
productquant.dev
$199
Table of Contents
This toolkit provides the operational structure to translate strategy into consistent execution. It is built for B2B SaaS teams with traction who need to move from scattered work to a clear, measurable cadence. The frameworks address the core execution challenges observed in teams scaling from $1M to $10M ARR, where process debt becomes a primary limiter of growth.
1. Quarterly OKR Planning Worksheet
Translate annual strategy into focused quarterly objectives and key results.
2. Monthly Metrics Review Template
Structured agenda to diagnose performance and decide on corrective actions.
3. Growth Experiment Tracker
System to prioritize, run, and learn from product and marketing experiments.
4. Team Alignment Canvas
Visual map to ensure cross-functional understanding of priorities and dependencies.
5. Metric Definition & Ownership Matrix
Clarify what each metric means and who is accountable for its movement.
6. Decision Cadence Framework
Meeting rhythms and criteria for making strategic, tactical, and operational calls.
7. Execution Velocity Checklist
Diagnostic to identify and remove bottlenecks in your development cycle.
8. Retrospective & Learning Log
Capture institutional knowledge from successes and failures systematically.
01
Quarterly OKR Planning Worksheet
Translate annual strategy into focused quarterly objectives and key results. Avoid the common pitfall of creating goals that are either too vague to execute or too tactical to matter.
Strategic Context & Inputs
Effective OKRs start with clear inputs. A team of three, costing between $748K and $1M in Year 1, cannot afford misaligned direction. The hidden cost of misalignment is management overhead—10–20 hours per week of executive time spent resolving conflicts instead of driving growth. Before setting OKRs, ensure you have answered the foundational questions from your growth diagnosis: What is happening? What matters most next?
Use the Product DNA framework to ensure your objectives match your product's structural type. A compliance tool should not adopt OKRs designed for a collaboration tool. This mismatch is a primary cause of strategy failure. For example, a product with a strong open-source component like Novu (38.7K GitHub stars) must have objectives that incorporate that community flywheel, not treat it as an afterthought.
Objective ≠ Project. An objective is a directional outcome (e.g., "Become the preferred choice for mid-market engineering teams"). A project is a specific initiative to help achieve it. If your OKR list reads like a project backlog, you have skipped the strategic layer.
Strategic Theme: What is the single, overarching focus for this quarter? (e.g., "Improve retention in the $50K–$100K ARR segment")
Primary Constraint: What is the one bottleneck we must solve to make progress? (Team capacity? Market awareness? Product maturity?)
Key Inputs from Last Quarter: What did we learn that must inform this plan?
Objective & Key Result Formulation
Aim for 2–3 objectives per quarter. Each objective should have 2–3 key results (KRs). KRs must be measurable, time-bound, and owned. A common failure is setting KRs that are outputs (features shipped) rather than outcomes (changes in user behavior or business metrics). The worksheet below enforces the shift from output to outcome thinking by requiring a metric, baseline, target, and date for every KR; a small progress-tracking sketch follows it.
Objective 1:
KR 1.1: (Metric) from (Baseline) to (Target) by (Date)
KR 1.2:
Objective 2:
KR 2.1:
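Teams that track KRs in a lightweight script or spreadsheet can turn this format directly into a progress number. A minimal sketch, assuming the (Metric) from (Baseline) to (Target) by (Date) structure above; all values are hypothetical:

```python
# Minimal KR progress tracker. Progress is measured against the
# baseline-to-target gap, not against zero. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class KeyResult:
    metric: str
    baseline: float
    target: float
    current: float
    owner: str

    def progress(self) -> float:
        """Fraction of the baseline-to-target gap closed so far."""
        return (self.current - self.baseline) / (self.target - self.baseline)

kr = KeyResult(metric="7-day activation rate", baseline=0.22,
               target=0.30, current=0.25, owner="Product Manager")
print(f"{kr.metric}: {kr.progress():.0%} of the way to target")  # -> 38%
```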
Commitment vs. Aspirational OKRs
Not all OKRs are created equal. Commitment OKRs are promises to the business (e.g., hitting a revenue target) and require full resource allocation. Aspirational OKRs are stretch goals that explore new territory. Mixing them without clarity leads to missed commitments and strategic drift. A healthy ratio is 70% commitment, 30% aspirational. Label each objective accordingly to set proper expectations.
Commitment OKR
Goal: Achieve core business metric.
Success: 100% achievement expected.
Example: "Hit $550K MRR by quarter end."
Aspirational OKR
Goal: Explore new growth vector.
Success: 70% achievement is valuable.
Example: "Validate enterprise self-serve channel."
02
Monthly Metrics Review Template
Structured agenda to diagnose performance and decide on corrective actions. Move from passive dashboard viewing to active decision-making.
The Review Cadence & Attendees
This 60-minute meeting should occur within the first five business days of each month. Required attendees: Head of Product, Head of Engineering, Head of Growth/Marketing, and the CEO/Founder for companies under $10M ARR. The goal is not to report status, but to interpret signal, diagnose root causes, and assign actions. Without this discipline, data remains an artifact, not a tool. The financial cost of a misaligned leadership team attending an unproductive meeting is significant: with four leaders at a blended rate of $200/hour, a single poorly run 60-minute review wastes over $800 in direct cost, plus the opportunity cost of delayed decisions.
Pre-Work
- Metrics dashboard updated with prior month data.
- Narrative summary prepared by data/analytics owner.
- Any known anomalies or data issues flagged.
- Experiment results from the past month compiled.
Outcome
- Clear diagnosis for any metric off-track.
- List of 1–3 corrective actions assigned.
- Decision on whether to adjust quarterly OKRs.
- One key learning added to the institutional log.
Metric Categories & Diagnostic Questions
Review metrics in this order: Growth, Engagement, Conversion, Retention. For each category, ask the same diagnostic sequence: What changed? Where in the user journey did the change originate? What upstream driver explains it? The running example is a developer tool like Novu, which has an open-source flywheel (38.7K GitHub stars) but a complex activation path. This structured diagnosis prevents teams from jumping to solutions before understanding the problem's location in the user journey.
Beware of the dashboard trap. Clean data from an event taxonomy is step one. The Monthly Review is the system that turns that data into decisions. If you review metrics but no decisions follow, you have a meeting, not an operating system.
Action Tracking & Follow-through
The most critical part of the review is the action log. Each action must have a single owner, a clear due date (typically before the next review), and a success criterion. Without this, insights decay. The worksheet below formalizes the output of the meeting and serves as the pre-read for the next session, creating a closed-loop system.
This Month's Diagnosis (The 'Why' behind the numbers):
Corrective Actions (Owner, Due Date, Success Metric):
OKR Adjustment Needed? (Yes/No & Rationale):
Key Learning for Retrospective Log:
03
Growth Experiment Tracker
System to prioritize, run, and learn from product and marketing experiments. Bridge the gap between insight from metrics and validated learning.
Experiment Pipeline & Prioritization
Experiments are your mechanism for testing hypotheses derived from metric reviews. A common failure is running experiments that are too small to matter or too large to complete quickly. Use the ICE score (Impact, Confidence, Ease) to rank potential tests. Impact should be tied directly to a key result. Confidence is based on prior data or qualitative research. Ease is a function of required resources. The scoring system forces quantitative rigor on what is often a qualitative debate.
- Impact (e.g., 8.5): Estimated effect on primary KR if successful. Scale 1–10.
- Confidence (e.g., 7.0): Strength of evidence supporting the hypothesis. Scale 1–10.
- Ease (e.g., 6.0): Inverse of effort (1 = very hard, 10 = very easy).
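The toolkit does not prescribe how the three components combine, so the sketch below assumes the common convention of averaging them (multiplying is an equally common variant that penalizes any low component more heavily). The component scores are illustrative, chosen so the averages reproduce the ICE scores in the pipeline table that follows:

```python
# Rank candidate experiments by ICE score. Here ICE is the mean of the
# three components; multiplying them is an equally common convention.
def ice_score(impact: float, confidence: float, ease: float) -> float:
    return round((impact + confidence + ease) / 3, 1)

backlog = [
    ("One-click test notification in dashboard", 9.0, 8.5, 7.0),
    ("Raise free tier member cap from 3 to 5", 7.0, 6.0, 6.5),
    ("'Switching from SendGrid' docs page", 8.0, 7.5, 8.0),
]

for name, i, c, e in sorted(backlog, key=lambda row: -ice_score(*row[1:])):
    print(f"{ice_score(i, c, e):>4}  {name}")
# -> 8.2, 7.8, 6.5
```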
| Hypothesis | Metric Impacted | ICE Score | Status | Owner |
|---|---|---|---|---|
| "Adding a one-click test notification in the dashboard will increase 7-day activation by 15%." | Activation % | 8.2 | Ready | PM |
| "Changing the free tier member cap from 3 to 5 will increase team collaboration and upgrade intent." | Free-to-Paid % | 6.5 | Backlog | Growth |
| "A dedicated docs page for 'Switching from SendGrid' will increase conversion from email-centric signups." | PQL Conversion | 7.8 | Running | Content |
Prioritize for learning, not just results. An experiment with a moderate ICE score that tests a fundamental assumption about your user is often more valuable than a high-score test that optimizes a known lever. Balance your pipeline between optimization and discovery.
Experiment Design & Rigor
Poor experiment design leads to false positives and wasted cycles. Define success and guardrail metrics upfront. Success metrics measure the intended effect. Guardrail metrics ensure you don't inadvertently harm other parts of the business (e.g., increasing activation by annoying users, which would increase short-term churn). Determine required sample size and duration for statistical significance before launching.
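As a sketch of the sizing step, the standard two-proportion power calculation can be done with the Python standard library alone. The 20% baseline and 15% relative lift are hypothetical, and 95% confidence / 80% power are common defaults rather than toolkit requirements:

```python
# Required sample size per variant for a two-proportion z-test
# (normal approximation). Standard library only.
from statistics import NormalDist

def sample_size_per_variant(p_base: float, rel_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    p_var = p_base * (1 + rel_lift)            # expected variant rate
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_base + p_var) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_base * (1 - p_base) + p_var * (1 - p_var)) ** 0.5) ** 2
    return int(num / (p_base - p_var) ** 2) + 1

# Hypothetical: 20% baseline activation, testing for a 15% relative lift.
print(sample_size_per_variant(0.20, 0.15))  # -> roughly 2,900 per arm
```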
Experiment ID: EXP-026-01
Hypothesis:
Variant & Control:
Success Metric & Target Lift:
Guardrail Metric & Tolerance:
Required Sample Size / Duration:
Experiment Log & Learning
Every experiment, successful or not, must produce a learning that informs future decisions. This log prevents teams from repeating the same tests or forgetting why a certain path was abandoned. It turns tacit knowledge into institutional knowledge. The final section of the worksheet captures this.
Result & Statistical Confidence:
Key Learning & Why it Happened:
Decision: Scale, Iterate, or Abandon?
04
Team Alignment Canvas
Visual map to ensure cross-functional understanding of priorities and dependencies. Surface misalignment before it creates execution drag.
Mapping Dependencies & Handoffs
Growth work is inherently cross-functional. A growth marketer needs engineering support for tracking; a product manager needs sales input on pricing feedback. The 10–20 hours/week of management overhead often comes from resolving conflicts that arise from unclear dependencies. This canvas makes them explicit. It is based on the RACI model but simplified for speed, focusing on the critical handoffs that determine quarterly success.
Alignment is a verb, not a state. It is the output of a structured conversation about what each team needs from others to hit their goals. Update this canvas quarterly and whenever a major new initiative starts.
Workshop & Conflict Resolution
Use a 90-minute workshop at the start of each quarter to fill this out together. The rule: dependencies must be acknowledged by the providing team. If a dependency cannot be met, the goal must be adjusted. This forces realistic planning. The worksheet below captures the single most critical unresolved item from the workshop, creating immediate accountability.
Biggest Unresolved Dependency:
Owner for Resolution:
Date to Revisit:
Health of Collaboration Metrics
Beyond goals, track the health of collaboration itself. These leading indicators predict future alignment issues. Monitor them via brief surveys or retrospectives.
| Collaboration Metric | Measurement Method | Target | Current Health |
|---|---|---|---|
| Dependency Met On-Time | % of committed handoffs delivered as promised | >90% | Good |
| Cross-Team Feedback Quality | Survey: "Feedback from other teams is actionable" | Avg. score >4.0/5 | Needs Work |
| Blame-Free Problem Solving | Retrospective sentiment analysis | Positive:Neutral ratio >2:1 | Concerning |
05
Metric Definition & Ownership Matrix
Clarify what each metric means and who is accountable for its movement. Eliminate ambiguity about which metric matters for which decision.
Metric Specification & Data Source
A metric without a clear definition is worse than useless—it creates false confidence. This matrix forces precision on calculation, data source, and refresh cadence. It is the foundational layer that makes your Monthly Metrics Review possible. Based on the Event Taxonomy, ensure every metric here can be traced to a tracked event. Inconsistencies in definition, such as whether a "session" requires a pageview or a specific event, can lead to 20-30% variances in reported numbers, crippling decision-making.
| Metric | Definition (Formula) | Data Source | Refresh | Primary Owner |
|---|---|---|---|---|
| Product-Qualified Lead (PQL) | A user who has performed [activation event] AND [usage threshold] within [time period]. | PostHog (Events) | Daily | Growth Marketer |
| 7-Day Activation Rate | % of signups who complete [first value moment] within 7 days of account creation. | PostHog (Funnel) | Weekly | Product Manager |
| Net Revenue Retention (NRR) | (Starting MRR + Expansion - Churn) / Starting MRR for a cohort over a trailing 12-month period. | Stripe + CRM | Monthly | Head of Sales/CS |
| Weekly Active User (WAU) | Count of unique users with ≥1 session in a 7-day rolling period. | PostHog (Sessions) | Weekly | Product Manager |
| Free-to-Paid Conversion % | # of paid subscriptions created / # of free accounts created in a period. | Stripe & Auth0 | Monthly | Growth Marketer |
| Feature Adoption % | % of active users who used a specific feature in the last 30 days. | PostHog (Feature Flags) | Monthly | Product Manager |
Ownership is accountability, not just reporting. The Primary Owner is responsible for diagnosing movement in this metric and proposing actions to improve it. They are not just the person who updates the dashboard.
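Expressing a definition as code is a quick way to surface ambiguity before it reaches a dashboard. A minimal sketch of the NRR formula from the matrix above, with hypothetical cohort figures:

```python
# Net Revenue Retention per the matrix definition:
# (Starting MRR + Expansion - Churn) / Starting MRR, trailing 12 months.
def nrr(starting_mrr: float, expansion: float, churned: float) -> float:
    return (starting_mrr + expansion - churned) / starting_mrr

# Hypothetical trailing-12-month cohort figures.
print(f"NRR: {nrr(starting_mrr=100_000, expansion=18_000, churned=9_000):.0%}")
# -> NRR: 109%
```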
Health Scoring & Thresholds
Define what "good" and "bad" look like for each metric. This turns raw numbers into actionable signals. Use historical performance and industry benchmarks where available. For SaaS products, benchmarks vary by segment: a developer tool's activation rate will differ from a marketing automation platform's. Set thresholds based on your own historical 75th percentile (Green), median (Yellow), and 25th percentile (Red) performance.
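A minimal sketch of that percentile approach, assuming at least a year of historical values for the metric (the figures below are illustrative weekly activation rates):

```python
# Derive Green/Yellow/Red thresholds from your own history:
# Green above the 75th percentile, Red below the 25th, Yellow between.
from statistics import quantiles

history = [0.18, 0.21, 0.19, 0.24, 0.22, 0.20, 0.23, 0.25,
           0.19, 0.22, 0.21, 0.26]  # illustrative weekly activation rates

q1, _median, q3 = quantiles(history, n=4)  # 25th / 50th / 75th percentiles
print(f"Green  > {q3:.2f}")
print(f"Yellow   {q1:.2f}-{q3:.2f}")
print(f"Red    < {q1:.2f}")
```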
Metric: [Metric Name]
Green Threshold (>):
Yellow Threshold (Caution):
Red Threshold (<):
Action if in Red for 2 consecutive periods:
Metric Interdependencies Map
Metrics influence each other. Optimizing one in isolation can harm another. This table maps common trade-offs to prevent local optimization at the expense of overall growth. Understanding these relationships is key to holistic decision-making in the Monthly Review.
| Primary Metric | Positively Correlates With | Negatively Correlates With (Risk) | Monitoring Action |
|---|---|---|---|
| Activation Rate | Long-term retention, NRR | Short-term signup volume (if friction added) | Watch signup conversion funnel. |
| PQL Volume | Sales pipeline, top-of-funnel awareness | PQL quality (if definition is too broad) | Track PQL to Paid conversion rate. |
| Free-to-Paid % | MRR growth, revenue efficiency | Free user engagement (if free tier is too restrictive) | Monitor free user WAU trend. |
06
Decision Cadence Framework
Meeting rhythms and criteria for making strategic, tactical, and operational calls. Separate noise from signal in team communication.
Meeting Hierarchy & Purpose
Too many meetings create drag; too few create chaos. This framework defines three tiers of meetings, each with a distinct purpose, attendee list, and decision authority. It is designed to minimize the 10–20 hours/week of reactive management overhead by making decision-making predictable. The cadence creates a rhythm of reflection (strategic), diagnosis (tactical), and execution (operational), which is the heartbeat of a growth operating system.
Strategic (Quarterly)
Purpose: Set/refine OKRs, review business model, allocate major resources.
Attendees: Leadership team.
Output: Approved OKRs, budget shifts.
Duration: 4-6 hours.
Tactical (Monthly)
Purpose: Monthly Metrics Review, adjust experiment pipeline, resolve cross-team blocks.
Attendees: Cross-functional leads.
Output: Action plan, reprioritized backlog.
Duration: 60 mins.
Operational (Weekly)
Purpose: Growth sync, experiment check-in, progress on actions.
Attendees: Core growth team.
Output: Updated experiment status, new hypotheses.
Duration: 30 mins.
Decision Rights & Escalation Path
Clarify who can make which call. A decision that should be made at the weekly sync but gets escalated to leadership kills velocity. The table below documents common decision types and their designated authority. The escalation path is not for approval, but for arbitration when the designated authority is stuck or the decision has cross-functional implications beyond their scope.
| Decision Type | Example | Authority | Escalation Path | Time to Decide |
|---|---|---|---|---|
| Experiment Go/No-Go | Should we run the A/B test on pricing page copy? | Growth Lead | Monthly Review | <2 days |
| Feature Scope Change | Should we add an extra integration to the MVP? | Product Manager | Head of Product | <3 days |
| Budget Reallocation (<$5K) | Move ad spend from Channel A to Channel B. | Growth Marketer | CEO/CFO | <1 day |
| OKR Adjustment | Change a KR target mid-quarter due to market shift. | Leadership Team | N/A | Monthly Review |
Default to action. If a decision is reversible and low-cost, the bias should be to test and learn, not to debate. Define what "reversible" means for your team (e.g., can be rolled back with a feature flag in <1 hour) to accelerate the operational cadence.
Meeting Health Diagnostic
Regularly assess the effectiveness of each meeting in the cadence. Poorly run meetings are a major source of drag. Use this simple diagnostic quarterly to identify and fix meeting anti-patterns.
Meeting: [e.g., Weekly Growth Sync]
Recurring Decision We Often Get Stuck On:
Proposed Owner/Authority:
Test This For (Time Period):
Meeting Health Score (1-5):
07
Execution Velocity Checklist
Diagnostic to identify and remove bottlenecks in your development cycle. Speed of learning is a competitive advantage.
Velocity Drivers & Blockers
Velocity is not just about engineering output. It is the speed at which a hypothesis moves from idea to validated learning. Slow velocity is often a systems problem: unclear requirements, slow feedback loops, or infrastructure debt. Use this checklist quarterly to diagnose and address the largest constraints. For a typical growth team, the cycle time from hypothesis to result should be measured in weeks, not months.
The cost of slow velocity is invisible. A team costing $460K–$560K in annual salary that moves slowly is a larger financial drain than a more expensive consultant who drives decisions weekly.
Clarity: Are experiment hypotheses and success criteria clearly defined before work starts?
Tooling: Do we have a reliable, fast way to deploy experiments and measure results (e.g., feature flags, analytics)?
Dependencies: Can the growth team ship most tests without depending on engineering resources?
Feedback Loops: How long does it take to get statistically significant results from an A/B test?
Debt: Is technical or design debt slowing down the modification of key user flows?
Process: Are there unnecessary approval gates or documentation requirements for small changes?
Scoring & Improvement Plan
Score each driver on a scale of 1–5. Focus improvement efforts on the lowest-scoring areas with the highest impact on key results. The radar visualization provides a snapshot of systemic health. Track these scores over time to measure the impact of your process improvements.
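The radar visualization itself is easy to regenerate. A minimal plotting sketch, assuming matplotlib is installed; the six driver scores are a hypothetical self-assessment:

```python
# Radar chart of the six velocity-driver scores (1-5 scale).
# Assumes matplotlib is installed; the scores are a hypothetical example.
import math
import matplotlib.pyplot as plt

drivers = ["Clarity", "Tooling", "Dependencies",
           "Feedback Loops", "Debt", "Process"]
scores = [4, 3, 2, 3, 2, 4]

# One angle per driver; repeat the first point to close the polygon.
angles = [n * 2 * math.pi / len(drivers) for n in range(len(drivers))]
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(drivers)
ax.set_ylim(0, 5)
plt.show()
```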
Cycle Time Benchmarking
Measure and benchmark your actual cycle times. This table provides realistic targets for a well-instrumented SaaS team. If your times are significantly longer, investigate the specific phase causing the delay.
| Phase | Description | Target Duration | Your Current |
|---|---|---|---|
| Ideation to Design | Hypothesis to wireframe/spec ready for build | <3 days | |
| Build to Deploy | Code start to feature-flagged deploy | <5 days | |
| Results to Decision | Enough data collected to make a scale/iterate/kill call | <14 days | |
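Measuring these phases requires only timestamping the handoffs between them. A minimal sketch with hypothetical milestone dates, flagging any phase that exceeds its target:

```python
# Compare actual cycle times against the targets in the table above.
from datetime import date

targets = {"Ideation to Design": 3, "Build to Deploy": 5,
           "Results to Decision": 14}

# Hypothetical milestone dates (four milestones bound three phases).
milestones = [date(2026, 1, 5), date(2026, 1, 7),
              date(2026, 1, 13), date(2026, 1, 26)]

for (phase, target), start, end in zip(targets.items(),
                                       milestones, milestones[1:]):
    days = (end - start).days
    flag = "OK" if days <= target else "SLOW"
    print(f"{phase:<22} {days:>2}d (target <{target}d)  {flag}")
```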
Biggest Velocity Blocker (Score <3):
Proposed Solution (Test):
Success Metric for Solution:
Target Improvement in Cycle Time:
08
Retrospective & Learning Log
Capture institutional knowledge from successes and failures systematically. Prevent repeating mistakes and amplify what works.
Structured Retrospective Format
Hold a brief retrospective after each major experiment, project, or quarterly planning cycle. The goal is to extract reusable insights, not to assign blame. Use the format: What happened? What did we learn? What will we do differently next time? This log becomes a searchable knowledge base for the team. The discipline of logging turns individual memory into a company asset, mitigating the risk of key personnel turnover. For a team of three, the loss of one person can erase roughly a third of institutional knowledge if it's not documented.
| Date | Initiative | Outcome | Key Learning | Process Change | Tags |
|---|---|---|---|---|---|
| Q1 '26 | Pricing Page A/B Test | No significant lift | Traffic volume too low for signal; need aggregated tests. | Batch small copy tests into larger experiments. | pricing, experiment |
| Q1 '26 | Onboarding Flow Redesign | Activation +10% | The "one-click test" reduced perceived time-to-value. | Apply "first value in <2 clicks" pattern to other flows. | onboarding, activation |
| Q4 '25 | Self-Serve Enterprise Trial | High signup, zero conversion | Users needed security docs upfront, not after signup. | Add gate with resource links before trial entry. | enterprise, conversion |
Learning is an asset. The knowledge that a certain type of test doesn't work for your product DNA, or that a specific onboarding moment drives retention, is as valuable as code. Systematize it.
Taxonomy & Searchability
For the log to be useful, it must be searchable. Develop a consistent tagging taxonomy based on your key metric areas, product modules, and experiment types. This allows new team members to quickly find relevant past learnings when starting a new project. The tag list should evolve with your product but start with core categories.
Metric Area Tags
activation, retention, conversion, monetization, acquisition
Experiment Type Tags
copy-test, ui-change, pricing, onboarding, feature
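Searchability can be as simple as a tag filter over structured entries. A minimal sketch using hypothetical entries drawn from the log above:

```python
# Filter the learning log by tag so past learnings are discoverable.
log = [
    {"initiative": "Pricing Page A/B Test", "tags": {"pricing", "copy-test"}},
    {"initiative": "Onboarding Flow Redesign", "tags": {"onboarding", "activation"}},
    {"initiative": "Self-Serve Enterprise Trial", "tags": {"enterprise", "conversion"}},
]

def search(entries: list[dict], *tags: str) -> list[dict]:
    """Return entries carrying ALL of the requested tags."""
    wanted = set(tags)
    return [e for e in entries if wanted <= e["tags"]]

for entry in search(log, "onboarding"):
    print(entry["initiative"])  # -> Onboarding Flow Redesign
```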
Learning Log Entry
Make logging learning a habit. The template below ensures consistency and actionability. Integrate this as the final step in your experiment tracker and monthly review process.
Date: [Date]
Initiative/Experiment:
Hypothesis Tested:
Outcome (Metric & Confidence):
Surprising Learning (Why?):
Decision/Process Change for Next Time:
Tags (e.g., onboarding, pricing, activation):
About the Author
Jake McMahon
B2B SaaS Product Strategist
Jake has spent 8+ years helping B2B SaaS companies turn product data into strategic decisions. With a background in Behavioural Psychology and Big Data, he specialises in competitive intelligence, product-led growth assessment, and pricing architecture. He has conducted over 200 product analyses across vertical SaaS, developer tools, and enterprise platforms — identifying the patterns that separate market leaders from the rest.
productquant.dev