TL;DR

  • Feature flags let you toggle features on or off for specific users without redeploying code. You can roll out a feature to 10% of users, target specific cohorts (e.g., only Pro plan users), or kill a feature instantly if it breaks.
  • PostHog feature flags are free and unlimited on the free tier. No event count limits, no user limits. Over 90% of PostHog companies stay on the free tier.
  • The 3 use cases every product team needs: gradual rollouts (release to 10% → 50% → 100%), A/B experiments (randomly assign users to control vs. variant), and kill switches (instantly disable a broken feature without a code deploy).
  • Multivariate flags support more than 2 variants — useful for testing 3+ pricing tiers, onboarding flows, or UI variations simultaneously.
  • Flags integrate directly with PostHog experiments. When you create an experiment in PostHog, it automatically creates a feature flag behind the scenes. The flag controls who sees the variant; the experiment tracks the impact.
  • Flags without experiments are blind releases. You're changing the product for some users without measuring the impact. Always pair a flag with an experiment or at minimum a dashboard.

What Feature Flags Actually Do

A feature flag is a remote-controlled toggle for your code. Instead of shipping a feature to everyone at once, you wrap it in a flag and control who sees it:

```javascript
if (posthog.isFeatureEnabled('new-onboarding-flow')) {
  showNewOnboarding()
} else {
  showOldOnboarding()
}
```

You can change who sees the flag without touching your code:

  • Percentage-based: show to 10% of all users
  • Property-based: show to users where plan === 'pro'
  • Cohort-based: show to users in a specific PostHog cohort
  • Group-based: show to specific organizations or teams

Feature flags separate deploying your code from releasing a new feature. This simple-sounding concept completely changes how you build and ship software, giving your team precise control over who sees what, and when — all without needing another deployment.
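Percentage rollouts work because flag evaluation is deterministic: the same user gets the same answer on every visit. A minimal sketch of the idea — hash a stable user ID into a bucket and compare it to the rollout percentage. This is illustrative, not PostHog's actual hashing scheme:

```javascript
// Hash flag key + user ID into a bucket in [0, 100).
// FNV-1a is used here for simplicity; the real SDK may differ.
function bucketFor(flagKey, userId) {
  const input = `${flagKey}.${userId}`;
  let hash = 2166136261;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return (hash >>> 0) % 100;
}

// A user is in the rollout if their bucket falls below the percentage.
function isEnabled(flagKey, userId, rolloutPercent) {
  return bucketFor(flagKey, userId) < rolloutPercent;
}

// Raising the rollout from 10% to 50% keeps the original 10% enabled
// and adds new users, because buckets are stable per user.
console.log(isEnabled('new-onboarding-flow', 'user_123', 0));   // false
console.log(isEnabled('new-onboarding-flow', 'user_123', 100)); // true
```

Because buckets are stable, ramping a flag up never flips a user back and forth between variants mid-session.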

How to Create Your First Feature Flag

The Safe Rollout Lifecycle: From internal dogfooding to 100% general availability.

Step 1: Create the Flag in PostHog

  1. Click Feature Flags in the left nav
  2. Click New feature flag
  3. Give it a key (e.g., new-onboarding-flow)
  4. Set release conditions: Rollout percentage — start at 0% (nobody sees it yet)
  5. Save

Step 2: Add the Flag to Your Code

```javascript
posthog.init('<api_key>')

if (posthog.isFeatureEnabled('new-onboarding-flow')) {
  renderNewOnboarding()
} else {
  renderOldOnboarding()
}
```
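One caveat worth knowing: flag values are fetched from PostHog asynchronously, so checking a flag immediately after init can return an answer before flags have loaded. posthog-js exposes an onFeatureFlags callback for this. The sketch below stubs a minimal posthog object so it runs standalone — the stub is illustrative, not the real SDK:

```javascript
// Minimal stand-in for posthog-js, just enough to show the pattern.
// In real code, use the posthog object from `posthog-js` instead.
const posthog = {
  _flags: { 'new-onboarding-flow': true },  // pretend the server returned this
  isFeatureEnabled(key) { return this._flags[key] === true; },
  onFeatureFlags(callback) { callback(); }, // the real SDK fires this once flags load
};

let rendered = null;

// Gate rendering on the callback, not on init, so you never
// branch on flags that haven't loaded yet.
posthog.onFeatureFlags(() => {
  rendered = posthog.isFeatureEnabled('new-onboarding-flow')
    ? 'new-onboarding'
    : 'old-onboarding';
});

console.log(rendered); // 'new-onboarding' with the stubbed flags above
```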

Step 3: Gradually Increase Rollout

A typical rollout strategy looks like this:

| Day | Rollout | Purpose |
| --- | --- | --- |
| Day 1 | 0% | Nobody sees it — verify it works in staging |
| Day 2 | 10% | Internal dogfooding — catch obvious bugs |
| Day 4 | 25% | Targeted betas for power users |
| Day 7 | 50% | If metrics hold, expand |
| Day 14 | 100% | Full launch |

If something breaks, drop back to 0% instantly — no code deploy needed. This fundamentally de-risks the entire development process, turning what used to be a high-stress event into a controlled, iterative rollout.

The 3 Use Cases Every Product Team Needs

1. Gradual Rollouts (Risk Reduction)

Scenario: You're shipping a redesigned pricing page. Instead of deploying it to all users at once (and risking a revenue-impacting mistake), you roll it out gradually.

Why this matters: If the new pricing page accidentally breaks the checkout flow, you catch it at 10% rollout (affecting 100 users out of 1,000) instead of at 100% (affecting all 1,000) — a 10× difference in blast radius. The most immediate win is risk reduction: no more "big bang" releases where a feature goes live for everyone at once, followed by a frantic scramble to fix surprise bugs.

2. A/B Experiments (Decision-Making)

Scenario: You want to test whether a new onboarding flow improves activation. Create a multivariate flag with two variants:

Flag: onboarding-variant

  • control (50%): old onboarding flow
  • new-flow (50%): new onboarding flow

Then create a PostHog experiment tied to this flag. The experiment tracks activation rate for both variants and tells you when one is statistically significantly better.

Why this matters: Without flags + experiments, every product change is a leap of faith. With them, every change is a test. You stop arguing about which design is better and start letting the data decide.

3. Kill Switches (Emergency Response)

Scenario: A feature you shipped last week starts causing errors for 20% of users.

Without a feature flag:

  1. Find the bug
  2. Fix it
  3. Deploy a new version
  4. Wait for the deploy to propagate

With a feature flag:

  1. Toggle the flag to 0%
  2. The feature is instantly disabled for everyone
  3. Fix the bug at your own pace
  4. Re-enable when ready

Why this matters: Mean time to recovery drops from hours (deploy cycle) to seconds (flag toggle). For revenue-impacting bugs, that's the difference between losing 100 customers and losing 1,000.
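The kill-switch pattern is just a guard around the risky code path, with the old path as the fallback. A generic sketch — the flag key and checkout functions here are illustrative, and the in-memory flag state stands in for the value PostHog serves:

```javascript
// Flag state, normally fetched from PostHog; toggling the flag
// in the dashboard flips this value without a deploy.
let flagState = { 'new-checkout': true };

function isFeatureEnabled(key) { return flagState[key] === true; }

function newCheckout(order) { return `new:${order}`; }
function legacyCheckout(order) { return `legacy:${order}`; }

// Every call re-checks the flag, so disabling it takes effect
// on the next request — no deploy, no restart.
function checkout(order) {
  return isFeatureEnabled('new-checkout')
    ? newCheckout(order)
    : legacyCheckout(order);
}

console.log(checkout('A1')); // 'new:A1'
flagState['new-checkout'] = false; // the "kill switch": toggle to 0%
console.log(checkout('A1')); // 'legacy:A1'
```

The design point: the legacy path stays in the codebase until the flag is retired, which is exactly what makes the instant rollback possible.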

10×

Cost reduction from gradual rollouts: catching a bug at 10% rollout affects 10× fewer users than catching it at 100%. Mean time to recovery drops from hours to seconds with kill switches.

Multivariate Flags: Testing More Than 2 Variants

A standard flag has 2 values: true or false. A multivariate flag has 3+ values:

Flag: pricing-page-layout

  • control (33%): current layout
  • variant-a (33%): layout with pricing table first
  • variant-b (34%): layout with testimonials first
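Conceptually, a multivariate flag maps each user's bucket onto cumulative weighted ranges. A minimal sketch of that assignment (illustrative, not PostHog's implementation):

```javascript
// Variants with rollout percentages that sum to 100.
const variants = [
  { key: 'control',   percent: 33 },
  { key: 'variant-a', percent: 33 },
  { key: 'variant-b', percent: 34 },
];

// Map a bucket in [0, 100) onto cumulative ranges:
// 0-32 -> control, 33-65 -> variant-a, 66-99 -> variant-b.
function variantForBucket(bucket, variants) {
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.percent;
    if (bucket < cumulative) return v.key;
  }
  return null; // only reachable if percentages sum below 100
}

console.log(variantForBucket(10, variants)); // 'control'
console.log(variantForBucket(50, variants)); // 'variant-a'
console.log(variantForBucket(99, variants)); // 'variant-b'
```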

Use multivariate flags when:

  • Testing 3+ onboarding flows
  • Comparing multiple pricing page layouts
  • Experimenting with different CTA copy variants
  • Testing different free trial lengths (7, 14, 30 days)

Warning: Multivariate tests need larger sample sizes. Each variant gets a smaller slice of traffic, so it takes longer to reach statistical significance. Don't run 5-variant tests unless you have the traffic to support them.
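The traffic cost of extra variants is easy to estimate: if each variant needs roughly N users to reach significance, the total sample grows linearly with variant count. A rough back-of-the-envelope sketch — the numbers are illustrative assumptions, and N should come from a proper power calculation, not this function:

```javascript
// Rough test-duration estimate: total users needed divided by daily traffic.
// perVariantN would normally come from a power calculation.
function daysToFill(dailyTraffic, perVariantN, variantCount) {
  const totalNeeded = perVariantN * variantCount;
  return Math.ceil(totalNeeded / dailyTraffic);
}

// Example: 1,000 users/day, ~5,000 users needed per variant.
console.log(daysToFill(1000, 5000, 2)); // 10 days for a 2-variant A/B test
console.log(daysToFill(1000, 5000, 5)); // 25 days for a 5-variant test
```

Going from 2 variants to 5 turns a two-week test into nearly a month at the same traffic, which is the warning above in concrete terms.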

Targeting: Who Sees the Flag

Percentage Rollout

Simple random assignment. "Show this flag to 25% of users."

Property-Based Targeting

Show the flag to users with specific properties:

  • plan === 'pro' — only Pro users
  • role === 'admin' — only admins
  • signup_source === 'organic' — only organic signups

These properties come from your posthog.identify calls:

```javascript
posthog.identify('user_123', {
  plan: 'pro',
  role: 'admin',
  signup_source: 'organic',
})
```

Cohort-Based Targeting

Show the flag to users in a specific PostHog cohort: "Users who completed activation in the last 30 days" or "Users who haven't used Feature X in 14 days."

Group-Based Targeting

Show the flag to specific organizations: group_type: 'organization', group_key: 'acme_corp'.
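Conceptually, property- and group-based targeting both reduce to matching a set of release conditions against the properties PostHog has stored for the person or group. A simplified matcher — illustrative only, not PostHog's actual rule engine, which also supports operators beyond exact equality:

```javascript
// Release condition: every listed property must match exactly.
function matchesConditions(properties, conditions) {
  return Object.entries(conditions).every(
    ([key, value]) => properties[key] === value
  );
}

// Properties sent earlier via posthog.identify(...)
const user = { plan: 'pro', role: 'admin', signup_source: 'organic' };

console.log(matchesConditions(user, { plan: 'pro' }));                // true
console.log(matchesConditions(user, { plan: 'pro', role: 'admin' })); // true
console.log(matchesConditions(user, { plan: 'free' }));               // false
```

This is why accurate identify calls matter: targeting can only be as good as the properties you send.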

Flags + Experiments: The Full Workflow

The Growth OS Loop: Flags control visibility, Experiments measure impact.

The most powerful PostHog feature is the integration between flags and experiments:

  1. Create an experiment in PostHog → it automatically creates a feature flag
  2. Wrap your code in the flag check
  3. PostHog tracks the impact — activation rate, revenue, retention — for each variant
  4. PostHog tells you when one variant is statistically significantly better
  5. Roll out the winner by setting the flag to 100% for the winning variant
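Steps 2 and 3 in code: read the variant from the experiment's flag, render accordingly, and capture the metric event. The posthog object is stubbed here so the sketch runs standalone — in real code you'd use posthog-js, and PostHog attaches active flag values to captured events for attribution; the explicit variant property below just makes the example concrete:

```javascript
// Minimal stub of the two posthog-js calls this pattern needs.
const captured = [];
const posthog = {
  getFeatureFlag(key) { return 'new-flow'; },           // pretend assignment
  capture(event, props) { captured.push({ event, props }); },
};

// Step 2: branch on the experiment's flag.
const variant = posthog.getFeatureFlag('onboarding-variant');
const screen = variant === 'new-flow' ? 'new-onboarding' : 'old-onboarding';

// Step 3: the metric event PostHog attributes to each variant.
posthog.capture('onboarding_completed', { variant });

console.log(screen);          // 'new-onboarding'
console.log(captured.length); // 1
```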

This is the Growth OS experimentation process: pre-registered hypothesis (built into the experiment), calculated sample size (PostHog calculates it), locked primary metric (you pick it), and pre-agreed stopping rule (PostHog tells you when significance is reached). For when to trust your A/B test results, see our statistical significance guide. Teams that want this workflow set up correctly from the start often work with a PostHog consulting specialist to ensure the event taxonomy and flag architecture are experiment-ready before any tests go live.

PostHog Flags vs. LaunchDarkly vs. Statsig

If you're evaluating feature flag platforms, here's how PostHog compares to the dedicated tools:

| Feature | PostHog | LaunchDarkly | Statsig |
| --- | --- | --- | --- |
| Free tier | Unlimited flags | Limited (3 environments, 5 flags) | Unlimited flags |
| Integrated analytics | ✅ Built-in | ❌ Requires separate tool | ✅ Built-in |
| A/B testing | ✅ Native experiments | ❌ Requires separate tool | ✅ Native experiments |
| Session replay | ✅ Built-in | ❌ Not available | ❌ Not available |
| Open-source | ✅ Yes | ❌ No | ❌ No |
| Self-hosting | ✅ Yes | ❌ No | ❌ No |
| Starting price | Free | $50/mo | Free tier available |

The key difference: PostHog combines feature flags with product analytics, session replay, surveys, and A/B testing in one platform. LaunchDarkly and Statsig focus on flag management and experimentation but require separate tools for the rest of your analytics stack. If you want a unified platform, PostHog is the simpler choice. If you need enterprise-grade flag management with complex governance workflows, LaunchDarkly has more depth.

Common Feature Flag Mistakes

Mistake 1: Flags Without Measurement

Creating a flag to roll out a feature without measuring its impact is a blind release. You're changing the product for some users but don't know whether the change helped or hurt. Always pair a flag with an experiment or at minimum a dashboard.

Mistake 2: Too Many Active Flags

Each flag adds complexity. After 20+ active flags, nobody knows what's on for whom. Set a rule: every flag has an expiration date. When the feature is fully rolled out (100%), remove the flag and the dead code path.
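The expiration rule is easy to enforce mechanically: keep a small registry of active flags with owners and expiry dates, and warn (or fail CI) when one goes stale. A minimal sketch — the flag entries and dates here are illustrative:

```javascript
// One entry per active flag; 'expires' is the date by which the flag
// should be removed once rollout hits 100%.
const flagRegistry = [
  { key: 'new-onboarding-flow', owner: 'growth', expires: '2024-01-15' },
  { key: 'pricing-page-layout', owner: 'web',    expires: '2099-01-01' },
];

// Return the keys of flags whose expiry date has passed.
function staleFlags(registry, now) {
  return registry
    .filter((flag) => new Date(flag.expires) < now)
    .map((flag) => flag.key);
}

console.log(staleFlags(flagRegistry, new Date('2024-06-01')));
// one stale flag: 'new-onboarding-flow'
```

Run a check like this in CI or a weekly cron, and "nobody knows what's on for whom" becomes a build warning instead of a slow accumulation of dead code.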

Mistake 3: Flags That Can't Be Tested in Staging

If your flags only work in production, you can't QA them before rollout. Ensure your flag system works in staging/development environments so your team can test flag behavior before exposing real users.

Mistake 4: Not Using Feature Flags for Product Decisions

Feature flags aren't just an engineering tool — they're a product management tool. Product teams should use flags to validate hypotheses before committing to full builds. The best product teams use flags as a safety net: merge early, release gradually, measure rigorously.

FAQ

Are PostHog feature flags free?

Yes. Feature flags are unlimited on PostHog's free tier. There's no cap on the number of flags, the number of users targeted, or the number of evaluations.

Can I use PostHog feature flags without PostHog analytics?

Technically yes, but you'd be missing the main value. Flags without analytics mean you're releasing features blindly. The analytics tells you whether the feature improved or hurt your key metrics. PostHog combines product analytics, session replay, feature flags, A/B testing, and surveys in a single platform — unlike specialized tools like LaunchDarkly that focus only on feature management.

How do feature flags differ from A/B testing?

A feature flag controls who sees a feature. An A/B test measures whether that feature improves your key metric. In PostHog, experiments automatically create flags — so you get both capabilities together.

How many feature flags should I have?

As few as possible. Each flag adds maintenance overhead. A good rule: if a feature has been at 100% rollout for more than 2 weeks, remove the flag and clean up the dead code path.

What's a typical feature flag rollout strategy?

A typical rollout starts with internal dogfooding for your team, then a targeted beta for power users, then gradual percentage rollouts: 10%, 25%, 50%, and finally 100%. At each stage, monitor error rates, activation changes, and user feedback before proceeding.


About the Author

Jake McMahon builds growth infrastructure for B2B SaaS companies — analytics, experimentation, and predictive modeling that turns product data into revenue decisions. He has implemented PostHog feature flags and experiments across multiple engagements, helping teams ship faster with less risk. Book a diagnostic call to discuss your experimentation setup.

Next Step

Get Your Experimentation Setup Audited

We'll assess your current feature flags, experiment process, and rollout strategy — and tell you exactly what to fix first.