
Your Analytics Dashboard Looks Fine. Your Data Is Broken.

Dashboards show you what is being tracked. They do not show what is missing, what is double-counted, or where tracking silently stopped. Six failure modes hide in plain sight — and none of them triggers an alert.

Jake McMahon · Published March 30, 2026 · 7 min read

TL;DR

  • A working dashboard is not evidence of a correct implementation. It only shows what is being tracked — not what is wrong with it.
  • Napkyn and Kissmetrics separately estimated that 81% of analytics implementations contain errors. Most teams do not know which category they are in.
  • The six most common failure modes are double-firing, silent event death, identity stitching failure, broken attribution, misconfigured consent mode, and event taxonomy drift.
  • None of these produce an error message. The dashboard keeps loading. The numbers keep appearing. The decisions keep being made on bad data.

The problem with dashboards that look fine

Your analytics dashboard has numbers in it. Charts are rendering. Funnels are showing conversion steps. Someone in your last sprint review pulled up the activation rate and pointed at a trend line. Everything looks like it is working.

But dashboards are display tools, not validation tools. They render whatever data is in them — accurate or not, complete or not, double-counted or not. A pageview chart with 140% inflation from double-firing looks identical to one without it. An activation funnel missing the step where tracking silently broke looks like a very bad conversion rate, not a broken instrument.

Napkyn and Kissmetrics independently arrived at the same estimate: approximately 81% of analytics implementations contain measurable errors. The majority of those errors are not edge cases or misconfigured niche events — they are systematic problems in the core instrumentation that the team uses to make product decisions every week.

What follows is a breakdown of the six failure modes that account for most of those errors — what they look like, why they are invisible in a working dashboard, and what they are doing to your decisions in the meantime.

The six failure modes

Failure Mode 01: Double-firing

Double-firing occurs when the same event fires twice in a single user interaction. The most common cause: a hardcoded analytics snippet in the page source and a tag manager trigger both fire the same pageview event. The result is two identical events per session where there should be one.
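Double-fires are easy to spot once you look at raw event timestamps: two identical events in the same session, milliseconds apart. A minimal detection sketch, assuming a hypothetical event export shaped as `(event_name, session_id, timestamp)` tuples (your tool's export format will differ):

```python
from datetime import timedelta

def find_double_fires(events, window_ms=500):
    """Return (event_name, session_id) pairs where the same event fired
    twice within window_ms in one session -- the double-firing signature."""
    seen = {}        # (event_name, session_id) -> timestamp of last occurrence
    duplicates = []
    for name, session, ts in sorted(events, key=lambda e: e[2]):
        key = (name, session)
        last = seen.get(key)
        if last is not None and (ts - last) <= timedelta(milliseconds=window_ms):
            duplicates.append(key)
        seen[key] = ts
    return duplicates
```

A genuine repeat interaction (a user reloading a page after a minute) falls outside the window; only near-simultaneous duplicates are flagged.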

What this looks like in the dashboard: pageview counts that are roughly double the expected figure. Bounce rate that reads implausibly low (because a double-fired pageview makes a single-page session look like a two-page session). Session counts that feel inflated against your intuition about traffic.

Why it is invisible: the numbers are internally consistent. If every session has doubled pageviews, conversion rates calculated from pageviews as a denominator will still look directionally correct. The absolute numbers are wrong; the ratios may not be. Teams often accept this because "the trend is still useful" — but the trend is built on bad arithmetic.

When it happens: during migrations from one analytics tool to another, when a new snippet is added without removing the old one, or when multiple developers instrument the same feature at different layers of the stack.

Failure Mode 02: Events that stopped firing after a product redesign

When a product is redesigned — new component library, refactored routing, updated DOM structure — event listeners that relied on CSS selectors or element IDs often break silently. The event was firing. Then the redesign shipped. Now it is not. Nobody noticed because no one was watching.

What this looks like in the dashboard: a drop in a specific event that coincides with a release date. This is sometimes interpreted as a product regression rather than a tracking regression. The team investigates the feature. The feature is fine. The tracking is what broke.

The more dangerous version: the event breaks and the drop is gradual rather than cliff-like — for example, tracking that relies on a component that was slowly migrated over several sprints. The chart shows a slow decline that looks like organic churn or reduced engagement. Decisions get made. Roadmap priorities shift. The issue was a broken sensor, not a broken feature.

Why it is invisible: your analytics tool has no way to know what events it should be receiving. It only records what arrives. If a tracking plan exists, comparing expected events against received events will surface this. Most teams do not have a living tracking plan.
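The baseline comparison described above can be sketched in a few lines. This assumes you can pull per-event volumes for a pre-release window and a current window (hypothetical dict shapes; real exports will need massaging):

```python
def silent_deaths(baseline_counts, current_counts, drop_threshold=0.5):
    """Flag events whose volume fell by more than drop_threshold relative
    to a pre-release baseline, including events that disappeared entirely."""
    flagged = []
    for name, base in baseline_counts.items():
        current = current_counts.get(name, 0)  # absent event = zero volume
        if base > 0 and (base - current) / base > drop_threshold:
            flagged.append((name, base, current))
    return flagged
```

The threshold is a judgment call: 0.5 catches cliff-like breaks; a lower value with a longer baseline window is needed for the gradual-migration case.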

Failure Mode 03: Identity stitching failure

Before a user signs up, they exist in your analytics tool as an anonymous visitor with a generated ID. At the moment they create an account, your analytics SDK should call an identify method that links the anonymous session to the new identified user profile. If that call is missing, misconfigured, or fires at the wrong moment in the flow, you get two separate user records for the same person.

What this does to your data: user counts are inflated. Funnel analysis that spans the sign-up boundary is broken — the pre-sign-up steps and post-sign-up steps belong to different phantom users. Activation calculations that rely on tracking the same user from first visit to first value moment produce incorrect results by design.

Why it matters in B2B: most B2B products have invited users — people who receive an invitation link, accept it, and start using the product without an independent discovery session. These users often have no anonymous session to stitch. But the problem also appears in reverse: when a user logs out and logs back in, or switches devices, the stitching logic may create another disconnected record. Group analytics (account-level tracking) compounds the issue further.
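One way to surface the simplest version of this failure is to look for anonymous IDs that produced a sign-up event but were never linked to a user ID. A sketch, assuming a hypothetical event export where each record carries an `anonymous_id`, an optional `user_id`, and an event `name` (the sign-up event name `signed_up` is an assumption; substitute your own):

```python
def unstitched_signups(events):
    """Anonymous IDs that signed up but were never linked to a user_id
    by an identify call; each one is a phantom pre-sign-up user."""
    linked = {e["anonymous_id"] for e in events if e.get("user_id")}
    signed_up = {e["anonymous_id"] for e in events if e["name"] == "signed_up"}
    return sorted(signed_up - linked)
```

This catches the missing-identify case but not the reverse problem (log-out or device switches creating extra records), which needs a per-user count of distinct anonymous IDs instead.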

Failure Mode 04: Broken attribution

UTM parameters — the utm_source, utm_medium, and utm_campaign values appended to URLs in your marketing links — tell your analytics tool where a user came from. They work correctly until a redirect strips them.

The common scenario: a user clicks a UTM-tagged link from a marketing email. The link goes to marketing.yourproduct.com, which redirects to app.yourproduct.com. The redirect does not preserve query parameters. The UTM values are lost. The session lands in your analytics tool with no source attribution. Over time, your "direct" traffic bucket fills with campaigns that were never actually direct — they were attributed, then broken.

This is not a minor rounding error. For teams running paid campaigns or content marketing, it means the investment-to-conversion analysis is structurally wrong. Budget decisions are being made on data that consistently undercounts paid and organic channel performance in favour of "direct."

Other attribution break points: social sharing (when users share a UTM-tagged URL and the platform strips parameters), link shorteners that do not preserve parameters, and any JavaScript framework that rewrites the URL on initial load.
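The redirect fix is mechanical: whatever issues the redirect must carry the original query string onto the target URL. A minimal sketch using Python's standard `urllib.parse` (the domain names are illustrative):

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_preserving_params(request_url, target_base):
    """Build a redirect target that carries the original query string
    (including utm_* parameters) across the hop instead of dropping it."""
    src = urlsplit(request_url)
    dst = urlsplit(target_base)
    return urlunsplit((dst.scheme, dst.netloc, dst.path, src.query, ""))
```

The same principle applies whatever layer issues the redirect: CDN rule, web server rewrite, or application code. If the Location header has no query string, attribution dies there.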

Failure Mode 05: Consent mode misconfiguration

Consent mode — the mechanism that modifies analytics behaviour based on whether a user has accepted or declined tracking — introduces a silent data gap when misconfigured. If consent mode is set up so that analytics events are not fired at all for users who decline consent (rather than being sent in a privacy-preserving aggregate form), you lose visibility into a portion of your user base without knowing how large that portion is.

The estimated gap ranges from 10% to 30% of sessions in markets with high consent decline rates — notably the EU, where GDPR and active browser privacy defaults produce materially different consent patterns than North American traffic.

Why this is a product problem, not just a compliance problem: if your product serves European users at higher rates in specific segments — enterprise buyers, technical users, privacy-aware personas — consent mode misconfiguration will systematically undercount those segments. The resulting data will suggest those segments are smaller or less active than they are. Decisions about localisation, pricing, and feature priority can all be distorted.
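The distinction between the correct and the broken configuration can be shown in a toy routing function. This is an illustrative sketch of the principle, not Google Consent Mode's actual API: granted consent sends the full payload; declined consent sends a stripped, identifier-free ping rather than nothing at all.

```python
def route_event(event, consent_granted):
    """Correct consent-mode behaviour: full payload when consent is
    granted, an identifier-free aggregate ping when it is declined.
    Returning None for declined users is the misconfiguration that
    silently removes 10-30% of sessions from view."""
    if consent_granted:
        return event
    # Privacy-preserving ping: event name only, no IDs or properties.
    return {"name": event["name"], "consent": "denied"}
```

The aggregate pings do not give you user-level analysis, but they tell you how large the declined population is, which is exactly the number the broken configuration hides.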

Failure Mode 06: Event taxonomy drift

Taxonomy drift is the accumulation of different event names for the same user action across time and teams. A button click that matters to the product may have been tracked when it was first built, again when a new team member added analytics to the same feature, and again when a contractor implemented a growth experiment on the same flow. Three events. One action. No coordination.

What this produces: any query for that user action returns only a fraction of the real data unless you happen to know all three event names and union them manually. Cohort definitions that rely on a single event name are wrong. Funnels that include the step show a lower conversion rate than reality. Retention analyses that anchor on that action are measuring a subset.

Taxonomy drift is almost universal in teams that have grown past a single developer and never formalised a tracking plan. It compounds with every new feature, every new hire, and every sprint where analytics is an afterthought rather than a defined step.
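Drift candidates can be surfaced automatically by normalising names before comparing them. A crude heuristic sketch, which ignores case, separator style, and word order; it flags candidates for human review rather than proving two events are the same action:

```python
import re
from collections import defaultdict

def find_drift(event_names):
    """Group event names that likely describe the same action by
    normalising case, separators (spaces, underscores, hyphens),
    and word order. Returns only groups with more than one variant."""
    groups = defaultdict(list)
    for name in event_names:
        key = tuple(sorted(re.split(r"[\s_\-]+", name.strip().lower())))
        groups[key].append(name)
    return [variants for variants in groups.values() if len(variants) > 1]
```

True synonyms ("signup_clicked" vs "registration_started") will slip past any string heuristic; those only fall out of a human pass over the grouped list.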

81%

Estimated proportion of analytics implementations containing measurable errors, based on independent research from Napkyn and Kissmetrics. The majority of those errors affect core instrumentation, not edge-case events.

Why these failures persist

Each of the six failure modes above has one thing in common: it produces no error message. The analytics tool keeps receiving data. The dashboard keeps loading. The charts keep updating. The only signal that something is wrong is a number that requires prior knowledge of what the right number should be — and that knowledge is exactly what teams rely on analytics to provide.

This creates a closed loop. The data looks plausible. It confirms or challenges hypotheses at a rate that feels credible. Decisions get made, features get prioritised, and roadmaps get built on numbers that have been systematically wrong since the implementation was first shipped.

The teams most exposed to this are not those with no analytics — they at least know they are flying blind. The most exposed teams are those with confident analytics: clean-looking dashboards, regular reviews, and a PM who trusts the numbers because they have always looked reasonable.

The question is not whether your analytics has errors. The question is which ones, how large, and what decisions they have already influenced.

What an audit actually looks at

An implementation audit is not a dashboard review. It works at the instrumentation layer — the actual event stream — and compares what is arriving against what should be arriving. The six failure modes above each have a corresponding diagnostic approach:

  • Double-firing: compare event counts against session counts; inspect the network tab for duplicate calls in a single interaction.
  • Silent event death: compare current event volume against a baseline from before the last major release; check tracking plan coverage.
  • Identity stitching failure: audit identify() call placement in auth flows; compare anonymous user count against sign-up event count.
  • Broken attribution: trace UTM parameters through every redirect in the acquisition path; check the "direct" traffic proportion over time.
  • Consent mode misconfiguration: compare session counts with consent accepted vs declined; verify event firing behaviour in each consent state.
  • Taxonomy drift: pull all event names from the last 90 days; group by semantic action; identify duplicates.
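The attribution diagnostic in the list above is the easiest to automate: track the share of "direct" traffic week over week and watch for a step change after a site or redirect change. A sketch, assuming a hypothetical export of `(week_label, source)` pairs per session:

```python
from collections import defaultdict

def direct_share_by_week(sessions):
    """Weekly share of sessions with no source attribution. A sudden rise
    after a site change is the signature of a redirect stripping UTMs."""
    totals = defaultdict(int)
    direct = defaultdict(int)
    for week, source in sessions:
        totals[week] += 1
        if source in (None, "", "direct"):  # treat missing source as direct
            direct[week] += 1
    return {week: direct[week] / totals[week] for week in totals}
```

The same week-over-week framing works for the other diagnostics: each one is a comparison between what is arriving now and a baseline of what used to arrive, or what a tracking plan says should arrive.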

The output of an audit is not a grade. It is a prioritised list of which errors exist, what their estimated impact on key metrics is, and what the fix for each looks like. The most actionable audits identify the two or three errors that are distorting the decisions the team makes most often — usually funnel analysis, activation rate, and retention cohorts.

Cohort program

Product Analytics for B2B SaaS

In week one of the cohort, you audit your own product's implementation — real data, real findings. You leave with a prioritised list of errors, a corrected tracking plan, and a working framework for keeping the implementation clean as the product evolves.

Frequently asked questions

How do I know if my analytics implementation has errors?

The most reliable signal is an implementation audit — comparing what your tracking plan specifies against what is actually firing, checking for duplicate events, validating identity stitching across anonymous and identified sessions, and reviewing your consent mode configuration. A working dashboard is not evidence of a correct implementation.

What is double-firing in analytics?

Double-firing occurs when the same event is sent to your analytics tool twice — typically because a hardcoded tracking snippet and a tag manager both fire the same event. The result is inflated event counts and artificially low bounce rate metrics. It is common after migrations or when multiple developers instrument the same feature at different times.

What is identity stitching failure?

Identity stitching connects the anonymous session a user had before signing up with their identified user record after they sign up. If this is misconfigured, the same person is counted as two users — one anonymous, one identified. This inflates user counts and breaks funnel analysis that spans the sign-up boundary.

What is event taxonomy drift?

Taxonomy drift occurs when the same user action accumulates multiple event names over time — usually because different developers or teams instrumented it independently without checking what already existed. The result is queries that return partial data unless you know to combine all the variants. It is one of the most common and least visible analytics problems in growing teams.

Jake McMahon

About the Author

Jake McMahon writes about analytics architecture, product instrumentation, and the decisions B2B SaaS teams make when building their data foundations. ProductQuant helps teams design what to instrument, set it up correctly the first time, and connect analytics to decisions that affect revenue.

Next step

Your data is making decisions. Check what it is actually saying.

The Product Analytics for B2B SaaS cohort starts with a live audit of your own product's implementation. Week one finds the errors. The rest of the program teaches you how to build an analytics practice that does not accumulate them.