GA4 Audit Checklist: The 10 Checks That Stop “Broken Data”
- Marc Alexander
- Jan 5
- 11 min read

There’s a particular kind of modern anxiety that only analytics can deliver. The dashboard is up, the numbers are down, someone has already said “Are we sure GA4 isn’t just… wrong?”, and you can feel a meeting invite being drafted in real time.
GA4 is rarely “wrong”. It is, however, very good at faithfully reporting whatever you’ve actually implemented, including the bits you didn’t mean to implement.
This article is a publish-ready, human-language GA4 audit you can run in about an hour. It covers the ten checks that most often create inflated conversions, broken attribution, missing revenue, and reporting that looks like it’s been lightly shaken.
If you want the short version: you’re trying to answer two questions.
"Are we collecting the right data, once, with the context we need?"
"Does the session hold together from first click to conversion, especially when consent, cross-domain journeys, and third-party tools get involved?"
If you want the professional version: this Top 10 is the “front door” to my 38-point GA4 Audit at Metric Owl, where I run the full diagnostic across implementation, governance, consent and measurement resilience, then give you a prioritised fix roadmap (and, if you want, I can implement the fixes too).
Who this is for
This is for anyone who relies on GA4 to make decisions about marketing performance, lead quality, ecommerce revenue, or customer journeys.
You do not need to be technical to use it.
You do need a modest tolerance for the phrase “it depends”, because the internet is a complex ecosystem and it enjoys reminding us of that.
How to run this audit in 60 minutes without losing the will to live
Start by choosing two real journeys that matter commercially. If you’re lead gen, pick something like “view a service page then submit the main enquiry form” and “download a brochure or key asset”.
If you’re ecommerce, pick “view an item then purchase” and “begin checkout then abandon”. You’ll use these journeys as your test cases so you’re auditing reality, not theory.
Open GA4 in one tab. In another, open your tagging environment, ideally GTM Preview or Tag Assistant. Then do each journey once like a normal person.
Do not rage-click.
Do not open seventeen tabs.
Do not refresh the thank-you page to “double check”.
GA4 will dutifully track your chaos and then you will blame it for being accurate.
As you run each journey, you’re looking for evidence. Which events fired, how many times, and whether key details (parameters) were attached. You’re also watching whether attribution stays sensible from start to finish.
Capture screenshots as you go. If you ever end up in a meeting defending data, screenshots are the only form of comfort you’re allowed to bring.
When you finish, you’ll have a clear set of passes, failures, and unknowns. Unknowns are fine. Unknowns are honest. Unknowns are exactly what an audit is for.
Check 1: Duplicate tracking and duplicate conversions
If GA4 data is “too good to be true”, it often is. Duplicate tracking is the fastest way to corrupt almost every KPI without looking obviously broken.
You’ll see conversion rates that feel heroic, engagement that looks suspiciously enthusiastic, and event counts that can only be explained by a ghost with excellent Wi-Fi.
What “good” looks like is one page_view per page load and one conversion per genuine completion. If a person submits a form once, GA4 should not celebrate twice.
How to verify it is to run your key journey in GTM Preview or Tag Assistant and watch the event stream. If the same GA4 event fires twice for a single action, you have duplication.
If you see multiple GA4 configuration tags, multiple Google tags, or both GTM and a plugin firing GA4, you have duplication. If conversions spike immediately after a release, you very likely have duplication.
What usually causes it is hardcoded GA4 plus GTM, multiple GA4 config tags in GTM, overlapping triggers, plugins that “helpfully” track events you already track, or GA4 UI rules (like create/modify events) that duplicate existing logic.
What to do next is to decide where your source of truth lives and consolidate. Remove the duplicate implementation, fix trigger logic, then re-run the same journey once and confirm the event now fires exactly once. This single fix can make every downstream report instantly more trustworthy.
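While you consolidate, it can help to see the failure mode in code. The sketch below is a hypothetical client-side guard (the function name `pushEventOnce` and the one-second window are my illustrative choices, not a GA4 or GTM API) that swallows an identical dataLayer event fired twice in quick succession. Treat it as a diagnostic band-aid, never a substitute for removing the duplicate tag.

```javascript
// Hypothetical dedup guard: drop an identical GA4 event repeated within
// a short window. The dataLayer.push({event: ...}) shape is the standard
// GTM pattern; the window and key construction are illustrative.
const seen = new Map();            // eventKey -> timestamp of last push (ms)
const DEDUP_WINDOW_MS = 1000;      // treat repeats inside 1s as duplicates

function pushEventOnce(dataLayer, name, params = {}, now = Date.now()) {
  const key = name + ':' + JSON.stringify(params);
  const last = seen.get(key);
  if (last !== undefined && now - last < DEDUP_WINDOW_MS) {
    return false;                  // duplicate -- swallow it
  }
  seen.set(key, now);
  dataLayer.push({ event: name, ...params });
  return true;
}
```

If you find yourself relying on a guard like this long-term, that is itself an audit finding: the duplicate source is still in place.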
Check 2: Consent Mode v2 is present, but is it actually working?

In 2026, consent is not an afterthought and it’s not just a legal checkbox. It changes what gets collected, what gets modelled, what appears in reports, and what can be activated in linked products.
A lot of sites have a consent banner. Fewer have a consent implementation that behaves consistently across regions and across the full user journey.
What “good” looks like is that consent signals are being received correctly and that behaviour changes predictably before and after a user grants consent.
You should be able to explain, in plain language, what happens for a user who does not consent, so stakeholders don’t interpret expected differences as “tracking is broken”.
How to verify it is to test an incognito session where you check behaviour before consent and after consent, across at least one full conversion journey.
If you run cross-domain journeys, you must test consent behaviour across the entire flow, not just the landing page. If your results vary depending on where the journey starts, you likely have inconsistent consent wiring.
What usually causes failure is a banner that does not properly update consent states for Google tags, inconsistent configuration across domains, or a mismatch between regional consent rules and how the tags behave.
What to do next is to document the intended consent behaviour, validate that your tags respect it end-to-end, and then align your reporting expectations accordingly.
If your business depends on paid media, this is also where you assess whether your measurement strategy is resilient to partial consent, rather than assuming the numbers will behave themselves out of politeness.
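For reference, the expected call sequence looks like this. The `gtag('consent', ...)` commands and the v2 keys (`ad_user_data`, `ad_personalization`) are Google's real API; the stubbed `gtag` function below simply records calls so the ordering can be inspected outside a browser, and the banner callback name is a placeholder for whatever your CMP provides.

```javascript
// Consent Mode v2 call sequence, sketched with a stub so it runs anywhere.
const calls = [];
function gtag(...args) { calls.push(args); }  // stand-in for the real gtag

// 1) Defaults must run before any Google tag fires: deny until the banner answers.
gtag('consent', 'default', {
  ad_storage: 'denied',
  analytics_storage: 'denied',
  ad_user_data: 'denied',          // v2 signal
  ad_personalization: 'denied',    // v2 signal
  wait_for_update: 500,            // ms to wait for the CMP before tags fire
});

// 2) The banner's accept handler (name is hypothetical) updates consent.
function onBannerAccept() {
  gtag('consent', 'update', {
    ad_storage: 'granted',
    analytics_storage: 'granted',
    ad_user_data: 'granted',
    ad_personalization: 'granted',
  });
}
onBannerAccept();
```

The common failure is the ordering: if any Google tag fires before the `default` command runs, the first hits of the session ignore consent entirely, which is exactly the inconsistency you are auditing for.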
Check 3: GA4 thresholding is hiding data, and everyone thinks tracking has failed
Thresholding is one of GA4’s most misdiagnosed behaviours.
It can withhold parts of some reports and explorations to protect user privacy. The result can look like missing rows, incomplete breakdowns, or suspiciously sparse data in segments.
People then “fix tracking” for three weeks and end up with exactly the same problem, because the issue is reporting constraints rather than collection.
What “good” looks like is that you can recognise thresholding quickly, you know which types of analysis are affected, and you have an agreed approach for how the business should interpret those reports.
How to verify it is to look for the signs in explorations and fine-grained breakdowns, then widen the date range and compare. If more data appears when you broaden the query, or if the UI warns that data is withheld, treat it as a reporting condition, not a tagging defect.
What to do next is to change the analysis approach.
Sometimes that means using more aggregated dimensions.
Sometimes it means using BigQuery exports for analysis that the GA4 UI is not designed to display at that granularity.
The key is to avoid making big decisions based on a view of the data that is, quite literally, incomplete.
Check 4: Cross-domain measurement is splitting sessions
If a user journey touches more than one domain, you must treat cross-domain as a top-tier audit item.
When it breaks, sessions split, attribution resets, and conversions drift into Direct or Referral, which is GA4’s way of saying, “I’m not angry, I’m just disappointed.”
What “good” looks like is one coherent journey from start to conversion, even if the user crosses domains, moves into a checkout, hits an authentication step, or uses a booking engine.
Source and medium should remain sensible throughout.
How to verify it is to run the real journey end-to-end and then inspect acquisition and conversion path reporting. If you see your own domain as a referrer, or your payment/booking domain as the “source” of conversions, you almost certainly have session continuity problems.
What usually causes failure is inconsistent tagging across domains, missing cross-domain configuration, separate GA4 properties being used in the same customer journey, or implementation differences introduced by third-party platforms.
What to do next is to configure cross-domain measurement properly and retest. This is also where you confirm that consent behaviour does not break continuity mid-journey.
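If you tag via gtag.js directly, the cross-domain wiring is the `linker` block on the config call, sketched below. The two domains and the measurement ID are placeholders; in GTM you would set the same thing in the Google tag's cross-domain settings rather than in code, and every domain in the journey needs a matching configuration.

```javascript
// Cross-domain linker configuration, captured by a stub gtag so it runs
// outside a browser. Replace the measurement ID and domains with your own.
const calls = [];
function gtag(...args) { calls.push(args); }

gtag('config', 'G-XXXXXXXXXX', {            // placeholder measurement ID
  linker: {
    domains: ['example.com', 'checkout-partner.com'],  // placeholder domains
  },
});
```

A quick field check: when a user crosses between the listed domains, the destination URL should briefly carry a `_gl` parameter. If it never appears, the linker is not doing its job.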
Check 5: Unwanted referrals are stealing your conversions
A lot of conversions are “stolen” in GA4 reports by intermediary domains that did not actually acquire your customer.
Payment providers, identity tools, embedded platforms, and assorted third-party services love turning up in acquisition reports like they’ve done all the hard work.
What “good” looks like is that those intermediary domains do not get credited for acquisition or conversions, and your conversions attribute to the genuine marketing source that initiated the journey.
How to verify it is to look at referrals in your acquisition reports and then trace conversion paths. If you see a third-party tool domain repeatedly showing up right before conversion, you likely have an unwanted referral problem.
What to do next is to configure unwanted referrals appropriately and then retest the conversion journey. This is one of the most straightforward fixes with a disproportionate impact on attribution clarity.
Check 6: UTMs and channel integrity have gone feral
Even if collection is perfect, acquisition data can become unusable through inconsistent UTMs, parameter stripping via redirects, internal links that include UTMs, or a medium taxonomy that has been developed collaboratively by ten different people with ten different ideas of what “Email” means.
What “good” looks like is consistent source and medium values, a manageable level of “Unassigned”, and channel groupings that reflect how your business actually operates.
How to verify it is to inspect your top sources and mediums over a meaningful period and look for obvious duplication and inconsistency.
If your email traffic is split across half a dozen variations, you do not have a tracking problem; you have a governance problem.
What to do next is to standardise UTMs, enforce the standard through process and tooling, fix redirect behaviour that strips parameters, and stop internal links from rewriting acquisition.
This is not glamorous work, but it’s the difference between marketing ROI and marketing folklore.
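Enforcing the standard "through process and tooling" can be as simple as a lookup table in whatever builds your campaign links. The sketch below shows the idea; the alias table is an illustrative example of a taxonomy you would agree internally, not a GA4 requirement.

```javascript
// Hypothetical governance helper: normalise free-typed utm_medium values
// to one agreed taxonomy before links go out. The alias table is an
// example standard; a null result means "not in the taxonomy, flag it".
const MEDIUM_ALIASES = {
  email: 'email', 'e-mail': 'email', newsletter: 'email',
  cpc: 'cpc', ppc: 'cpc', paidsearch: 'cpc',
  social: 'social', 'social-media': 'social', 'paid-social': 'paid-social',
};

function normaliseMedium(raw) {
  const key = String(raw).trim().toLowerCase();
  return MEDIUM_ALIASES[key] ?? null;
}
```

Run every outbound link through something like this and "Email", "email", "e-mail" and "Newsletter" stop being four different channels.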
Check 7: Event design has drifted into chaos
GA4’s flexibility is a gift and a curse. Over time, properties often accumulate near-duplicate events, inconsistent naming, missing parameters, and events with unclear meaning.
Reporting then becomes an interpretive art form, and stakeholders stop trusting the data because they cannot confidently answer simple questions.
What “good” looks like is a small set of meaningful events with clear definitions, plus consistent parameters for the events you actually care about.
If an event represents a lead, it should always look like a lead, not like a lead on Tuesdays and an engagement signal on Thursdays.
How to verify it is to pick the handful of events that matter commercially and confirm the parameters you need are consistently attached.
For lead gen, you might need form identifiers, lead type, page context, and outcome. For ecommerce, you need item and transaction context.
What to do next is to define an event and parameter contract, rationalise event sprawl, and implement validation so future releases don’t quietly degrade the schema.
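An "event and parameter contract" can be as literal as a table of required parameters per commercial event, checked before anything is pushed. The event and parameter names below are examples of a contract you might define for lead gen and ecommerce, not GA4-mandated fields.

```javascript
// Hypothetical contract: required parameters per commercial event.
const CONTRACT = {
  generate_lead: ['form_id', 'lead_type', 'page_location'],
  purchase: ['transaction_id', 'value', 'currency', 'items'],
};

// Validate an event against the contract before pushing it.
function validateEvent(name, params) {
  const required = CONTRACT[name];
  if (!required) return { ok: false, missing: [], unknownEvent: true };
  const missing = required.filter((p) => !(p in params));
  return { ok: missing.length === 0, missing, unknownEvent: false };
}
```

Wire a check like this into QA or a tag-monitoring step and releases can no longer quietly ship a lead event with half its context missing.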
Check 8: Ecommerce tracking is incomplete, duplicated, or mis-sequenced
Ecommerce issues tend to show up as revenue mismatch, missing product-level detail, duplicated purchases, or a funnel that doesn’t reflect the real buying journey. Any of these can make trading decisions risky.
What “good” looks like is that purchases include a unique transaction identifier, item arrays are complete and consistent, and the major ecommerce events fire in a logical sequence.
Refunds and cancellations should be handled in a way that aligns with how the business recognises revenue.
How to verify it is to run a test transaction and then inspect the purchase event details. If aggregate revenue looks plausible but product reporting is nonsense, item array issues are often the cause. If purchases duplicate on refresh or on return-to-site flows, you have a de-duplication issue.
What to do next is to align to a consistent ecommerce schema, enforce required fields, and re-test the full journey, including edge cases like payment retries and post-purchase confirmation page revisits.
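The refresh-proofing piece of that re-test can be sketched as a simple guard: refuse to push a second purchase for a `transaction_id` you have already sent. In production the "already sent" store would typically be a cookie or sessionStorage; a `Set` stands in here so the logic runs anywhere, and the function name is mine, not a GA4 API.

```javascript
// Hypothetical purchase de-duplication: one push per transaction_id.
const sentTransactions = new Set();   // stand-in for cookie/sessionStorage

function pushPurchase(dataLayer, purchase) {
  if (!purchase.transaction_id) return false;  // contract: id is required
  if (sentTransactions.has(purchase.transaction_id)) {
    return false;                              // refresh or return visit
  }
  sentTransactions.add(purchase.transaction_id);
  dataLayer.push({ event: 'purchase', ecommerce: purchase });
  return true;
}
```

Note this only protects the client side; the unique `transaction_id` itself is still the thing that lets GA4 and any downstream systems reconcile revenue.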
Check 9: Session settings and engagement mechanics are distorting trendlines
GA4’s session and engagement mechanics can be adjusted, and changes here can materially shift your trendlines.
That is not necessarily wrong, but it becomes a problem when those changes are not documented and the organisation interprets a measurement change as a performance change.
What “good” looks like is that session-related settings align to your business model, and any adjustments are documented as measurement changes with clear before-and-after expectations.
How to verify it is to review Admin settings and then sanity-check session and engagement patterns against real user behaviour.
If your audience frequently returns after short breaks, or if your site supports research-heavy journeys, your settings and interpretations must reflect that.
What to do next is to treat GA4 configuration changes like release notes. If you change how you measure, you should be able to explain the impact without requiring a support group.
Check 10: Integrations exist, but they’re not delivering value
Many teams link GA4 to Ads, set up BigQuery exports, or connect reporting tools and assume that’s “done”.
In practice, integrations frequently underdeliver because the wrong conversions are configured for optimisation, audiences are incomplete due to identity and consent conditions, or exports exist without a reporting plan.
What “good” looks like is that your linked products are connected correctly, key conversions are defined intentionally for their purpose, and downstream reporting is built around decisions rather than default templates.
How to verify it is to review your product links, confirm that the right events are marked as conversions for the right uses, and ensure that audiences and conversion imports behave as expected.
What to do next is to separate conversions used for bidding from conversions used for reporting and learning, then validate the end-to-end flow. A single event can serve multiple purposes, but it often shouldn’t.
How to prioritise fixes when several checks fail
If you uncover multiple failures, you prioritise based on what can invalidate the largest share of decision-making.
Duplicate tracking typically comes first because it contaminates most KPIs.
Cross-domain and unwanted referrals often come next because they corrupt attribution and conversion paths.
Consent issues can be both compliance-critical and commercially disruptive, so they often sit in the urgent category even when the visible symptom is “we’re missing some data”.
Thresholding is different; it requires a reporting strategy response rather than a tagging sprint.
A sensible next step is to translate findings into a simple severity narrative: what breaks revenue reporting, what breaks attribution, what creates compliance risk, and what creates reporting noise. That stops your backlog becoming an emotional support document and turns it into a plan.
Want this done properly, end-to-end? That’s exactly what I can do for you
This checklist will get you to clarity fast. It will also reveal, in most organisations, that the issues don’t exist in isolation.
Duplicates often coexist with messy event design.
Cross-domain problems often coexist with unwanted referrals.
Consent issues often coexist with attribution surprises.
The Top 10 is the triage; a full audit is the diagnosis and treatment plan.
My GA4 Audit is a structured 38-point diagnostic that covers implementation integrity, property configuration, consent and privacy behaviour, attribution resilience, ecommerce validity where relevant, event and parameter governance, and integration readiness.
The deliverable is not “a list of issues” you can file away. It’s a prioritised roadmap that tells you what to fix first, why it matters commercially, what evidence proves it’s fixed, and what to monitor so it doesn’t quietly break again.
If you want the lighter version, I offer a Mini audit that focuses on the highest-risk integrity checks and gives you a fast, practical remediation plan.
If you want the full version, the Full audit covers the complete measurement surface area and produces an implementation-ready fix backlog.
If you want the outcome rather than the homework, the Full+Fix option includes implementation support so the audit turns into clean data, not a beautifully written document that lives in a folder called “Analytics (Final) (Really Final)”.
If you’ve ever sat in a meeting where someone asked why GA4 doesn’t match “the other number”, and you felt your soul briefly leave your body, you already understand the value.
Clean data reduces debate, speeds decisions, and makes marketing performance measurable in a way that does not require faith.
Closing thought
GA4 doesn’t need to be mysterious. It needs to be audited, governed, and tested like any other business-critical system.
Run the two journeys, apply the ten checks, and you’ll know whether your data is decision-grade or merely decorative.
Stop wondering if your Google Analytics data is right. Get a professional audit, know for certain, fix what matters.
I'm surprisingly friendly, even when telling you your GA4 tracking's a disaster.
Related services:
Connect GA4 to BigQuery - Foundation for advanced dashboards with unsampled data
GA4 to Back Office Integration - Connect multiple systems for unified dashboards
Expert GA4 Setup - Ensure GA4 is configured properly before building dashboards