Attribution Discrepancy Troubleshooting: The Complete Diagnostic Guide


Lakshith Dinesh


Updated on: Jan 8, 2026

You're staring at three different screens showing three different versions of reality. Your MMP says 2,847 installs from yesterday's Meta campaign. Meta Ads Manager shows 3,104 installs. Google Analytics reports 2,531 app opens. Your CFO wants to know which number to put in the board deck, and you're not sure how to answer.

This isn't a theoretical problem. It's the daily reality for mobile marketers running attribution across multiple platforms. The question isn't whether you'll see discrepancies—you will. The question is whether those discrepancies represent acceptable measurement variance or critical tracking failures that are quietly draining your marketing budget.

Most teams waste hours trying to force perfect alignment between platforms that were never designed to report identically. The better approach is systematic diagnosis: understanding which discrepancies are normal, which signal real problems, and how to fix the ones that actually matter.

The Discrepancy Problem: When Numbers Don't Match (and When That's Acceptable)

Attribution discrepancies occur when different measurement systems report different values for the same marketing activity. Your MMP tracks 1,000 installs from a campaign. Meta reports 1,200. Firebase Analytics shows 950. These mismatches create three problems beyond the obvious confusion.

First, they erode confidence in your measurement stack. If your tools can't agree on basic install counts, how can you trust the more complex metrics like ROAS or LTV? Second, they complicate budget allocation decisions. When you're trying to shift spend from underperforming campaigns to winners, conflicting data means you might be moving money in the wrong direction. Third, they create internal friction between teams. Your performance marketer optimises toward MMP data while your product team makes decisions based on Firebase, and nobody's looking at the same picture.

The root cause of most discrepancies isn't tool failure—it's fundamental differences in how platforms define, capture, and attribute events. Meta counts an install when their SDK fires an install event. Your MMP counts an install when it successfully attributes that event to a campaign click or view. Google Analytics counts an app open, which might happen hours or days after the actual install. These are measuring related but different things, so perfect alignment is impossible.

Understanding this distinction is critical. Some discrepancies indicate serious measurement problems: missing postbacks, broken SDK implementation, attribution window misconfiguration, or fraud. Others represent normal measurement variance that you should monitor but not obsess over. The diagnostic framework below helps you distinguish between the two.

Acceptable Variance: ±5-10% Is Normal (Why Perfect Matching Is Impossible)

Before you spend hours investigating every mismatch, establish baseline expectations. A 5-10% variance between your MMP and ad platform reporting is normal and expected. Here's why perfect matching is mathematically impossible across distributed measurement systems.

Timing Latency: Events don't arrive simultaneously across platforms. Your MMP receives install events via SDK callbacks. Ad networks receive them via postbacks. Analytics platforms capture them through their own SDKs. Network delays, device connectivity issues, and processing queues mean these events hit different systems at different times. An install that fires at 11:58 PM in your MMP might not reach Meta's servers until 12:02 AM, creating a date mismatch that appears as a discrepancy in daily reports.

Attribution Window Differences: Platforms use different default attribution windows. Meta typically uses a 7-day click and 1-day view window. Google Ads often uses 30-day click. Your MMP might be configured for 7-day click only. An install that occurs 8 days after a click will be counted as organic by your MMP but attributed by Meta. This isn't an error—it's a configuration difference that creates expected variance.

Device Permissions and SDK Initialisation: Not every install successfully fires all tracking events. Users who deny ATT permission, immediately background the app, or have network connectivity issues might be counted by one system but missed by another. If 5% of users don't grant tracking permission, you'd expect roughly a 5% discrepancy between platforms that require permission (MMP with IDFA tracking) and those that don't (Meta's aggregated reporting).

Bot and Fraud Filtering: Quality MMPs filter fraudulent installs—click spam, device farms, bot traffic. Ad networks typically count these as legitimate installs since they can't verify quality. If your MMP blocks 8% of installs as fraudulent, your ad network will report 8% more installs than your MMP. This discrepancy is actually a sign your fraud protection is working.

The practical threshold for investigation: if your MMP reports 90-110% of what your ad platform reports, that's within normal variance. If the gap exceeds ±15%, systematic diagnosis is needed. A 50% discrepancy almost always indicates a technical failure, not measurement variance.
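
To make those thresholds concrete, here is a minimal Python sketch, with illustrative cut-offs matching the figures above, that classifies the gap between MMP and ad platform install counts:

```python
def classify_variance(mmp_installs: int, ad_platform_installs: int) -> str:
    """Classify an MMP vs ad-platform gap using the thresholds described above."""
    if ad_platform_installs == 0:
        return "no ad platform data"
    variance = abs(mmp_installs / ad_platform_installs - 1)
    if variance <= 0.10:   # MMP reports 90-110% of the ad platform figure
        return "normal variance"
    if variance <= 0.15:   # grey zone: monitor, but no urgent diagnosis needed
        return "monitor"
    return "investigate"   # >15% gap warrants the diagnostic workflow below

print(classify_variance(2847, 3104))  # ~8% gap -> "normal variance"
```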

Diagnostic Category 1: Attribution Window Mismatches (Click vs View, Custom Windows)

The most common cause of attribution discrepancies is attribution window misconfiguration. Different platforms use different default windows, and if you don't explicitly align them, you'll see systematic differences in install counts and attributed revenue.

Click Attribution Windows: This defines how long after a click an install can be attributed to that click. Meta's default is 7 days. Google Ads often uses 30 days. TikTok uses 7 days. If your MMP is configured for a 7-day click window but you're comparing against Google Ads data using their 30-day default, any install between day 8 and day 30 after a click will be organic in your MMP but attributed in Google Ads.

View Attribution Windows: View-through attribution counts installs from users who saw an ad but didn't click. Meta's default is 1 day. Your MMP might use a shorter window, or you might have disabled view-through attribution entirely. An install 12 hours after an ad view (with no click) will be attributed by Meta but counted as organic by an MMP with view-through disabled, creating a discrepancy.

Platform-Specific Windows: SKAN (SKAdNetwork) introduces additional complexity. SKAN uses Apple-defined conversion and postback windows that you can't customise the way you can deterministic attribution windows. If you're running iOS campaigns, your MMP's SKAN attribution won't line up neatly with your Android campaigns, which typically rely on deterministic device-level attribution, creating platform-specific discrepancies.

Diagnostic Test: Pull a 7-day cohort of installs from both your MMP and ad platform, filtered to the same attribution window configuration. Export the data by day. If discrepancies are higher in the first 2 days but normalise by day 7, your attribution windows are misaligned. If discrepancies remain consistent across the full window, the issue is elsewhere.
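
If you can export both datasets as day-level CSVs, a short comparison script makes the pattern easy to spot. The file and column names below are assumptions about your export format, not a fixed standard:

```python
import pandas as pd

# Hypothetical day-level exports; "date" and "installs" column names are assumed.
mmp = pd.read_csv("mmp_installs_by_day.csv", parse_dates=["date"])
ads = pd.read_csv("ad_platform_installs_by_day.csv", parse_dates=["date"])

merged = mmp.merge(ads, on="date", suffixes=("_mmp", "_ads"))
merged["variance_pct"] = (
    (merged["installs_mmp"] - merged["installs_ads"]) / merged["installs_ads"] * 100
)

# Window misalignment shows up as large gaps early in the cohort that shrink by day 7;
# a flat gap across all days points somewhere else.
print(merged[["date", "installs_mmp", "installs_ads", "variance_pct"]])
```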

Fix Protocol: Standardise attribution windows across all platforms. Most teams align on 7-day click and 1-day view because it balances attribution credit with conversion likelihood. Configure your MMP to match your ad networks' settings. Document these settings in your measurement spec so new team members don't accidentally change them. Check quarterly that platform defaults haven't changed—ad networks occasionally adjust their attribution window options.

Diagnostic Category 2: Event Mapping Errors (Event Names, Parameters, Timing)

Event mapping errors create discrepancies between what your MMP records and what your ad platforms receive via postback. These errors are particularly insidious because they often affect revenue events more than install counts, meaning your ROAS calculations can be completely wrong even when install counts look reasonable.

Event Name Mismatches: Your app fires a "Purchase_Complete" event. Your MMP is configured to send "purchase" in postbacks to Meta. Meta expects "Purchase" with a capital P. This case-sensitive mismatch means Meta never receives your revenue events, so their ROAS reporting shows zero revenue while your MMP shows healthy revenue. Always verify exact event name formatting—capitals, underscores, and spaces must match precisely.

Parameter Mismatches: Revenue events require specific parameters. Meta expects "value" and "currency" parameters. Google expects "value" and "currency" but also wants "quantity" for product-level tracking. If your SDK sends "amount" instead of "value", or a currency symbol like "₹" instead of the ISO code "INR", the ad platform rejects the event or processes it incorrectly. Your MMP shows the event fired, but the ad platform never received usable data.

Timing Issues: Some events fire too early or too late relative to when the ad platform expects them. A "purchase" event that fires immediately when a user clicks "buy" might be rejected if payment processing fails a few seconds later. A better implementation fires the purchase event only after payment confirmation. Similarly, events fired before SDK initialisation completes won't be tracked. If 10% of users complete purchase flows before your SDK fully initialises, you'll see a systematic 10% revenue discrepancy.
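
Most MMP SDKs buffer early events internally, but if you're instrumenting events yourself, the pattern below, a rough sketch rather than any specific SDK's API, shows the queue-then-flush approach that avoids losing events fired before initialisation completes:

```python
import queue

class EventBuffer:
    """Queue events fired before SDK initialisation and flush them afterwards."""

    def __init__(self, send_fn):
        self._send = send_fn            # callable that actually transmits the event
        self._pending = queue.Queue()
        self._initialised = False

    def track(self, name: str, params: dict) -> None:
        if self._initialised:
            self._send(name, params)
        else:
            self._pending.put((name, params))   # hold until the SDK is ready

    def mark_initialised(self) -> None:
        self._initialised = True
        while not self._pending.empty():
            self._send(*self._pending.get())    # flush everything queued pre-init
```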

Diagnostic Test: Log into your MMP's postback tester or event validation interface. Send a test purchase event with known values through your production SDK. Verify it appears in your MMP dashboard with correct event names and parameters. Then check if that test event appears in your ad platform's event manager. If the event shows in your MMP but not in the ad platform, you have a postback configuration error. If parameters are missing or wrong, you have a mapping error.

Fix Protocol: Create a standardised event taxonomy document that defines exact event names, required parameters, and data types for every event you track. Share this with your engineering team, your MMP support team, and your ad platform representatives. Test every event in a staging environment before pushing to production. Build validation into your SDK integration—if a purchase event is missing the "value" parameter, log an error rather than sending incomplete data.
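
A lightweight version of that validation can live in your own tracking layer. The sketch below assumes a hypothetical taxonomy where the event is named "Purchase" with "value" and "currency" parameters; substitute the exact names from your own spec:

```python
REQUIRED_PURCHASE_PARAMS = {"value", "currency"}   # from your taxonomy document

def validate_purchase_event(name: str, params: dict) -> list[str]:
    """Return a list of problems instead of silently sending incomplete data."""
    problems = []
    if name != "Purchase":                         # exact, case-sensitive match
        problems.append(f"unexpected event name: {name!r}")
    missing = REQUIRED_PURCHASE_PARAMS - params.keys()
    if missing:
        problems.append(f"missing parameters: {sorted(missing)}")
    if "currency" in params and len(str(params["currency"])) != 3:
        problems.append("currency should be a 3-letter ISO code, e.g. 'INR'")
    return problems

print(validate_purchase_event("Purchase_Complete", {"amount": 999}))
# Flags the name mismatch and the missing value/currency parameters.
```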

Platforms like Linkrunner provide event validation dashboards that automatically flag events with missing parameters or formatting errors, reducing the time teams spend debugging why postbacks aren't arriving at ad networks correctly.

Diagnostic Category 3: Timezone and Currency Differences

Timezone mismatches create date-level discrepancies that are often mistaken for missing attribution. Currency mismatches create revenue discrepancies that break ROAS calculations. Both are simple to diagnose but surprisingly common.

Timezone Discrepancies: Your MMP is set to IST (Indian Standard Time, UTC+5:30). Meta Ads Manager defaults to your account timezone, which might be PST (UTC-8). An install at 11:45 PM IST on January 15th is 10:15 AM PST on January 15th. Your MMP counts it as a January 15th install, and Meta counts it the same day; so far, so good. But if that install happens at 2:30 AM IST on January 16th, it's still January 15th in PST. Now you have a date mismatch: your MMP shows an install on the 16th, Meta shows it on the 15th.

This creates systematic daily discrepancies that compound over time. When you compare yesterday's performance, you're actually comparing different 24-hour periods. The fix is simple in theory—align all platforms to the same timezone—but requires coordinated configuration across your MMP, ad accounts, analytics platforms, and internal reporting tools.
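
The date shift is easy to reproduce. A quick sketch using Python's zoneinfo shows how the same install lands on different calendar dates in IST and PST:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The 2:30 AM IST install from the example above.
install_ist = datetime(2026, 1, 16, 2, 30, tzinfo=ZoneInfo("Asia/Kolkata"))
install_pst = install_ist.astimezone(ZoneInfo("America/Los_Angeles"))

print(install_ist.date())  # 2026-01-16 -> the date an IST-configured MMP reports
print(install_pst.date())  # 2026-01-15 -> the date a PST-configured ad account reports
```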

Currency Mismatches: More dangerous are currency conversion errors. Your app charges users in Indian Rupees (₹). Your MMP is configured to report in INR. Meta Ads Manager shows USD by default. If you're comparing revenue numbers directly without currency conversion, every calculation is wrong. A ₹1,000 purchase (roughly $12 USD) appears as $1,000 in Meta if you haven't configured currency correctly, making ROAS look 83x better than reality.

Even when currency conversion is enabled, exchange rate differences create variance. Your MMP might convert INR to USD using yesterday's exchange rate. Meta might use a weekly average. Google might use real-time rates. A 2-3% variance in ROAS calculations solely from exchange rate timing is normal for apps with multi-currency revenue.
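
One way to keep control is to convert revenue once, on your side, before any postback goes out. The sketch below is illustrative only; the hard-coded rate table stands in for whatever rate source and refresh schedule your finance team agrees on:

```python
# Placeholder rate table; in practice, pull rates from the source and schedule
# your finance team has agreed on, so every platform sees the same conversion.
RATES_TO_USD = {"INR": 1 / 83.0, "USD": 1.0}

def to_usd(amount: float, currency: str) -> float:
    return round(amount * RATES_TO_USD[currency], 2)

print(to_usd(999, "INR"))  # ~12.04 -> the value sent in every postback
```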

Diagnostic Test: Pull install data for a single day from both your MMP and ad platform. Export with timestamp data visible. If you see installs clustered around midnight in one system but evenly distributed in another, you have a timezone mismatch. For currency issues, pull a revenue event with a known value (like a ₹999 subscription purchase) and verify it appears identically in your MMP and ad platform. If the values differ, check currency configuration.

Fix Protocol: Standardise on a single timezone for all reporting, preferably UTC or your primary business timezone (IST for India-based teams). Configure every tool—MMP, Meta, Google, TikTok, analytics platforms—to use the same timezone. Document this choice in your measurement spec. For currency, configure your MMP to send revenue to ad platforms in the currency those platforms expect (usually USD) rather than letting each platform do its own conversion. This way you control the exchange rate and timing used for all platforms.

Diagnostic Category 4: Organic Traffic Misattribution

Organic install misattribution creates discrepancies by incorrectly categorising paid installs as organic, or vice versa. This doesn't change total install counts but dramatically affects attributed performance, making paid campaigns look worse than reality and organic channels look better.

Click Spam and Last-Click Attribution: Click spam occurs when an attribution partner (often an ad network or affiliate) sends fake clicks just before organic installs occur. The spam click receives attribution credit, stealing credit from the actual marketing channel. Your MMP reports 1,000 installs from Campaign A. In reality, 300 of those were organic installs that happened to be preceded by spam clicks. Campaign A's true install count is 700, making its CPI 43% higher than reported.

Sophisticated fraud detection filters most click spam, but basic setups often miss it. The pattern to watch: if a traffic source shows impossibly fast click-to-install times (under 2 seconds), high install volumes with near-zero post-install engagement, or install spikes that don't correlate with actual campaign spend increases, click spam is likely stealing attribution credit.

Deep Link Attribution Issues: Users who click a deep link from an email, SMS, or WhatsApp message should be attributed to those channels. If deep linking is misconfigured, these installs appear as organic. A fintech app sends 10,000 referral links via WhatsApp. Users click those links, install the app, and complete signup. If the deep link parameters aren't properly captured by your MMP, all 10,000 installs are marked organic instead of "WhatsApp Referral", making your paid UA performance look worse and your organic performance look unrealistically strong.

Web-to-App Tracking Gaps: Users who visit your mobile website and then install your app represent a multi-touch journey. If your web SDK isn't integrated with your MMP, or if cookie tracking is blocked, these installs appear organic even though they were driven by web-based marketing spend. An eCommerce app spends ₹500,000 on Google Search ads driving 50,000 visitors to its mobile site, and 30% of those visitors install the app within 7 days. Without proper web-to-app tracking, those 15,000 installs (an effective ₹33 per install) are categorised as organic, making your Google Search campaigns look unprofitable when they're actually driving significant app growth.

Diagnostic Test: Compare your MMP's organic install count against your analytics platform's new user count for the same period, filtered to users with no prior sessions. If your MMP shows 40% organic installs but your analytics platform shows 25% truly new users, 15% of your "organic" installs are likely misattributed paid installs. Dig into install timing—organic installs typically show consistent daily patterns, while paid installs spike with campaign activity. If your "organic" installs spike on days when you increased paid spend, attribution is leaking.
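
If your MMP lets you export install-level data, a few lines of pandas can surface the click-spam signature. The column names (click_time, install_time, media_source) and the 5-second threshold are assumptions to adapt to your own export:

```python
import pandas as pd

# Hypothetical install-level export; column names are assumptions about your MMP's format.
installs = pd.read_csv(
    "attributed_installs.csv", parse_dates=["click_time", "install_time"]
)
installs["ctit_seconds"] = (
    installs["install_time"] - installs["click_time"]
).dt.total_seconds()

# Ultra-fast click-to-install times are the classic click-spam signature.
suspicious = installs[installs["ctit_seconds"] < 5]
print(suspicious.groupby("media_source").size().sort_values(ascending=False))
```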

Fix Protocol: Enable fraud detection with click-to-install time validation, device fingerprint analysis, and engagement quality checks. Configure organic install attribution with strict criteria—only count as organic if there's zero attributed click or view within your longest attribution window. Implement web-to-app tracking by integrating your MMP's web SDK on your mobile website and passing user identifiers across the web-to-app boundary. Test deep link attribution by clicking links from every channel you use (email, SMS, WhatsApp, influencer bios) and verifying they're correctly attributed in your MMP dashboard.

Diagnostic Category 5: Platform Reporting Delays (SKAN Latency, Postback Processing)

Timing delays create temporary discrepancies that resolve themselves within 24-72 hours but can panic teams who expect real-time alignment. Understanding which delays are normal helps you avoid wasting time investigating phantom problems.

SKAN Postback Delays: SKAdNetwork (SKAN) intentionally delays postbacks to protect user privacy. After an iOS install, Apple waits 24-72 hours before sending the SKAN postback to your MMP, sometimes longer depending on conversion value configuration. This means iOS campaign data in your MMP is always 1-3 days behind real-time ad platform reporting. If you compare today's Meta iOS campaign performance in Ads Manager (which shows real-time data) against your MMP's SKAN data (which reflects installs from 2-3 days ago), you'll see massive discrepancies that disappear when you compare the same calendar dates after the full SKAN delay period.

Postback Processing Queues: When your MMP receives an install event, it must process attribution, send postbacks to relevant ad platforms, update dashboards, and sync with analytics integrations. Large install volumes (10,000+ per hour) can create processing delays of 15 minutes to 2 hours. During this window, your ad platforms show more installs than your MMP because they're receiving postbacks before your MMP has fully processed and displayed the corresponding installs in your dashboard.

Ad Platform Aggregation Delays: Meta, Google, and TikTok don't update their dashboards instantly. They aggregate data in batches, typically every 15-30 minutes. A Meta campaign that drove 100 installs in the last 15 minutes won't show all 100 immediately. This creates short-term discrepancies between real-time MMP data and delayed ad platform reporting, particularly for fast-moving campaigns.

Analytics Platform Attribution Delays: Google Analytics 4 and similar platforms process app data asynchronously. An install might appear in your MMP within seconds but take 4-24 hours to appear in GA4, depending on data processing volume. Firebase Analytics typically shows data within 1-4 hours but can delay up to 24 hours during high-volume periods. If you're comparing same-day MMP data against GA4, you'll see discrepancies that resolve after waiting for full processing.

Diagnostic Test: Instead of comparing today's data across platforms, pull a 7-day cohort from one week ago and compare totals. If the discrepancy disappears or drops significantly (from 30% to 5%), you had a timing delay issue, not a measurement failure. For SKAN specifically, compare iOS install counts from 5 days ago against current SKAN postback data in your MMP. The numbers should align closely since all SKAN delays would have resolved by then.
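
A simple way to encode this check: compare the gap on fresh data against the gap on data old enough for every delay to have cleared. The thresholds below are illustrative, mirroring the 30% to 5% example above:

```python
def delay_or_failure(recent_gap_pct: float, settled_gap_pct: float) -> str:
    """Compare the gap on fresh data with the gap on data old enough to have settled."""
    if abs(settled_gap_pct) <= 10 and abs(recent_gap_pct) > abs(settled_gap_pct):
        return "processing delay: the gap closes once postbacks catch up"
    return "persistent gap: continue the diagnostic workflow"

print(delay_or_failure(recent_gap_pct=30, settled_gap_pct=5))
```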

Fix Protocol: Build reporting rhythms that account for processing delays. Don't make budget decisions based on same-day data comparisons. Wait until day +3 for deterministic attribution to stabilise and day +5 for SKAN data to fully arrive. Configure alerts based on 3-day trailing data, not real-time data. Document expected delay windows for each platform in your measurement spec so new team members understand why yesterday's numbers might not match today's view of yesterday.

The Troubleshooting Workflow: 20-Minute Diagnostic Checklist

When you spot a discrepancy that exceeds acceptable variance, follow this systematic diagnostic process to pinpoint the root cause in under 20 minutes.

Step 1: Define the Discrepancy (2 minutes) Write down the specific discrepancy you're investigating: "MMP shows 2,847 Meta installs on January 14th. Meta Ads Manager shows 3,104 installs on January 14th. Difference: +257 installs (9% variance)." Note the date range, platforms being compared, metric (installs vs events vs revenue), and percentage variance. This documentation helps you remember what you were troubleshooting if you need to escalate to support teams.

Step 2: Check Attribution Window Alignment (3 minutes) Open your MMP's attribution settings. Note your click window (typically 7 days) and view window (typically 1 day). Open Meta Ads Manager settings and verify Meta's attribution setting matches. If Meta uses 7-day click/1-day view and your MMP matches, attribution windows are aligned. If they differ, note the difference and calculate expected variance. A 7-day MMP window compared against a platform's longer default (such as Google Ads' 30-day click window) could create a 15-20% discrepancy favouring the ad platform.

Step 3: Verify Timezone Consistency (2 minutes) Check your MMP's dashboard timezone setting. Check Meta Ads Manager's timezone (usually found in account settings or by hovering over the date selector). If they differ by several hours, calculate whether the discrepancy matches a timezone shift. A 13.5-hour difference (IST to PST) means events near midnight create date-level mismatches. If timezones are aligned, move to the next step.

Step 4: Check Processing Delays (3 minutes) If you're investigating today's or yesterday's data, processing delays are likely. Change your date range to 5 days ago and recheck the discrepancy. For iOS campaigns specifically, check whether you're comparing SKAN data (delayed by 24-72 hours) against Meta's real-time reporting. If the discrepancy shrinks or disappears when you look at older data, you've identified a processing delay, not a measurement failure.

Step 5: Validate Event Mapping (4 minutes) If the discrepancy affects revenue events or post-install events (not just installs), check event mapping. Open your MMP's postback configuration screen. Verify event names sent to Meta match Meta's expected event names exactly (including capitalisation). Check that required parameters (value, currency) are included. If you spot mismatches, document them and proceed to Step 7.

Step 6: Investigate Organic Misattribution (3 minutes) Pull your MMP's organic install count for the date range you're investigating. Compare it to your analytics platform's new user count (filtered to users with no prior app sessions). If organic installs are >20% higher than truly new users in analytics, paid installs are leaking into organic. Check for click spam by filtering your MMP's attributed installs to those with click-to-install times under 5 seconds. High volumes of ultra-fast installs suggest click spam is stealing credit.

Step 7: Document and Fix (3 minutes) Based on the diagnostic steps above, you've identified one of five root causes: attribution window mismatch, timezone difference, processing delay, event mapping error, or organic misattribution. Document your finding in your measurement log with date, discrepancy amount, root cause, and fix action. If the root cause is a configuration error (windows, timezones, event names), fix it immediately and note when you expect corrected data to appear (usually 24-48 hours for deterministic attribution, 3-5 days for SKAN).

If you complete all seven steps and still can't identify the root cause, the discrepancy likely results from fraud filtering, bot removal, or platform-specific edge cases that require vendor support. Document what you've ruled out and contact your MMP support team with your diagnostic results to accelerate their troubleshooting.

When to Investigate vs When to Accept: Decision Matrix

Not every discrepancy deserves investigation. Use this decision matrix to prioritise troubleshooting efforts on discrepancies that actually matter while accepting normal measurement variance.

Investigate Immediately (High Priority): Discrepancies >20% for any metric. A 25% difference between MMP and ad platform install counts almost always indicates a technical failure, not measurement variance. Revenue discrepancies >15%. If your MMP shows ₹100,000 in attributed revenue but Meta shows ₹85,000, your ROAS calculations are wrong enough to drive bad budget decisions. Sudden discrepancy spikes. If your typical variance is 5% but suddenly jumps to 18% overnight, something broke—a postback configuration, an SDK integration, or a campaign setup.

Investigate Within 1 Week (Medium Priority): Consistent discrepancies of 12-18%. Not urgent, but worth understanding whether this represents normal variance for your specific setup or a subtle configuration issue. Campaign-specific discrepancies. If most campaigns show 5% variance but one specific campaign shows 20%, investigate that campaign's setup—it might have different attribution windows or tracking parameters. Platform-specific discrepancies. If Android campaigns show 5% variance but iOS campaigns show 15%, SKAN configuration might be misaligned.

Accept as Normal Variance (Low Priority): Discrepancies <10% that remain stable over time. If your MMP consistently reports 95% of what Meta reports, and that ratio doesn't fluctuate wildly, you have stable measurement with predictable variance. Discrepancies that disappear after 3-5 days. Processing delays and SKAN latency create temporary gaps that resolve automatically—don't waste time investigating them. Small absolute differences on low-volume campaigns. A campaign with 50 installs showing a 10-install difference (20% variance) is less concerning than a campaign with 10,000 installs showing the same 20% gap. Low volumes amplify percentage variance.

The practical decision rule: if a discrepancy would change your budget allocation decision (move spend between campaigns, pause a channel, or increase investment), investigate it. If the discrepancy is too small to influence action, accept it as measurement noise and move on.
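
If you want to bake this matrix into a report or alert, a rough triage function, treating the thresholds above as illustrative defaults, might look like this:

```python
def triage(variance_pct: float, revenue_metric: bool, sudden_spike: bool) -> str:
    """Rough triage of a discrepancy using the decision matrix above."""
    v = abs(variance_pct)
    if v > 20 or (revenue_metric and v > 15) or sudden_spike:
        return "investigate immediately"
    if v > 12:
        return "investigate within a week"
    return "accept as normal variance"

print(triage(variance_pct=9, revenue_metric=False, sudden_spike=False))
# -> "accept as normal variance", matching the intro example's 9% install gap
```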

Linkrunner's validation dashboard compares MMP data against ad platform reporting automatically, flagging discrepancies >10% with suggested root causes so troubleshooting starts with context, not guesswork. This reduces the diagnostic cycle from hours to minutes by eliminating the manual data comparison work.

Key Takeaways

Attribution discrepancies are inevitable when measuring across distributed systems with different technical architectures, but systematic diagnosis separates normal variance from critical failures. Most teams waste time trying to achieve perfect alignment instead of focusing on discrepancies that actually matter—those large enough to drive wrong budget decisions.

The five most common root causes—attribution window mismatches, event mapping errors, timezone differences, organic misattribution, and processing delays—account for 90% of attribution discrepancies. Each has a specific diagnostic test and fix protocol that resolves the issue within 1-3 days once identified.

The critical skill isn't eliminating all discrepancies but knowing which ones require investigation. A stable 5-8% variance is normal and expected. A 20% gap or sudden variance spike indicates something broke and needs fixing. Build your measurement operations around this distinction, using the 20-minute diagnostic checklist when discrepancies exceed your acceptable variance threshold.

Attribution accuracy matters most when making budget reallocation decisions. If a discrepancy is small enough that it wouldn't change which campaigns you scale or pause, accept it as measurement noise and focus on higher-impact work. Reserve detailed investigation for discrepancies that would materially affect your marketing strategy.

If you're spending hours each week manually comparing data across platforms or investigating discrepancies that turn out to be normal variance, modern MMPs like Linkrunner can operationalise this diagnostic workflow automatically. You get flagged when something needs attention, with context on likely root causes, rather than discovering problems days later when budget has already been misallocated. Request a demo from Linkrunner to see how automated validation dashboards reduce troubleshooting time while improving measurement confidence.

