

10 Data Quality Checks to Run on Your MMP Every Month

Lakshith Dinesh
Updated on: Mar 16, 2026
The scariest MMP bug we have seen was not a crash. It was a dashboard that looked perfectly normal for 11 weeks while postbacks to Meta had silently stopped firing. The team kept making budget decisions. The data kept looking plausible. Nobody questioned it until a manual reconciliation revealed that Rs 4 lakh in spend had been optimised against phantom signals.
This is why monthly MMP checks matter. Teams that run them catch configuration drift an average of 3-4 weeks earlier than teams that only investigate when something "looks off" in a campaign report. The issue is not carelessness. It is that MMP configurations are not static. SDK updates change event behaviour. Ad network API changes modify postback formats. Team turnover means no one is left to maintain settings from six months ago. Each of these changes can introduce silent data errors that look perfectly normal in a dashboard.
This post gives you 10 specific checks to run every month, what each one detects, and how to act on the results. Copy this checklist. Assign an owner. Run it on the first Monday of every month.
Why Monthly MMP Checks Matter
A weekly audit catches acute issues: a postback that stopped firing yesterday, a campaign that suddenly shows zero installs. The weekly attribution audit checklist covers this fast-response layer.
Monthly checks catch gradual drift. Settings that were correct at launch degrade over time due to SDK updates, ad network API changes, new campaign types, or team turnover. These issues do not break anything visibly. They shift data accuracy by 5-15% in ways that compound over weeks. Monthly checks are the prevention layer that catches these before they become crises.
Check 1: SDK Version Consistency Across App Versions
What to verify: Confirm that the latest MMP SDK version is deployed on all live app builds. Check both iOS and Android, and verify that no older app version still in circulation is running a deprecated SDK.
Why it matters: MMP SDKs are updated to fix bugs, support new ad network requirements, maintain privacy compliance, and improve event accuracy. When an older app version runs an outdated SDK, the data from those users may be incomplete, incorrectly formatted, or missing entirely. If 20% of your user base is on an older app build with a deprecated SDK, 20% of your attribution data is potentially compromised.
How to check: Pull the current SDK version from your MMP documentation. Compare it against the SDK version deployed in your latest iOS and Android app builds. If any live build runs an SDK version more than two major versions behind, flag it for engineering to update in the next release cycle.
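If you keep a list of live builds and their deployed SDK versions, the "more than two major versions behind" rule is easy to script. The sketch below is illustrative: the build names, version strings, and `LATEST_SDK` value are assumptions, not real MMP data.

```python
LATEST_SDK = "6.14.0"  # assumed latest version from your MMP's docs

def major(version: str) -> int:
    """Extract the major version number from a dotted version string."""
    return int(version.split(".")[0])

def flag_outdated_builds(builds: dict, latest: str = LATEST_SDK) -> list:
    """Return build names whose SDK major version lags by more than 2."""
    return [
        build for build, sdk in builds.items()
        if major(latest) - major(sdk) > 2
    ]

# Hypothetical build -> deployed SDK version mapping
builds = {
    "ios-3.2.1": "6.12.0",
    "android-3.2.1": "6.13.1",
    "android-2.9.0": "3.4.0",  # old build still in circulation
}
print(flag_outdated_builds(builds))  # ['android-2.9.0']
```

Feed it whatever source of truth you have (release spreadsheet, CI manifest) and flag the output for the next release cycle.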
Check 2: Postback Health Across All Active Ad Networks
What to verify: Confirm that postbacks are actively firing and receiving acknowledgements from every ad network with live campaigns. Check that each network received at least one postback in the past 7 days.
Why it matters: Postbacks are the bridge between your MMP and ad network algorithms. When a postback breaks, the ad network stops receiving conversion signals and its optimisation algorithm goes blind. The MMP still tracks installs, but the network cannot learn which users are valuable. You continue spending, but the algorithm optimises toward volume instead of value.
For a detailed walkthrough of postback configuration and validation across Meta, Google, and TikTok, the postback setup guide covers each network step by step.
How to check: Go to your MMP's partner integration dashboard. For each active ad network, verify the last postback sent date and confirm acknowledgement status. If any network shows zero postbacks in the past 7 days despite having active campaigns, investigate immediately.
Platforms like Linkrunner include real-time postback monitoring that flags broken connections within hours rather than waiting for a monthly manual check.
Check 3: Organic vs Paid Install Ratio Trend
What to verify: Compare this month's organic/paid install split against the trailing 3-month average. Flag any shift greater than 10 percentage points.
Why it matters: The organic/paid ratio is the most sensitive early warning indicator for attribution configuration problems. When a postback breaks, attributed installs drop and the organic bucket inflates. When an attribution window changes, credit shifts between paid and organic. Neither of these changes produces an error message. They just quietly change the ratio.
How to check: Pull total installs, paid attributed installs, and organic installs for the current month and the preceding three months. Calculate the organic percentage for each. If the current month's organic share is more than 10 percentage points higher or lower than the 3-month average, something changed. Investigate postback health, attribution window settings, and any new ad network integrations or removals before accepting the number as genuine.
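The ratio comparison itself is simple arithmetic. A minimal sketch, assuming you can pull monthly install totals from your MMP's reporting export (the figures below are made up for illustration):

```python
def organic_share(total_installs: int, organic_installs: int) -> float:
    """Organic installs as a percentage of all installs."""
    return 100.0 * organic_installs / total_installs

def ratio_shifted(current_pct: float, trailing_pcts: list,
                  threshold_pp: float = 10.0) -> bool:
    """True if the current organic share deviates from the trailing
    3-month average by more than `threshold_pp` percentage points."""
    avg = sum(trailing_pcts) / len(trailing_pcts)
    return abs(current_pct - avg) > threshold_pp

# Illustrative monthly figures (assumed, not real data)
prior = [organic_share(50_000, 18_000),   # ~36.0%
         organic_share(52_000, 19_000),   # ~36.5%
         organic_share(48_000, 17_500)]   # ~36.5%
current = organic_share(51_000, 26_000)   # ~51.0% -- bucket inflated
print(ratio_shifted(current, prior))  # True
```

A `True` result is a prompt to investigate, not a verdict: check postback health and window settings before accepting or rejecting the number.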
Check 4: Event Mapping Consistency (MMP Events vs Ad Network Events)
What to verify: Confirm that each key in-app event mapped to ad network postbacks still matches the event name and definition in your MMP. Catch any events that were renamed, deprecated, or added without updating the postback mapping.
Why it matters: Event mapping drift is the stealth killer. We have spent entire afternoons debugging what turned out to be a renamed event. The fix took 30 seconds. Finding it took four hours. A product team renames "purchase_complete" to "order_confirmed" without informing the growth team. The MMP updates the event name, but the postback to Meta still references the old name. Meta stops receiving purchase signals and cannot optimise for revenue.
For guidance on building event taxonomies that resist this kind of drift, the event taxonomy design guide covers naming conventions and change management processes.
How to check: Export the list of events configured in your MMP. Compare them against the events mapped in each ad network's postback configuration. Verify names match exactly. Pay special attention to revenue events, registration events, and any event that was modified in the past 60 days.
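The name comparison can be automated as a set difference between the MMP's event list and each network's mapped events. The event and network names below are hypothetical, mirroring the renamed-purchase example above:

```python
def mapping_drift(mmp_events, network_mappings):
    """Return, per network, mapped event names missing from the MMP."""
    mmp = set(mmp_events)
    return {
        network: sorted(set(events) - mmp)
        for network, events in network_mappings.items()
        if set(events) - mmp
    }

# Hypothetical configuration after a product-team rename
mmp_events = ["order_confirmed", "registration", "trial_start"]
network_mappings = {
    "meta": ["purchase_complete", "registration"],  # stale name
    "google": ["order_confirmed", "registration"],
}
print(mapping_drift(mmp_events, network_mappings))
# {'meta': ['purchase_complete']}
```

An empty dict means every mapped event still exists; anything else is a postback that is silently sending nothing.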
Check 5: Revenue Attribution Completeness
What to verify: Compare MMP-reported revenue against your backend or finance system's reported revenue for the same period. Flag any gap greater than 15%.
Why it matters: Revenue attribution is the foundation of ROAS calculation. If your MMP under-reports revenue by 20%, every ROAS figure in your dashboard is 20% too low. That means you are potentially killing campaigns that are actually profitable or under-investing in channels that are delivering positive returns.
For a systematic approach to diagnosing revenue and attribution gaps, the attribution discrepancy diagnostic guide covers 12 root causes including revenue-specific issues.
How to check: Pull total revenue from your MMP for the past 30 days. Pull total app revenue from your payment processor, backend database, or finance team for the same period. Calculate the gap. If the MMP reports less than 85% of backend revenue, investigate which revenue events are missing, delayed, or misconfigured. Common causes: server-side events not firing, currency mismatches, and subscription renewals not being tracked.
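The 85% threshold check is a one-liner worth wiring into a scheduled alert. A sketch with assumed figures:

```python
def revenue_gap(mmp_revenue: float, backend_revenue: float) -> dict:
    """Compare MMP-reported revenue to the backend/finance figure."""
    ratio = mmp_revenue / backend_revenue
    return {
        "mmp_pct_of_backend": round(100 * ratio, 1),
        "status": "ok" if ratio >= 0.85 else "investigate",
    }

print(revenue_gap(80_000, 100_000))
# {'mmp_pct_of_backend': 80.0, 'status': 'investigate'}
```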
Check 6: Attribution Window Settings vs Current Campaign Mix
What to verify: Confirm that click-through and view-through attribution windows still align with your active channel strategy. Check whether any new channels have been added since the last window review.
Why it matters: Attribution windows should reflect actual user behaviour for each channel. A 7-day click window makes sense for Meta and Google Search. But if you added influencer campaigns, podcast sponsorships, or CTV ads since your last review, those channels typically have longer conversion cycles (14-30 days). Running them on a 7-day window means you are systematically under-attributing their contribution.
How to check: List every active ad channel. Next to each, note the current attribution window configured in your MMP. Compare against the recommended window based on that channel's typical conversion cycle. Adjust any mismatches. As a baseline: 7-day click and 1-day view for Meta/Google, 14-day click for influencer/affiliate, 1-day click for retargeting.
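Those baselines can be encoded as a lookup table so the mismatch review is mechanical. The channel names and window values below simply restate the baselines above; adjust them to your own channel mix:

```python
# Baseline windows from the checklist above (assumed defaults)
RECOMMENDED = {
    "meta":        {"click_days": 7,  "view_days": 1},
    "google":      {"click_days": 7,  "view_days": 1},
    "influencer":  {"click_days": 14, "view_days": 0},
    "retargeting": {"click_days": 1,  "view_days": 0},
}

def window_mismatches(configured):
    """Return channels whose configured MMP windows differ from baseline."""
    return sorted(
        ch for ch, w in configured.items()
        if ch in RECOMMENDED and w != RECOMMENDED[ch]
    )

# Hypothetical current MMP configuration
configured = {
    "meta":       {"click_days": 7, "view_days": 1},
    "influencer": {"click_days": 7, "view_days": 0},  # too short
}
print(window_mismatches(configured))  # ['influencer']
```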
Check 7: Fraud Detection Rule Effectiveness
What to verify: Review fraud flagging rates by network and campaign for the past 30 days. Investigate any network with a 0% fraud flag rate.
Why it matters: A 0% fraud rate does not mean zero fraud. It usually means your fraud detection rules are too loose or not configured for that network. Every ad network has some level of invalid traffic. If your rules are not catching any of it, you are likely paying for fraudulent installs without knowing.
How to check: Pull fraud flagging rates per ad network. If any network with significant spend shows 0% fraud flags, review the fraud rules configured for that network. Check click-to-install time thresholds, device-farm detection, and click spam filters. Tighten rules incrementally and monitor the impact on install counts.
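The "significant spend, zero flags" filter is another easy scripted alert. The spend threshold and per-network stats below are illustrative assumptions:

```python
def zero_fraud_networks(stats, min_spend=1_000.0):
    """Networks spending >= min_spend with zero fraud-flagged installs."""
    return sorted(
        network for network, s in stats.items()
        if s["spend"] >= min_spend and s["fraud_flagged"] == 0
    )

# Hypothetical 30-day stats per network
stats = {
    "network_a": {"spend": 12_000, "fraud_flagged": 140},
    "network_b": {"spend": 8_000,  "fraud_flagged": 0},   # suspicious
    "network_c": {"spend": 300,    "fraud_flagged": 0},   # negligible spend
}
print(zero_fraud_networks(stats))  # ['network_b']
```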
Check 8: SKAN Conversion Value Utilisation (iOS Only)
What to verify: Check that SKAN conversion value distribution is not clustering at zero or null. Confirm that fine-value and coarse-value schemas still map to your current monetisation events.
Why it matters: SKAN conversion values are your primary signal for iOS campaign optimisation post-ATT. If values cluster at zero, it means either the conversion value schema is misconfigured, the app update did not include the latest SKAN registration, or your monetisation events changed without updating the SKAN mapping.
For vertical-specific SKAN configuration strategies, the SKAN 4.0 configuration guide covers optimal setups for eCommerce, gaming, subscription, fintech, and lead gen apps.
How to check: Pull SKAN postback data for the past 30 days. Look at the distribution of conversion values. If more than 50% of postbacks have a zero or null value, investigate the schema. Verify that the conversion events still match your current in-app purchase or subscription flow. Test with a fresh iOS install on a physical device.
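The clustering check itself is just a share calculation over the postback values. A minimal sketch; the sample values are made up for illustration:

```python
def null_value_share(conversion_values):
    """Fraction of postbacks whose conversion value is 0 or None."""
    nulls = sum(1 for v in conversion_values if v is None or v == 0)
    return nulls / len(conversion_values)

# Hypothetical 30-day sample of SKAN conversion values
values = [0, None, 3, 0, 12, 0, None, 5, 0, 0]
share = null_value_share(values)
print(share, share > 0.5)  # 0.7 True -> investigate the schema
```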
Check 9: Deep Link Routing Accuracy
What to verify: Test 5-10 active campaign links end-to-end. Confirm that each link routes the user to the correct in-app screen and that attribution credit is properly recorded.
Why it matters: Deep links degrade silently. An app update can change a screen's route without updating the deep link configuration. A new in-app browser behaviour can strip URL parameters. Any of these breaks the user experience or the attribution chain.
How to check: Select 5-10 active campaign links from your top-spending campaigns. Click each on a test device and verify the correct screen loads. Check your MMP dashboard to confirm the test was properly attributed to the correct campaign.
Check 10: Data Export and Webhook Integrity
What to verify: Confirm that scheduled data exports and webhooks are delivering complete, correctly formatted data. Validate row counts and schema against the previous month's export.
Why it matters: Corrupted or incomplete MMP data breaks every downstream report. Export failures can run unnoticed for weeks if nobody validates the data pipeline.
How to check: Pull the most recent scheduled export and count the rows. Compare against expected volume based on install counts. Verify all expected columns are present and data is formatted correctly (dates, currencies, event names). If you use webhooks, check the delivery log for any failed or dropped events in the past 30 days.
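Row-count and schema validation can be scripted against last month's export. This is a hedged sketch: the column names, 20% drift tolerance, and row counts are assumptions to adapt to your actual export schema.

```python
def validate_export(rows, expected_columns, prev_row_count,
                    tolerance=0.2):
    """Return a list of problems found in a monthly export."""
    problems = []
    if not rows:
        return ["export is empty"]
    missing = set(expected_columns) - set(rows[0].keys())
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    drift = abs(len(rows) - prev_row_count) / prev_row_count
    if drift > tolerance:
        problems.append(f"row count drifted {drift:.0%} vs last month")
    return problems

# Hypothetical export rows (70 rows vs 100 last month, no revenue column)
rows = [{"date": "2026-03-01", "campaign": "c1", "installs": 120}] * 70
print(validate_export(rows, ["date", "campaign", "installs", "revenue"],
                      prev_row_count=100))
```

An empty list means the export passes both checks; anything else goes to whoever owns the data pipeline.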
Building This Into a Monthly Routine
Running all 10 checks takes roughly 60-90 minutes. Here is how to structure it across the month.
Week 1 (first Monday): Infrastructure checks (30 minutes)
Check 1: SDK version consistency
Check 2: Postback health
Check 10: Data export integrity
Week 2: Attribution accuracy checks (30 minutes)
Check 3: Organic/paid ratio
Check 5: Revenue attribution completeness
Check 6: Attribution window alignment
Week 3: Event and fraud checks (20 minutes)
Check 4: Event mapping consistency
Check 7: Fraud detection rule effectiveness
Week 4: Platform-specific checks (20 minutes)
Check 8: SKAN conversion value utilisation
Check 9: Deep link routing accuracy
Ownership: Checks 1-2 and 10 require coordination with engineering. Checks 3-9 can be run entirely by the growth or performance marketing team. Assign a single owner who coordinates across both groups.
Escalation criteria: Any failed check that affects revenue attribution (checks 2, 4, 5, 6) should be escalated for immediate fix. Infrastructure checks (1, 10) should be fixed within the current sprint. SKAN and deep link checks (8, 9) should be investigated within the week.
Frequently Asked Questions
How often should we audit MMP data quality beyond the monthly checks?
Run the weekly audit every Monday. If you ship a major app update, add an ad network, or change attribution settings, run the relevant checks immediately.
What is the most common MMP data quality issue that goes undetected?
Event mapping drift. Product teams rename in-app events without notifying the growth team, breaking postback mappings. Ad networks stop receiving signals they need for optimisation.
Should engineering or marketing own MMP data quality checks?
Marketing should own the routine and escalation criteria. Engineering should own fixes for checks 1, 2, and 10 (SDK, postback infrastructure, data exports). The monthly owner should be someone on the growth team with enough technical context to interpret results and coordinate fixes.
How do we validate MMP revenue data against backend systems?
Pull total attributed revenue from your MMP for 30 days. Pull total app revenue from your payment processor for the same period. Gaps under 10% are acceptable (timing, refunds). Gaps above 15% indicate a configuration issue: missing revenue event, currency mismatch, or server-side events not firing.
What tools can automate parts of the monthly MMP data audit?
Most MMPs expose reporting APIs, so you can script the postback health, organic/paid ratio, and revenue comparison checks into automated alerts. BI tools can flag anomalies via scheduled queries.
Building the Habit Before You Need It
Remember the team with 11 weeks of phantom postback data? They were not careless. They just did not have a monthly check in place. The dashboard never showed an error. It never will. That is why you schedule the check before you need it.
The difference between teams that trust their attribution data and teams that constantly question it is almost never the platform. It is the hygiene habit. The checklist is the practice that keeps the tool accurate.
Copy this checklist. Assign an owner. Block 90 minutes on the first Monday of every month. Start with check 2 (postback health). It catches the highest-impact issues. Then work through the rest.
If you want to reduce the manual overhead, request a demo from Linkrunner to see how built-in integration testing, real-time postback monitoring, and automated data validation cover several of these checks automatically.



