

Your Marketing Team Is Tracking Too Many Events (And It's Hurting Your Ad Performance)

Lakshith Dinesh
Updated on: Mar 16, 2026
The most common event taxonomy advice in mobile marketing is wrong.
Teams tracking 40+ in-app events consistently have worse ad platform optimisation than teams tracking 8-12 well-chosen ones. More data does not equal better decisions. The problem is not missing information. It is signal dilution.
We once opened an app's postback config and counted 47 separate events being sent to Meta. Forty-seven. The algorithm was trying to optimise for everything, which meant it was optimising for nothing. Somewhere between the product analytics team wanting granular behavioural data and the marketing team wanting detailed funnel visibility, most apps end up sending 15-20 events as postbacks to ad platforms. But when those events arrive at Meta, Google, and TikTok, you are not giving the algorithms more information. You are giving them noise. This is one of the few areas in mobile marketing where less genuinely produces better results.
## The "Track Everything" Advice and Why It Backfires
The "track everything" philosophy originates from product analytics, where granular event data helps teams understand user behaviour, identify friction points, and measure feature adoption. In that context, more events is genuinely better. The cost of tracking an extra event in Mixpanel or Amplitude is negligible, and the data might be useful someday.
The problem starts when this philosophy is applied without modification to marketing attribution. Product analytics events and marketing attribution events serve fundamentally different purposes.
**Product analytics events** are consumed by humans looking at dashboards, running queries, and building cohort analyses. Humans can filter noise, ignore irrelevant events, and focus on what matters. More granularity helps humans find patterns.
**Marketing attribution events sent as postbacks** are consumed by machine learning algorithms that optimise ad delivery. Meta's algorithm, Google's UAC system, and TikTok's optimisation engine all use conversion postbacks to decide which users to target, how much to bid, and which creatives to show. These algorithms need clear, high-volume, consistent signals. When you send 20 different conversion events, the algorithm has to decide which ones matter, and it often decides wrong.
The result is suboptimal postback configuration that actively undermines the ad platforms' ability to find your best users.
## How Event Bloat Damages Ad Platform Optimisation
Meta's 50-conversion-per-week threshold is not a suggestion. It is how the algorithm works. Below it, you are paying learning-phase prices indefinitely.
If you are optimising for "purchase" and getting 80 purchase events per ad set per week, the algorithm has enough signal to exit learning phase. But if you split those same users across "add_to_cart," "initiate_checkout," "purchase," "purchase_confirmed," and "purchase_delivered," each event might only get 15-20 occurrences per ad set per week. None hits the threshold. Your CPA stays elevated indefinitely.
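To make the arithmetic concrete, here is a minimal sketch using the hypothetical numbers from the example above. It simply checks which events clear the weekly learning threshold when the same conversions are consolidated versus split:

```python
# Hypothetical illustration of signal dilution, using the numbers above.
WEEKLY_LEARNING_THRESHOLD = 50  # Meta's approximate per-ad-set requirement

# One consolidated event: clears the threshold comfortably.
consolidated = {"purchase": 80}

# The same users split across five overlapping events.
split = {
    "add_to_cart": 20,
    "initiate_checkout": 18,
    "purchase": 16,
    "purchase_confirmed": 15,
    "purchase_delivered": 11,
}

for label, events in [("consolidated", consolidated), ("split", split)]:
    passing = [e for e, n in events.items() if n >= WEEKLY_LEARNING_THRESHOLD]
    print(f"{label}: {len(passing)} of {len(events)} events clear the threshold")
# consolidated: 1 of 1 events clear the threshold
# split: 0 of 5 events clear the threshold
```

Same users, same spend, same total conversions. Only the event structure differs, and only one configuration ever exits learning phase.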
**Google UAC's tCPA learning period extends when conversion signals are noisy.** Google needs consistent conversion data to calibrate its bidding. When the conversion signal switches between events (sometimes "registration," sometimes "first_purchase," sometimes "tutorial_complete"), the algorithm cannot build a stable user profile for targeting. The learning period that should take 2-3 weeks stretches to 6-8 weeks, during which budget is spent inefficiently.
**TikTok's optimisation engine struggles with event hierarchy confusion.** TikTok allows you to set an optimisation event and send additional "reporting" events. But when teams send 15 events as postbacks without a clear hierarchy, TikTok's algorithm can weight lower-funnel events inconsistently. The result is targeting that oscillates between user profiles rather than converging on your ideal customer.
**Cross-platform compounding makes it worse.** If you are sending 15 events as postbacks to Meta and 15 to Google and 10 to TikTok, you now have three algorithms, each trying to learn from noisy signals independently. Meta might start optimising toward "add_to_cart" users while Google gravitates toward "registration_complete" users, creating divergent audience profiles across platforms. Your campaign performance looks inconsistent, but the root cause is not the creative or the targeting. It is the signal architecture.
**The observable pattern:** teams that audit their event setup and trim postback events from 20+ down to 8-10, while keeping the same events tracked internally for analytics, typically see a 15-25% improvement in CPA within 4-6 weeks. The spend does not change. The creative does not change. Only the signal clarity improves. That is free performance sitting in your postback configuration, waiting to be unlocked.
## The Event Hierarchy: Which Events to Track vs Which to Send as Postbacks
The solution is not tracking fewer events. It is separating what you track for internal analysis from what you send to ad platforms as optimisation signals.
**Track in your MMP: as many events as useful for internal analysis.** There is no harm in tracking 50 events in your attribution platform if your team uses that data for cohort analysis, funnel diagnostics, and product insights. This data stays internal and is consumed by humans who can handle complexity.
**Send as postbacks to ad platforms: only 3-5 high-signal events per platform.** These are the events that the algorithm will use to find and target similar users. They need to be high-volume, high-value, and clearly hierarchical.
The recommended hierarchy for most apps (a configuration sketch follows the list):
1. **Install** (always sent, baseline signal)
2. **Registration or signup** (first meaningful engagement)
3. **Key activation event** (the action that predicts retention: first lesson for edtech, first deposit for fintech, first add-to-cart for eCommerce)
4. **First purchase or revenue event** (first monetary value generated)
5. **Repeat purchase** (validation of user quality)
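Here is a minimal sketch of this hierarchy expressed as a per-platform postback configuration. The `PostbackConfig` structure and the event names are illustrative assumptions, not any specific MMP's API:

```python
# A sketch of the five-event hierarchy as a per-platform postback config.
from dataclasses import dataclass, field

@dataclass
class PostbackConfig:
    platform: str
    optimisation_event: str          # the primary signal the algorithm trains on
    postback_events: list[str] = field(default_factory=list)

meta = PostbackConfig(
    platform="meta",
    optimisation_event="first_purchase",   # revenue beats installs as a signal
    postback_events=[
        "install",          # baseline signal, always sent
        "registration",     # first meaningful engagement
        "activation",       # the action that predicts retention
        "first_purchase",   # first monetary value generated
        "repeat_purchase",  # validation of user quality
    ],
)

assert len(meta.postback_events) <= 5, "keep the postback set lean"
```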
**Revenue events should be your primary optimisation signal whenever possible.** Optimising for installs tells the algorithm to find cheap users. Optimising for revenue tells it to find valuable users. The CPI will be higher, but the ROAS will be dramatically better.
This is a mindset shift that many teams resist. "But our CPI will go up!" Yes. And your cost per paying user will go down. The algorithm stops finding users who install and never open the app again, and starts finding users who spend money. Understanding which events actually predict LTV is the foundation for choosing the right postback events.
## How to Audit Your Current Event Setup
If you suspect event bloat is affecting your performance, run this five-step audit.
**Step 1: List every event currently tracked and every postback currently configured.** Export your full event list from your MMP. Then check each ad platform's postback configuration separately. Most teams discover events in their postback setup that nobody remembers adding.
**Step 2: For each postback event, check weekly volume per ad set.** Pull the last 4 weeks of conversion data per ad set for each postback event. If any event averages fewer than 50 conversions per ad set per week on Meta (or the equivalent threshold for Google and TikTok), it is likely hurting optimisation rather than helping it.
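If your MMP can export per-ad-set weekly conversion counts, a short script flags the offenders automatically. A minimal sketch, assuming a CSV with columns `event`, `ad_set`, `week`, and `conversions` (the file name and the flat 50-per-week threshold are assumptions for illustration):

```python
# Flag postback events that average below the weekly learning threshold.
import csv
from collections import defaultdict

THRESHOLD = 50  # Meta's approximate weekly per-ad-set requirement

weekly = defaultdict(list)  # (event, ad_set) -> [weekly conversion counts]
with open("postback_conversions.csv", newline="") as f:
    for row in csv.DictReader(f):
        weekly[(row["event"], row["ad_set"])].append(int(row["conversions"]))

for (event, ad_set), counts in sorted(weekly.items()):
    avg = sum(counts) / len(counts)
    if avg < THRESHOLD:
        print(f"FLAG: {event} in {ad_set} averages {avg:.0f}/week (< {THRESHOLD})")
```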
**Step 3: Identify redundant events.** Look for events that measure the same user action with different names: "add_to_cart" and "cart_updated," "purchase" and "order_confirmed," "signup" and "registration_complete." Redundant events split conversion credit and confuse algorithms.
**Step 4: Map each event to a decision it enables.** For every postback event, ask: "What budget or campaign decision would change based on this event's data?" If the answer is "none" or "it's just nice to know," the event should be tracked internally but removed from postbacks. This question alone usually eliminates 30-50% of postback events. Mid-funnel events like "viewed_product_details" or "opened_settings" are useful for product analytics but tell ad algorithms nothing meaningful about user quality.
**Step 5: Check for naming inconsistencies and duplicate tracking.** Event names like "Purchase," "purchase," "PURCHASE," and "in_app_purchase" may all exist in your setup, each counting as a separate event. Standardise naming across your MMP and ad platform configurations. This is more common than you would expect, especially in teams where multiple developers have added events over time without a shared naming convention. One fintech team we spoke to had three separate "KYC completed" events with slightly different names, each being sent as postbacks to Meta. The algorithm saw three low-volume events instead of one high-volume signal.
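A quick script can surface the case and separator variants automatically. A minimal sketch, using the example names from above; note that the normalisation rule is an assumption, and semantically redundant names (such as "in_app_purchase" versus "purchase") still need a human pass:

```python
# Group event names that collapse to the same token once case and
# separators are stripped, and flag groups with more than one variant.
import re
from collections import defaultdict

def normalise(name: str) -> str:
    # Lower-case and remove non-alphanumerics so "Purchase", "purchase"
    # and "PURCHASE" compare equal.
    return re.sub(r"[^a-z0-9]", "", name.lower())

events = ["Purchase", "purchase", "PURCHASE", "in_app_purchase",
          "signup", "registration_complete"]

groups = defaultdict(list)
for event in events:
    groups[normalise(event)].append(event)

for variants in groups.values():
    if len(variants) > 1:
        print(f"Likely duplicates: {variants}")
# Likely duplicates: ['Purchase', 'purchase', 'PURCHASE']
```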
## The Lean Event Taxonomy: A Practical Framework
Here is a concrete framework for structuring your events, informed by the complete event taxonomy implementation guide.
**Core events (5-8 per app, tracked everywhere):** These are the events that define your user funnel. They are tracked in your MMP, sent as postbacks to ad platforms, and used in internal reporting. Examples: install, registration, activation milestone (varies by vertical), first purchase, revenue event with value. These events represent the meaningful steps in your user journey, from first touch to monetisation.
**Enrichment events (10-30 per app, tracked in MMP only, not sent as postbacks):** These provide analytical depth for your product and analytics teams but are not useful for ad algorithm optimisation. Examples: feature usage (opened settings, viewed profile), content engagement (article read, video watched), navigation events (screen views, tab switches), non-monetisation actions (wishlist adds, social shares). This is where "track everything" is perfectly fine. More granularity helps your product team understand behaviour. The important thing is that these events stay internal and do not get sent as postback signals.
**Postback events (sent to ad platforms, 3-5 maximum per platform):** Selected from your core events based on two criteria: sufficient weekly volume per ad set (the 50-per-week threshold on Meta) and clear business value (the event represents a meaningful conversion, not just engagement). These are the signals that train the ad algorithm to find your best users. If you get nothing else from this post, get this: the number of postback events is the single highest-leverage configuration decision in your attribution setup.
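One way to keep the three tiers honest is to encode them in a single config and assert the constraints. A minimal sketch, with example event names and a hypothetical dict layout:

```python
# The three-tier taxonomy as one config: postbacks must be a lean subset
# of core events, and enrichment events never leave the MMP.
taxonomy = {
    "core": [            # 5-8 events: tracked everywhere, used in reporting
        "install", "registration", "activation",
        "first_purchase", "revenue",
    ],
    "enrichment": [      # MMP-only: analytical depth, never sent as postbacks
        "opened_settings", "viewed_profile", "article_read",
        "video_watched", "wishlist_add", "social_share",
    ],
    "postbacks": {       # 3-5 per platform, drawn from core events only
        "meta":   ["install", "activation", "first_purchase", "revenue"],
        "google": ["install", "first_purchase"],
        "tiktok": ["install", "first_purchase", "revenue"],
    },
}

for platform, events in taxonomy["postbacks"].items():
    assert set(events) <= set(taxonomy["core"]), f"{platform}: not a core event"
    assert len(events) <= 5, f"{platform}: too many postback events"
```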
**Vertical-specific postback recommendations:**
| Vertical | Postback Events |
| --- | --- |
| Gaming | install, tutorial_complete, first_iap, revenue (with value) |
| Fintech | install, kyc_complete, first_transaction, revenue (with value) |
| eCommerce | install, add_to_cart, purchase, revenue (with value) |
| Subscription | install, trial_start, subscription_purchase, renewal |
A well-designed taxonomy also follows naming and structural principles that prevent drift over time. The event taxonomy design guide covers how to build a structure that remains clean as your app and team grow.
## Cleaning Up Without Losing Historical Data
The biggest fear teams have about cleaning up events is losing data. You will not. When you remove an event from postback configuration, you are not deleting it. The event continues to be tracked internally in your MMP. Historical data remains intact. You are only changing what gets sent to ad platforms as an optimisation signal.
**The transition process matters.** Do not remove all excess postback events in one go. Start by removing the lowest-volume, least-relevant events first. Give each ad platform two to three weeks to recalibrate after each change. Monitor three signals during the transition: CPA trends (should gradually improve as algorithms recalibrate with cleaner data), consolidated conversion volume on remaining postback events (should increase as formerly split conversions consolidate), and ROAS (the ultimate measure of whether cleaner signals are helping).
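It helps to write the staged plan down explicitly. A minimal sketch of the removal schedule described above, with illustrative event names and a three-week recalibration window per change:

```python
# Staged postback cleanup: remove one event at a time, lowest-volume first,
# and leave a recalibration window before the next change.
from datetime import date, timedelta

excess_events = ["cart_updated", "order_confirmed", "opened_settings"]
RECALIBRATION = timedelta(weeks=3)  # give each platform 2-3 weeks to settle

start = date.today()
for i, event in enumerate(excess_events):
    removal_date = start + i * RECALIBRATION
    print(f"{removal_date}: remove '{event}' from postbacks; then monitor "
          f"CPA (should fall), remaining-event volume (should rise), ROAS")
```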
Notify your ad platform account managers before making changes, and inform your analytics and product teams that internal tracking remains unchanged. Some teams find it useful to create a simple one-page document listing which events are tracked internally versus which are sent as postbacks per platform, so there is no confusion across functions.
If you launch a new product feature, evaluate it by the same criteria: sufficient weekly volume and clear business value. Do not add events reflexively. Platforms like Linkrunner make this separation explicit by letting teams track comprehensively for analytics whilst configuring a focused postback subset per platform.
## Frequently Asked Questions
**How many in-app events should I send as postbacks to Meta and Google?**
Three to five events per platform. Focus on events with high weekly volume (50+ per ad set on Meta) and clear business value. More dilutes the signal.
**Is it okay to track many events in my MMP as long as I limit postbacks?**
Yes. Tracking 30-50 events internally causes no harm. The dilution problem is specific to postback signals sent to ad platforms.
**How do I know if event bloat is hurting my performance?**
Check whether any postback events average fewer than 50 conversions per ad set per week on Meta. If multiple events fall below this threshold, your campaigns are stuck in learning phase.
**What happens to historical data if I stop tracking events?**
Historical data remains in your MMP's database. Stop postbacks first, evaluate impact, then decide whether to remove internal tracking.
**Should I send the same postback events to every platform?**
No. Meta and TikTok benefit from revenue-based optimisation. Google UAC responds well to a single, high-volume conversion event. Tailor per platform.
## Getting Your Signal Right
Event bloat is fixable. The audit takes a day. The postback cleanup takes an hour. The performance improvement typically shows within 4-6 weeks.
Start with the audit. List every postback event. Check volumes. Remove anything that does not hit thresholds or drive decisions. Then monitor for CPA improvement, consolidated conversion volume, and ROAS gains.
If you want a platform that makes the separation between analytics events and postback events explicit and easy to manage, Linkrunner is built for exactly this workflow. Request a demo today.
Fewer signals, chosen well, will always outperform more signals chosen carelessly. Your algorithm is only as smart as the data you feed it.



