The Marketing Budget Reallocation Framework using Attribution Data

Lakshith Dinesh

Updated on: Dec 26, 2025

You've got ₹200,000 allocated across Meta, Google, and TikTok this month. Campaign A shows 2,000 installs at ₹5 CPI. Campaign B shows 800 installs at ₹12 CPI. Your CEO asks which one's actually working. Your answer? "Let me pull the data and get back to you."

This is the budget allocation paralysis that plagues mobile marketing teams. Not because they lack data, but because they lack a decision framework that turns attribution signals into confident spend moves. Teams sit on underperforming campaigns for weeks because they're not sure when to kill them. They hesitate to scale winning campaigns because they don't trust the attribution accuracy. They test new channels without clear rules for when to double down or cut losses.

The result? Marketing budgets drift toward mediocrity. High-performing campaigns stay underfunded. Poor performers keep burning cash. And the weekly budget review becomes a guessing game instead of a data-driven reallocation routine.

This guide gives you the operating system mobile marketers actually need: a weekly decision framework that uses attribution data to confidently kill, scale, or test campaigns based on clear signals, not gut feel.

Why Most Teams Struggle with Budget Reallocation

The attribution data exists. Most MMPs provide install attribution, cost data, and downstream event tracking. Yet marketing teams still struggle to make weekly budget moves with confidence.

Here's what we see when auditing real-world setups across 50+ growth teams:

Signal overload without decision criteria. Teams track 15 different metrics but have no agreed-upon threshold for when a campaign gets killed versus scaled. One week, the decision is based on CPI. The next week, it's ROAS. The framework changes based on who's in the meeting.

Attribution lag creates decision paralysis. iOS attribution through SKAN can take 24-72 hours to surface. Downstream events (signups, purchases, revenue) take even longer. Teams wait for "complete data" that never arrives, missing the narrow window when budget moves actually matter.

Cohort visibility gaps. Most dashboards show aggregate campaign performance but hide the critical question: are users acquired this week behaving like users acquired last month? Without cohort comparison, teams can't tell if a campaign's declining performance is signal or noise.

Cross-channel comparison friction. Pulling ROAS data from Meta, Google, and TikTok into a unified view requires spreadsheet work. By the time the comparison is ready, the budget cycle has moved on.

The fix isn't more data. It's a repeatable framework that turns attribution signals into three clear decisions every week: kill, scale, or test.

The Weekly Budget Reallocation Framework

This framework runs on a seven-day cycle. Every Monday, you pull attribution data from the previous week and make three types of decisions based on clear thresholds. The goal isn't perfection, it's velocity. Better to reallocate 80% correctly than to wait for 100% certainty and never move budget at all.

Core Decision Categories

Kill decisions: Stop campaigns that fail minimum thresholds after sufficient data collection (typically 500-1,000 installs for statistical significance). Budget reallocated to existing winners or new tests.

Scale decisions: Increase spend on campaigns that consistently exceed target metrics across multiple cohorts. Budget pulled from killed campaigns or next month's allocation.

Test decisions: Launch new campaigns with fixed budgets and clear success criteria. Evaluated after hitting minimum install volume, then moved to kill or scale.

The key is separating these three decision types with different data requirements and different speed expectations.

Step 1: Establish Your Decision Thresholds

Before pulling any attribution data, define clear numeric thresholds for each decision type. These thresholds should reflect your unit economics, not arbitrary industry benchmarks.

Calculating Your Kill Threshold

Start with your maximum acceptable customer acquisition cost. If your target CAC is ₹15 and your average revenue per user in the first 30 days is ₹45, your minimum acceptable ROAS is 3:1.

Your kill threshold sits below this line. Example thresholds:

  • CPI threshold: ₹8 maximum (if your blended target CPI is ₹5)

  • D7 ROAS threshold: 1.5:1 minimum (if your target is 2.5:1)

  • Signup rate threshold: 25% minimum (if your target is 40%)

Set the kill threshold at 60-70% of your target for metrics where higher is better (ROAS, signup rate); for cost metrics like CPI, invert the buffer and allow roughly 1.5-1.7× your target (₹8 against a ₹5 target) before killing. This creates a buffer that accounts for attribution lag and cohort maturity while still protecting against runaway losses.

Minimum data requirement for kill decisions: 500 installs or ₹2,500 spent, whichever comes first. Below this threshold, variance is too high for confident decisions.
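
To make the arithmetic concrete, here's a minimal sketch of deriving kill lines from the example targets above. The 60% buffer is an illustrative choice, and the worked example above rounds the results to ₹8, 1.5:1 and 25%.

```python
# Sketch: derive kill thresholds from target metrics (illustrative numbers).
# Value metrics (ROAS, signup rate) are cut at ~60% of target; cost metrics
# (CPI) are inverted, allowing roughly 1.67x target before a kill.

TARGETS = {"cpi": 5.0, "d7_roas": 2.5, "signup_rate": 0.40}

def kill_thresholds(targets: dict, buffer: float = 0.60) -> dict:
    return {
        "cpi_max": round(targets["cpi"] / buffer, 2),                   # 5.0 / 0.60 ≈ 8.33
        "d7_roas_min": round(targets["d7_roas"] * buffer, 2),           # 2.5 * 0.60 = 1.50
        "signup_rate_min": round(targets["signup_rate"] * buffer, 2),   # 0.40 * 0.60 = 0.24
    }

print(kill_thresholds(TARGETS))
# ≈ {'cpi_max': 8.33, 'd7_roas_min': 1.5, 'signup_rate_min': 0.24}
```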

Calculating Your Scale Threshold

Scale thresholds sit above your target metrics, creating a clear "winner" category that justifies increased spend.

Example scale thresholds:

  • CPI threshold: ₹3 maximum (if your target is ₹5)

  • D7 ROAS threshold: 3.5:1 minimum (if your target is 2.5:1)

  • Signup rate threshold: 55% minimum (if your target is 40%)

Set scale thresholds at 120-140% of your target for metrics where higher is better; for cost metrics like CPI, require roughly 60-80% of target (₹3 against a ₹5 target). This ensures you're scaling campaigns that genuinely outperform, not just meet expectations.

Minimum data requirement for scale decisions: 1,000 installs across at least two weekly cohorts showing consistent performance. One lucky week isn't enough signal.

Calculating Your Test Threshold

Test campaigns get fixed budgets and time-limited evaluation windows. Define these upfront:

  • Test budget: ₹1,500-₹3,000 per campaign (enough for roughly 300-500 installs at a ₹5 target CPI)

  • Test duration: 7-14 days maximum

  • Success criteria: Must exceed kill threshold after spending full budget, otherwise eliminated

Testing isn't open-ended exploration. It's structured validation with clear decision gates.
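
It helps to write all of these down as one config the Monday review reads from, rather than re-deriving them in the meeting. A minimal Python sketch using the example numbers from this step; every figure here is an assumption to be replaced with your own unit economics.

```python
# Sketch: a single threshold config shared by the weekly review.
# All numbers mirror the examples in this guide; substitute your own.
from dataclasses import dataclass

@dataclass(frozen=True)
class Thresholds:
    # Targets (from unit economics)
    target_cpi: float = 5.0
    target_d7_roas: float = 2.5
    target_signup_rate: float = 0.40
    # Kill line (below target on value metrics, above target on CPI)
    kill_cpi_max: float = 8.0
    kill_d7_roas_min: float = 1.5
    kill_signup_rate_min: float = 0.25
    kill_min_installs: int = 500
    kill_min_spend: float = 2500.0      # apply kill rules once either floor is reached
    # Scale line (120-140% of target on value metrics, ~60% of target on CPI)
    scale_cpi_max: float = 3.0
    scale_d7_roas_min: float = 3.5
    scale_signup_rate_min: float = 0.55
    scale_min_installs: int = 1000
    scale_min_cohorts: int = 2          # consistent performance across >= 2 weekly cohorts
    # Test gates
    test_budget: float = 2000.0
    test_max_days: int = 14

THRESHOLDS = Thresholds()
```

The decision sketches in Step 3 assume this config.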

Step 2: Pull Weekly Attribution Data with the Right Cuts

Every Monday morning, pull attribution data for campaigns that ran the previous week. The specific cuts matter because aggregate numbers hide the decision signals.

Essential Data Cuts for Budget Decisions

Campaign-level performance: Total spend, installs, CPI, and downstream events (signups, purchases, revenue) broken out by individual campaign. Not aggregated by channel.

Cohort comparison: Week-over-week cohort behaviour for campaigns that have been running multiple weeks. Are users acquired last week converting at similar rates to users acquired two weeks ago?

Creative-level breakdown: For campaigns with multiple creatives, which specific ad variations drive install quality? Poor creative performance often explains campaign-level underperformance.

Channel saturation signals: Are CPIs rising week-over-week on the same campaign settings? Rising acquisition costs indicate audience saturation, even if absolute ROAS still looks acceptable.

Most MMPs provide these cuts, but teams often look at the wrong dashboard view. You need campaign-to-revenue funnel visibility with weekly cohort splits, not just aggregated "last 30 days" snapshots.

Platforms like Linkrunner surface these cuts in a single dashboard view, eliminating spreadsheet work. But the framework works regardless of tooling if you commit to pulling the same cuts every Monday.
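
As a rough illustration of the Monday pull, here's what the cuts might look like in Python against a flat attribution export. The schema (one row per campaign per ISO week with spend, installs, signups, d7_revenue) is an assumption; every MMP exports slightly differently.

```python
# Sketch: Monday data cuts from a flat attribution export (assumed schema:
# one row per campaign per ISO week with spend, installs, signups, d7_revenue).
import pandas as pd

def weekly_cuts(df: pd.DataFrame) -> pd.DataFrame:
    """Campaign x week view with CPI, D7 ROAS, signup rate and a saturation signal."""
    cuts = (
        df.groupby(["campaign", "week"], as_index=False)
          .agg(spend=("spend", "sum"),
               installs=("installs", "sum"),
               signups=("signups", "sum"),
               d7_revenue=("d7_revenue", "sum"))
    )
    cuts["cpi"] = cuts["spend"] / cuts["installs"]
    cuts["d7_roas"] = cuts["d7_revenue"] / cuts["spend"]
    cuts["signup_rate"] = cuts["signups"] / cuts["installs"]
    # Saturation signal: CPI rising week over week on unchanged campaign settings.
    cuts = cuts.sort_values(["campaign", "week"])
    cuts["cpi_wow_change"] = cuts.groupby("campaign")["cpi"].pct_change()
    return cuts
```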

Step 3: Apply the Decision Framework

With thresholds defined and data pulled, apply the decision logic systematically.

Kill Decision Logic

Scenario 1: Campaign exceeds kill threshold on multiple metrics

  • Spent ₹3,000, generated 600 installs at ₹5 CPI (within threshold)

  • D7 ROAS: 0.8:1 (below 1.5:1 kill threshold)

  • Signup rate: 18% (below 25% kill threshold)

  • Decision: Kill immediately. Reallocate budget to existing scale candidates.

Scenario 2: Campaign exceeds kill threshold on one metric but meets targets on others

  • Spent ₹4,000, generated 500 installs at ₹8 CPI (at kill threshold)

  • D7 ROAS: 2.8:1 (above kill threshold, near target)

  • Signup rate: 42% (above kill threshold)

  • Decision: Reduce budget by 50%, monitor for one more week. High signup rate and ROAS suggest quality users despite elevated CPI. Possible audience saturation that may normalise with reduced spend.

Scenario 3: Campaign shows declining cohort performance

  • Week 1 cohort: D7 ROAS 3.2:1

  • Week 2 cohort: D7 ROAS 2.4:1

  • Week 3 cohort: D7 ROAS 1.6:1 (approaching kill threshold)

  • Decision: Kill or rebuild with different creative/audience. Consistent decline indicates creative fatigue or audience saturation. Don't wait for full failure.
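
The kill scenarios above reduce to a short rule set. Here's one way to encode them, assuming a list of weekly cohort summaries per campaign (oldest first, each with spend, installs, cpi, d7_roas, signup_rate) and the Thresholds config from Step 1; the "two or more breached metrics" and "three declining cohorts" rules are interpretations of the scenarios, not hard law.

```python
# Sketch: kill logic over a campaign's weekly cohort summaries (oldest first).
def kill_decision(cohorts: list[dict], t: Thresholds) -> str:
    installs = sum(c["installs"] for c in cohorts)
    spend = sum(c["spend"] for c in cohorts)
    if installs < t.kill_min_installs and spend < t.kill_min_spend:
        return "hold: not enough data for a confident kill decision"

    latest = cohorts[-1]
    breaches = [
        latest["cpi"] > t.kill_cpi_max,
        latest["d7_roas"] < t.kill_d7_roas_min,
        latest["signup_rate"] < t.kill_signup_rate_min,
    ]
    if sum(breaches) >= 2:
        return "kill: multiple metrics below the kill threshold (Scenario 1)"

    # Scenario 3: three consecutive cohorts with declining D7 ROAS.
    roas = [c["d7_roas"] for c in cohorts[-3:]]
    if len(roas) == 3 and roas[0] > roas[1] > roas[2]:
        return "kill or rebuild: consistent cohort decline (Scenario 3)"

    if sum(breaches) == 1:
        return "reduce budget ~50% and monitor one more week (Scenario 2)"
    return "keep: above kill thresholds"
```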

Scale Decision Logic

Scenario 1: Campaign consistently exceeds scale thresholds

  • Running for 4 weeks, 2,500 total installs

  • Average CPI: ₹2.80 (below the ₹3 scale threshold)

  • D7 ROAS across all cohorts: 3.8:1 to 4.2:1 (above 3.5:1 scale threshold)

  • Signup rate: 58% (above 55% scale threshold)

  • Decision: Increase budget by 50-100%. Monitor daily for 3 days to confirm CPIs don't spike with increased spend.

Scenario 2: Campaign exceeds scale thresholds but shows saturation signals

  • Week 1-3: CPI ₹2.80, D7 ROAS 4.1:1

  • Week 4: CPI ₹4.20, D7 ROAS 3.3:1 (still above kill threshold but declining)

  • Decision: Hold budget flat or increase by 20% maximum. Saturation is emerging. Prepare replacement creative or audience expansion rather than aggressive scaling.

Scenario 3: Campaign exceeds scale thresholds on one channel but not others

  • Meta Campaign A: ₹2.80 CPI, 4.2:1 D7 ROAS (scale candidate)

  • Google Campaign A (same targeting): ₹7.50 CPI, 1.8:1 D7 ROAS (near kill threshold)

  • Decision: Scale Meta aggressively, kill or restructure Google. Same targeting performing differently across channels indicates platform-specific optimisation issues, not audience problems.
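
A matching sketch for the scale logic, with the same cohort-summary assumption. The saturation cut-offs (CPI up more than 20%, or D7 ROAS down more than 15%, versus the prior cohort) are illustrative choices.

```python
# Sketch: scale logic over a campaign's weekly cohort summaries (oldest first).
def scale_decision(cohorts: list[dict], t: Thresholds) -> str:
    installs = sum(c["installs"] for c in cohorts)
    if installs < t.scale_min_installs or len(cohorts) < t.scale_min_cohorts:
        return "hold: need 1,000+ installs across at least two weekly cohorts"

    recent = cohorts[-t.scale_min_cohorts:]
    consistently_strong = all(
        c["cpi"] <= t.scale_cpi_max
        and c["d7_roas"] >= t.scale_d7_roas_min
        and c["signup_rate"] >= t.scale_signup_rate_min
        for c in recent
    )
    # Saturation signal: latest cohort clearly worse than the one before it.
    latest, prior = cohorts[-1], cohorts[-2]
    saturating = (latest["cpi"] > prior["cpi"] * 1.20
                  or latest["d7_roas"] < prior["d7_roas"] * 0.85)

    if consistently_strong and not saturating:
        return "scale: increase budget 50-100%, monitor daily for 3 days (Scenario 1)"
    if consistently_strong and saturating:
        return "hold or +20% max: saturation emerging, refresh creative/audience (Scenario 2)"
    return "no scale: does not clear scale thresholds across cohorts"
```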

Test Decision Logic

Scenario 1: Test campaign hits success criteria

  • Test budget: ₹2,000 over 10 days

  • Results: 450 installs at ₹4.40 CPI, D7 ROAS 2.6:1 (clears the kill threshold and just tops the 2.5:1 target)

  • Decision: Graduate to scale candidate with 30-day budget allocation. Monitor weekly for consistent performance.

Scenario 2: Test campaign fails minimum thresholds

  • Test budget: ₹2,000 over 10 days

  • Results: 320 installs at ₹6.25 CPI, D7 ROAS 1.2:1 (below kill threshold)

  • Decision: Kill immediately. No second chances for failed tests unless you're changing creative, audience, or messaging entirely.

Scenario 3: Test campaign shows mixed signals

  • Test budget: ₹2,000 over 10 days

  • Results: 400 installs at ₹5 CPI (at target), D7 ROAS 2.2:1 (below target but above kill threshold)

  • Decision: Extend test by one week with additional ₹1,000 budget. Mixed signals need one more cohort for confidence. If Week 2 doesn't improve, kill.
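
And the test gates as a sketch against the same Step 1 config. The graduate condition here, "meets target CPI and target D7 ROAS once the fixed budget or window is spent", is how these three scenarios read; adjust it if your success criteria differ.

```python
# Sketch: test evaluation gates once the fixed budget or time window is exhausted.
def test_decision(spent: float, days: int, cpi: float, d7_roas: float,
                  t: Thresholds) -> str:
    if spent < t.test_budget and days < t.test_max_days:
        return "running: wait until the fixed budget or time window is exhausted"
    if d7_roas < t.kill_d7_roas_min or cpi > t.kill_cpi_max:
        return "kill: failed minimum thresholds (Scenario 2)"
    if d7_roas >= t.target_d7_roas and cpi <= t.target_cpi:
        return "graduate: scale candidate with a 30-day allocation (Scenario 1)"
    return "extend: one more week with a small top-up; kill if the next cohort doesn't improve (Scenario 3)"
```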

Step 4: Execute Budget Moves with Speed

The framework only works if decisions translate into actual budget changes within 24-48 hours. Attribution data decays in value quickly. Insights from Monday's review need to be live in campaigns by Wednesday.

Operationalising Kill Decisions

Pause campaigns immediately in ad platform. Don't just reduce budget, pause entirely. Partial budget cuts create ambiguous data that makes future decisions harder.

Document kill reasons in a shared tracker. Example: "Killed Meta Campaign [ID] on [date]. Reason: D7 ROAS 0.9:1 after ₹4,000 spend, below 1.5:1 kill threshold. Budget reallocated to Google Campaign [ID]."

This creates institutional memory. Three months later when someone asks why a campaign was killed, the decision logic is clear.
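
A minimal sketch of such a tracker, appending one row per decision to a shared CSV; the file name and columns are illustrative, and a shared sheet works just as well.

```python
# Sketch: append every budget decision to a shared CSV tracker.
import csv
from datetime import date
from pathlib import Path

TRACKER = Path("budget_decisions.csv")       # illustrative location
FIELDS = ["date", "campaign_id", "decision", "reason", "budget_moved_to"]

def log_decision(campaign_id: str, decision: str, reason: str,
                 budget_moved_to: str = "") -> None:
    is_new = not TRACKER.exists()
    with TRACKER.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "campaign_id": campaign_id,
            "decision": decision,
            "reason": reason,
            "budget_moved_to": budget_moved_to,
        })

# Example:
# log_decision("meta_campaign_a", "kill",
#              "D7 ROAS 0.9:1 after ₹4,000 spend, below 1.5:1 kill threshold",
#              budget_moved_to="google_campaign_b")
```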

Operationalising Scale Decisions

Increase budgets gradually, even for clear winners. A 50% budget increase is safer than 200%, especially on Meta where algorithm re-learning can spike CPIs temporarily.

Monitor daily for 72 hours after budget increase. If CPIs spike above your scale threshold, roll back to previous budget level. Not all winning campaigns can absorb increased spend without performance degradation.

Set hard caps on scale moves. Never allocate more than 40% of total monthly budget to a single campaign, no matter how well it performs. Concentration risk is real. Accounts get paused, algorithms shift, and creative fatigues.
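
Both guard-rails are easy to make mechanical so nobody argues about them mid-meeting. A sketch with the 40% concentration cap and the gradual-increase rule from above; reading "roll back" as three consecutive post-scale days with CPI above the scale threshold is an assumption.

```python
# Sketch: guard-rails for scale moves (concentration cap + 72-hour rollback rule).
def approve_scale(current_budget: float, proposed_budget: float,
                  monthly_budget: float, max_share: float = 0.40,
                  max_step: float = 1.0) -> float:
    """Return the budget actually approved: at most +100% per step and 40% of the month."""
    capped = min(proposed_budget, current_budget * (1 + max_step))
    return min(capped, monthly_budget * max_share)

def should_roll_back(post_scale_daily_cpis: list[float], scale_cpi_max: float) -> bool:
    """Roll back if CPI stays above the scale threshold for the full 72-hour watch window."""
    return (len(post_scale_daily_cpis) >= 3
            and all(cpi > scale_cpi_max for cpi in post_scale_daily_cpis[-3:]))
```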

Operationalising Test Decisions

Run only 2-3 tests simultaneously. More tests create cognitive overload and budget fragmentation. Better to run fewer tests with proper budgets than scatter ₹500 across ten campaigns and never get statistical significance.

Set calendar reminders for test evaluation dates. Tests that run indefinitely without decision gates become zombie campaigns that waste budget.

Common Mistakes That Break the Framework

Even with clear thresholds and weekly reviews, teams make predictable errors that undermine budget reallocation discipline.

Mistake 1: Waiting for perfect attribution data. SKAN attribution lags. Downstream event tracking has delays. Teams wait for "complete" data and miss the reallocation window. Better to make directional moves with 80% data than wait for 100% certainty that arrives too late to matter.

Mistake 2: Ignoring cohort trends in favour of aggregate numbers. A campaign averaging 2.5:1 ROAS looks fine until you notice Week 1 was 4:1, Week 2 was 2.2:1, and Week 3 was 1.3:1. The trend is clear, but aggregate numbers hide it.

Mistake 3: Scaling campaigns without checking creative fatigue. High ROAS this week doesn't guarantee high ROAS next week if you're running the same creative to the same audience for six weeks straight. Frequency data and creative rotation discipline matter.

Mistake 4: Applying the same thresholds to brand-new campaigns and mature campaigns. New campaigns need 3-5 days to clear the ad platform's learning phase. Killing them after 48 hours because Day 1 CPIs are high misses the learning curve. Apply kill thresholds only after minimum install volume.

Mistake 5: Moving budget without documenting decisions. Three months later, your team won't remember why Campaign X was killed or Campaign Y was scaled. Document every decision with specific metrics so future reviews can validate the framework accuracy.

How to Measure Framework Success

The framework itself needs measurement. Track these meta-metrics monthly to validate whether your reallocation discipline is improving outcomes:

Budget velocity: What percentage of your monthly marketing budget moved (killed, scaled, or reallocated) based on attribution data? Target: 20-40% monthly. Below 20% suggests paralysis. Above 40% suggests instability.

Decision latency: How many days between attribution data becoming available and budget changes going live? Target: 1-3 days maximum. Longer latency indicates process friction.

Kill accuracy: Of campaigns killed in Month 1, what percentage would have remained below kill thresholds if they'd been allowed to continue in Month 2? Check this by comparing cohort behaviour. Target: 80%+ accuracy. Lower accuracy suggests thresholds are too aggressive.

Scale accuracy: Of campaigns scaled in Month 1, what percentage maintained performance above scale thresholds in Month 2? Target: 70%+ accuracy. Lower accuracy suggests premature scaling or insufficient cohort data before scale decisions.

Blended metric improvement: Is your overall marketing efficiency improving quarter-over-quarter? Track blended CPI, blended ROAS, and blended D30 LTV across all campaigns. The framework should drive these metrics upward as you kill losers and scale winners systematically.
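
If decisions live in the Step 4 tracker, these meta-metrics fall out of it with a few lines of analysis. A sketch, assuming hypothetical columns: decision, spend_moved, data_ready and executed dates, and a boolean held_up_in_month_2 filled in during the following month's validation check.

```python
# Sketch: monthly framework meta-metrics from the decision tracker
# (column names are assumptions, see above).
import pandas as pd

def framework_metrics(decisions: pd.DataFrame, monthly_budget: float) -> dict:
    moved = decisions.loc[
        decisions["decision"].isin(["kill", "scale", "reallocate"]), "spend_moved"
    ].sum()
    latency = (pd.to_datetime(decisions["executed"])
               - pd.to_datetime(decisions["data_ready"])).dt.days
    kills = decisions[decisions["decision"] == "kill"]
    scales = decisions[decisions["decision"] == "scale"]
    return {
        "budget_velocity": moved / monthly_budget,                # target 0.20-0.40
        "decision_latency_days": latency.mean(),                  # target <= 3
        "kill_accuracy": (~kills["held_up_in_month_2"]).mean(),   # target >= 0.80
        "scale_accuracy": scales["held_up_in_month_2"].mean(),    # target >= 0.70
    }
```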

Tools and Resources You Need

The framework works with any MMP that provides campaign-level attribution and downstream event tracking. Minimum requirements:

Attribution data freshness: Daily updates for Android, 24-72 hour lag for iOS SKAN (unavoidable platform limitation)

Export capability: Ability to export raw attribution data for spreadsheet analysis if your dashboard doesn't provide weekly cohort cuts natively

Postback reliability: Accurate downstream event attribution (signups, purchases, revenue) sent back to ad platforms for algorithm optimisation

Multi-channel unification: Ability to pull Meta, Google, TikTok, and other channel data into unified dashboard for cross-channel comparison

If your current MMP makes these cuts difficult to access, you're adding friction to the framework that slows decision velocity. Platforms like Linkrunner are built specifically for this workflow: unified campaign intelligence showing click-to-revenue funnels with weekly cohort breakdowns, creative-level ROAS visibility, and automated postback optimisation that helps ad algorithms learn toward revenue-generating users, not just install volume. But the framework itself is tool-agnostic. The discipline matters more than the dashboard.

The First 30 Days: Implementation Roadmap

Week 1: Establish thresholds

  • Calculate your kill, scale, and test thresholds based on unit economics

  • Document thresholds in shared team resource

  • Pull last 30 days of attribution data and apply thresholds retroactively to validate they make sense

  • Adjust thresholds if needed based on historical data review

Week 2: First weekly review

  • Pull Monday attribution data using framework cuts

  • Apply kill/scale/test decision logic

  • Execute budget moves by Wednesday

  • Document all decisions in tracker

Week 3: Second weekly review

  • Repeat Monday review with previous week's data

  • Check if Week 1 scale decisions maintained performance (72-hour check)

  • Refine thresholds if early decisions feel too aggressive or too conservative

Week 4: Framework validation

  • Compare Week 2 cohort performance to Week 3 cohort performance for campaigns that were scaled

  • Check if Week 1 kill decisions would have remained below thresholds if continued (validation check)

  • Calculate decision latency: how long did budget moves take?

  • Adjust process friction points (approval workflows, dashboard access, etc.)

Day 30: Blended metric check

  • Calculate blended CPI, ROAS, and D30 LTV across all campaigns

  • Compare to Day 0 baseline

  • Framework should show directional improvement even after just 30 days

Frequently Asked Questions

What if my attribution data is unreliable or incomplete?

Start by measuring the gap between install attribution (typically 85-95% accurate for Android, 60-75% for iOS post-ATT) and downstream event attribution. If your downstream event attribution rate is below 60%, you have a measurement infrastructure problem that needs fixing before applying this framework.

For iOS specifically, SKAN 4.0 provides coarse-value and fine-value conversion data. Use coarse values for kill decisions (binary signal: converting vs not converting) and reserve fine-value optimisation for scale decisions once you have volume.

How do I handle campaigns with long conversion windows?

Adjust your evaluation timeline. If your product has 14-day free trials before purchase, use D21 ROAS instead of D7 ROAS for kill/scale thresholds. The framework adapts to your business model, but you need sufficient time for conversion behaviour to surface before making decisions.

For products with very long conversion windows (30+ days), use leading indicators like signup rate, Day 1 retention, or first session duration as early-signal proxies until revenue data matures.

Should I use different thresholds for different channels?

Yes, within reason. Google App Campaigns typically have higher CPIs but better intent signal (users searching for solutions). TikTok often has lower CPIs but requires more creative testing volume. Set channel-specific thresholds based on historical performance, but don't create so many threshold variations that the framework becomes unmanageable.

How do I handle seasonality in attribution data?

Track week-over-week cohort performance, not week-over-same-week-last-year. Seasonality affects all campaigns simultaneously. If your entire portfolio shows declining ROAS in December due to holiday competition, don't kill everything. Look for relative underperformers within the seasonal context.

What if I don't have enough budget to run meaningful tests?

Reduce your test budget requirement to the minimum statistically significant level for your product. For most apps, 300-500 installs provides directional signal. At ₹5 CPI, that's ₹1,500-₹2,500 per test. If your monthly budget is below ₹10,000 total, focus on optimising existing campaigns rather than running new tests.

How do I convince leadership to kill campaigns that are "breaking even"?

Breaking even isn't success, it's capital inefficiency. Every rupee in a break-even campaign is a rupee not allocated to a winning campaign. Frame kill decisions in opportunity-cost terms: "This campaign generates 1.1:1 ROAS. Our best campaign generates 4:1 ROAS. By reallocating this budget, we expect nearly 4× the revenue from the same spend."

Putting It All Together

The marketing budget reallocation framework is simple in concept but requires operational discipline. Most teams have the attribution data. What they lack is the decision structure that turns data into confident weekly budget moves.

Define your thresholds based on unit economics. Pull the same attribution cuts every Monday. Apply kill/scale/test logic systematically. Execute budget changes within 48 hours. Measure framework accuracy monthly.

The teams that run this framework consistently see 25-40% improvement in blended ROAS within 90 days, not because they discover magical new channels, but because they systematically kill losers, scale winners, and test with discipline instead of hope.

If you'd like to operationalise this framework without building custom attribution dashboards and spreadsheet workflows, request a demo from Linkrunner. The platform is built specifically for this decision routine: unified campaign intelligence with weekly cohort visibility, creative-level ROAS breakdowns, and automated postback optimisation, all at ₹0.80 per install for the India market. But the framework works regardless of tooling. Start with clear thresholds and weekly discipline. The rest follows.
