What Is Incrementality Testing? A Practical Guide for Mobile App Marketers in 2026

Lakshith Dinesh

Updated on: Mar 16, 2026

You are spending Rs15 lakh a month across Meta and Google. Your MMP says both channels are delivering positive ROAS. But when you pause Meta for a week in Pune, total installs barely drop. That gap between what attribution reports and what actually happened is exactly what incrementality testing measures.

Incrementality testing answers the question every growth marketer should be asking: "Would these users have converted anyway, even without my ads?" Attribution tells you who clicked what. Incrementality tells you what your money actually caused.

If you have scaled past Rs10 lakh in monthly spend across three or more channels, and your ROAS looks healthy but revenue does not scale proportionally when you increase budgets, this guide will walk you through what incrementality testing is, how it works, and how to design your first test without needing a data science team.

## What Is Incrementality Testing (And Why Attribution Alone Can't Answer the Real Question)?

Incrementality testing measures the **causal lift** of a marketing activity. Instead of tracking which ad a user clicked before installing (that is attribution), incrementality asks: "How many of these installs would not have happened without this campaign?"

This is the fundamental question that attribution models, whether [single-touch or multi-touch](https://linkrunner.io/blog/multi-touch-attribution-vs-single-touch-which-model-actually-drives-better-decisions), cannot answer on their own. Attribution assigns credit. Incrementality proves causation.

Consider a branded Google UAC campaign. Your MMP reports it drove 3,000 installs last month at Rs25 CPI. Solid numbers. But a large portion of those users were already searching for your brand name. They would have found you in organic results. Attribution gives the campaign full credit. Incrementality would show that the true incremental contribution might be only 40% of the reported figure.

As one [contrarian perspective on last-click attribution](<https://linkrunner.io/blog/why-last-click-attribution-is-actually-fine-for-most-growing-apps-(a-contrarian-view)>) argues, simpler models work well for execution. But for strategic budget decisions at scale, you need to know what your spend is actually causing.

**Quick comparison: Attribution vs Incrementality**

| Dimension | Attribution | Incrementality Testing |
| --- | --- | --- |
| What it measures | Who gets credit for a conversion | Whether the conversion would have happened without the ad |
| Methodology | Click/view tracking, last-touch or multi-touch models | Controlled experiments (holdout, geo-lift, on/off) |
| Speed | Real-time or near real-time | Weeks to months per test cycle |
| Best for | Daily campaign optimisation, creative decisions | Strategic budget allocation, channel-level investment decisions |
| Limitation | Assumes correlation equals causation | Requires sufficient volume and test duration for significance |

## How Incrementality Testing Works: The Core Methods

There is no single way to run an incrementality test. The right method depends on your spend level, channel mix, and how much disruption you can tolerate. Here are the four primary approaches.

**Geo-lift tests** are the most common method for mobile app teams. You select a set of regions (cities or states) where you pause or reduce spend on a specific channel, while keeping spend active in comparable control regions. After two to four weeks, you compare install and revenue trends between test and control regions to measure the true lift your spend created.

Geo-lift works well in India because city-tier segmentation gives you natural control groups. You might pause Meta spend in Hyderabad and Pune while keeping it active in Bangalore and Chennai, then compare outcomes.
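
To make the readout concrete, here is a minimal difference-in-differences sketch in Python. All figures are illustrative placeholders, not real campaign data; the idea is to use the control regions' trend to estimate what the test regions would have done had spend stayed on.

```python
# Hypothetical geo-lift readout: spend paused in test cities,
# unchanged in control cities. All numbers are illustrative.

test_before, test_during = 1200, 950    # avg daily installs, test geos
ctrl_before, ctrl_during = 1100, 1080   # avg daily installs, control geos

# Counterfactual: scale the test baseline by the control group's trend
# to estimate installs the test geos would have seen with spend on.
expected_test = test_before * (ctrl_during / ctrl_before)

incremental_installs = expected_test - test_during
lift_pct = incremental_installs / expected_test * 100
print(f"Estimated incremental installs/day: {incremental_installs:.0f} "
      f"({lift_pct:.1f}% of the expected baseline)")
```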

**Holdout or ghost ads** involve serving non-branded or blank ads to a randomised control group while showing real ads to the test group. This is more precise than geo-lift but harder to implement at scale, and not all ad platforms support it natively.

**On/off tests** are the simplest approach. You pause a channel entirely for a defined period and measure total install volume. If you turn off Meta for two weeks and total installs drop by only 10% while Meta was reportedly driving 35% of installs, the 25-point gap is your cannibalisation or overlap: installs the channel claimed credit for that would have arrived anyway.
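
Using the article's numbers as a hedged illustration, a few lines of Python turn an on/off result into an estimate of the non-incremental share of attributed installs:

```python
reported_share = 0.35   # share of installs attributed to Meta by the MMP
observed_drop = 0.10    # actual drop in total installs with Meta off

# Attributed installs that kept arriving even with the channel off
overlap_points = reported_share - observed_drop
cannibalized_fraction = overlap_points / reported_share
print(f"~{cannibalized_fraction:.0%} of Meta-attributed installs "
      "were likely not incremental")
```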

**Matched market testing** pairs similar cities or regions based on demographic and behavioural profiles, then applies different spend strategies to each pair. This gives you cleaner comparisons than broad geo-lift but requires more careful market selection. The upside is that matched markets control for variables broad geo-lift sometimes struggles to isolate: population size, income levels, and app maturity. The downside is that finding truly comparable markets in India takes work: a Tier 1 city in the south does not behave like a Tier 1 city in the north, even at similar population levels.

Each method has minimum spend thresholds. Geo-lift and on/off tests typically need Rs10 lakh or more per month in the channel being tested to generate enough volume for statistically meaningful results. Below that, the noise overwhelms the signal.

## When Should You Start Running Incrementality Tests?

Incrementality testing is not something every app team needs from day one. It becomes valuable at a specific stage of growth, and running it too early wastes time without generating actionable results.

**You are ready for incrementality testing when:**

- You are spending Rs10 lakh or more per month across three or more paid channels

- Your attribution dashboard shows positive ROAS across most channels, but scaling spend does not increase total revenue proportionally

- You suspect branded search campaigns are claiming credit for organic conversions

- Your CFO or finance team is asking for proof that marketing spend is generating genuinely new customers

- You have at least six months of consistent attribution data to establish baselines

**You are not ready for incrementality testing when:**

- You run campaigns on a single channel with less than Rs5 lakh monthly spend

- You are still in product-market fit stage and install volumes are below 5,000 per month

- Your attribution setup itself is not yet reliable (broken postbacks, inconsistent event tracking, no revenue data)

- You do not have at least two to four weeks of runway to dedicate to a test without needing immediate optimisation changes

The cost of running an incrementality test is not just the analytics effort. It is the opportunity cost of pausing or restricting spend in certain regions or channels during the test window. If your team is under pressure to hit monthly install targets, pausing Meta in three cities for four weeks will create a visible dip in reported numbers. Make sure your leadership understands why the dip is expected and what insight you are buying with it. The teams that run incrementality tests well treat the temporary performance dip as an investment in better long-term allocation, not as a failure.

## How to Design Your First Incrementality Test (Step by Step)

**Step 1: Choose the channel and campaign to test.** Start with the channel where you have the highest spend and the most suspicion that reported ROAS may be inflated. For most Indian app teams, this is either branded Google UAC or broad Meta campaigns.

**Step 2: Define control and test groups.** For a geo-lift test, select two to three cities as your test group (where you will pause spend) and two to three comparable cities as your control group (where spend continues unchanged). Match cities by population tier, historical install volume, and user behaviour patterns. In India, pairing Pune with Hyderabad or Jaipur with Lucknow often works well for Tier 1 and Tier 2 comparisons.
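
One common way to sanity-check a pairing, sketched here with synthetic data rather than real install series: confirm that the candidate control city's pre-test install trend tracks the test city closely. The city names and numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
base = 200 + 30 * np.sin(np.linspace(0, 6, 56))  # shared weekly rhythm

# Hypothetical daily installs for the 8 weeks before the test
pune = base + rng.normal(0, 10, 56)
hyderabad = base * 0.9 + rng.normal(0, 10, 56)   # similar market pattern
lucknow = 120 + rng.normal(0, 25, 56)            # different market pattern

for name, series in [("Hyderabad", hyderabad), ("Lucknow", lucknow)]:
    r = np.corrcoef(pune, series)[0, 1]
    print(f"Pune vs {name}: pre-test correlation r = {r:.2f}")
# Prefer the control city whose pre-test series correlates most strongly
```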

**Step 3: Set your test duration.** Run the test for a minimum of two weeks, ideally four. Shorter tests do not generate enough data points for statistical significance, especially if your daily install volume per city is under 200. Account for weekday/weekend variance by ensuring your test covers at least two full weeks.

**Step 4: Calculate sample size requirements.** You need enough conversions in both test and control groups to detect a meaningful lift. As a rough guide, if you expect a 20% incremental lift, you need approximately 400 conversions per group for 80% statistical power. If your expected lift is smaller (10%), you need four times that volume.
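
The approximation behind those figures is the standard two-sample power formula, where required volume scales with the inverse square of the expected lift. A minimal sketch, assuming a two-sided test on roughly Poisson-distributed conversion counts:

```python
from scipy.stats import norm

def conversions_needed(expected_lift, alpha=0.05, power=0.8):
    """Approximate conversions per group to detect a relative lift
    between two Poisson-distributed conversion counts."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    return 2 * (z_alpha + z_beta) ** 2 / expected_lift ** 2

print(round(conversions_needed(0.20)))  # ~392, roughly 400 per group
print(round(conversions_needed(0.10)))  # ~1570, about four times as many
```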

**Step 5: Define your success metrics.** Measure incremental installs (how many fewer installs happened when spend was paused), incremental revenue (how much less revenue was generated), and incremental ROAS (revenue delta divided by the spend that was paused). Do not just look at install volume. Revenue-based incrementality is what matters for budget decisions.
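
A sketch of those deltas in code, with hypothetical numbers standing in for your actual test readout:

```python
# Hypothetical four-week geo-lift deltas (illustrative numbers only)
paused_spend = 400_000          # Rs withheld in the test geos over the window
incremental_installs = 9_500    # install shortfall vs the counterfactual
incremental_revenue = 520_000   # Rs revenue shortfall vs the counterfactual

incremental_roas = incremental_revenue / paused_spend
print(f"Incremental ROAS: {incremental_roas:.2f} "
      f"(Rs{paused_spend / incremental_installs:.0f} per incremental install)")
```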

**Step 6: Lock the test conditions.** During the test period, do not change creatives, adjust bids, or launch new campaigns on the channel being tested. Any changes introduce confounding variables that invalidate results.

## Reading Incrementality Results Without Fooling Yourself

Running the test is the easy part. Interpreting results without drawing false conclusions is where most teams make mistakes.

**Statistical significance matters more than directional signals.** If pausing Meta in Pune led to 12% fewer installs but your sample size was only 150 installs per group, that result is not statistically significant. You cannot make budget decisions based on it. Use a basic significance calculator (a chi-squared or two-proportion z-test works) to validate before acting.
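
A minimal check of the example above, using the two-proportion z-test from statsmodels; the install counts and exposed-audience sizes are hypothetical:

```python
from statsmodels.stats.proportion import proportions_ztest

# 150 installs in control vs 132 in test (a 12% directional drop),
# each out of a hypothetical exposed audience of 50,000 users
installs = [150, 132]
audience = [50_000, 50_000]

z_stat, p_value = proportions_ztest(count=installs, nobs=audience)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# p lands well above 0.05 here: the drop could easily be noise
```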

**Watch for seasonality traps.** Do not run incrementality tests during festivals (Diwali, New Year), major app launches, or IPL season. Yes, that rules out a clean test during the IPL: we have watched teams try, and the data is unusable. Seasonal traffic spikes create noise that distorts your control group baselines. The best windows for testing in India are typically January through March and July through September, outside major festival periods.

**Understand the "incrementality tax."** If your test shows that 40% of Meta-attributed installs are cannibalised (they would have happened organically), that does not mean you should cut Meta spend by 40%. It means your true Meta CPI is roughly 67% higher than reported. A channel with Rs30 reported CPI and 40% cannibalisation has a true incremental CPI of Rs50. The question becomes whether Rs50 CPI still delivers acceptable marketing ROI for your unit economics.
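
The adjustment itself is a one-liner worth keeping in your analysis notebook; a sketch using the figures above:

```python
def true_incremental_cpi(reported_cpi, cannibalised_share):
    """Adjust reported CPI for installs that would have happened anyway."""
    return reported_cpi / (1 - cannibalised_share)

print(true_incremental_cpi(30, 0.40))  # 50.0: Rs50, ~67% above the Rs30 reported
```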

**Translate results into budget decisions cautiously.** Incrementality results tell you the marginal value of the last rupee spent on a channel. They do not tell you what would happen if you cut the entire budget. A channel might show low incrementality at Rs10 lakh monthly spend but high incrementality at Rs5 lakh. The relationship between spend and incremental lift is rarely linear. What you are measuring is the value at the margin, not the average value across all spend. Cutting 20% might barely affect results. Cutting 60% might collapse the funnel entirely. Move in increments and re-test after each significant budget shift.

## Incrementality and Attribution: Complementary, Not Competing

Attribution and incrementality testing are complementary, not competing. Think of attribution as your daily instrument panel and incrementality as the quarterly calibration that checks whether those instruments are reading accurately.

Use attribution for daily campaign optimisation: which creatives work best, which ad sets to scale, which campaigns to pause. These are operational decisions that need real-time data and cannot wait for a four-week test cycle.

Use incrementality for quarterly strategic decisions: which channels are truly driving new users versus claiming credit for organic conversions, where diminishing returns are setting in, and whether your branded search spend is genuinely incremental.

The best approach combines both. Your attribution data identifies which channels have the highest spend and the most suspicious ROAS patterns, which is exactly where to run your first incrementality test. Then use incrementality results to calibrate your attribution-based [budget reallocation framework](https://linkrunner.io/blog/the-marketing-budget-reallocation-framework-using-attribution-data) with evidence rather than assumption.

## Frequently Asked Questions

**What is the minimum monthly ad spend needed to run a meaningful incrementality test?**

At least Rs10 lakh per month on the specific channel being tested. Below this, daily conversion volume per geo is typically too low for statistical significance.

**How long should an incrementality test run before results are reliable?**

Minimum two weeks, ideally four. This captures enough conversion volume and accounts for day-of-week variance. Shorter tests rarely produce statistically significant results.

**Can I run incrementality tests on iOS campaigns with SKAN limitations?**

Yes. SKAN's aggregated reporting makes user-level analysis impossible, but geo-lift tests work on aggregate data. Compare total installs and revenue across test and control regions. Geo-lift is the most SKAN-compatible incrementality method.

**What is the difference between incrementality testing and A/B testing?**

A/B testing compares two variants (creatives, landing pages) to see which performs better. Incrementality testing measures causal impact by comparing presence versus absence of marketing activity. A/B tests optimise within a channel; incrementality tests evaluate if the channel itself delivers genuine value.

**Should I stop using attribution if incrementality shows different results?**

No. Use both together. Attribution is essential for daily campaign management and creative testing. Incrementality provides a periodic calibration check on whether attributed numbers reflect true business impact.

## Making Incrementality Work for Your Team

Incrementality testing is not a one-off exercise. The most effective teams run one to two tests per quarter, rotating through their highest-spend channels. Start with the channel where you have the most doubt. Run a clean geo-lift test for four weeks. Measure the delta honestly. Then use those findings to adjust your next quarter's budget allocation with evidence behind the numbers.

The apps that will grow most efficiently in 2026 are not the ones that spend the most. They are the ones that know exactly what their spend is causing, down to the channel and campaign level. Measurement maturity has become a competitive advantage. If you are still making quarterly budget decisions based solely on reported ROAS without understanding true incrementality, you are already behind.

Start your first test this quarter. Get in touch with [Linkrunner](https://www.linkrunner.io/) if you need clean attribution data to design it properly.
