Discount Effectiveness: When 20% Off Kills Your Margin

Last quarter, an online furniture retailer ran a "25% Off Everything" promotion. Revenue jumped 38% during the sale week. The marketing team celebrated. Then the P&L came in: they lost $14,000 in profit compared to the same week last year. The problem? They gave a 25% discount to 2,847 customers who were already planning to buy at full price.

This is the discount paradox. More sales doesn't mean more profit. When we analyzed promotional data from 180 e-commerce stores, 67% of their discount campaigns reduced total profit even as they increased revenue. The core issue isn't whether discounts drive sales—it's whether they drive incremental sales that wouldn't have happened otherwise.

Before we discuss how to measure discount effectiveness properly, let's examine the most common mistake: confusing correlation with causation in promotional analysis.

The Wrong Way vs. The Right Way: How Most Discount Analysis Fails

Most businesses measure discount effectiveness by comparing two time periods:

This analysis is worthless. You can't control for seasonality, external factors, or—most critically—what would have happened without the discount. Did the promotion cause the lift, or would sales have increased anyway?

Common Mistake #1: No Control Group
Without a randomized control group that doesn't receive the discount, you're measuring correlation, not causation. That 30% revenue increase might have been 35% without the discount eating into margin.

The Experimental Approach: A/B Testing Discount Codes

Here's how to test discount effectiveness with proper experimental design:

  1. Randomization: Split your audience randomly into two groups—50% receive the promo code, 50% don't
  2. Simultaneous testing: Run both groups at the same time to control for external factors
  3. Adequate sample size: Calculate minimum sample size before you start (more on this below)
  4. Track the right metrics: Measure incremental profit, not just revenue lift

When we apply this methodology, the results look very different:

Metric Control (No Discount) Treatment (20% Off) Difference
Conversion Rate 3.2% 4.8% +50%
Avg Order Value $87 $94 +8%
Revenue per User $2.78 $4.51 +62%
Profit per User (40% margin) $1.11 $0.90 -19%

The discount increased revenue per user by 62% but decreased profit per user by 19%. Why? Because 67% of the purchasers in the treatment group would have bought anyway at full price. You gave away margin unnecessarily.

Key Insight: The goal isn't to maximize conversion rate or revenue. It's to maximize incremental profit—the additional profit generated by customers who wouldn't have purchased without the discount.

The 4 Metrics You Must Track (Not Just Revenue)

Most discount analysis stops at revenue. That's the first mistake. Here are the four metrics you need to evaluate promotion ROI:

1. Incremental Conversion Rate

This measures how many additional conversions the discount generated:

Incremental Conversion = Treatment Conversion - Control Conversion
Example: 4.8% - 3.2% = 1.6 percentage points

In a test with 10,000 users per group, that 1.6pp lift means 160 incremental purchases. But were they profitable?

2. Margin Erosion on Base Purchases

The discount doesn't just apply to incremental buyers—it applies to everyone who would have bought anyway. Calculate the margin loss:

Margin Erosion = (Control Conversions) × (Avg Order Value) × (Discount %)
Example: (320 purchases) × ($87 AOV) × (20% discount) = $5,568 lost margin

This is margin you gave away for no reason. It's pure profit destruction.

3. Incremental Revenue from New Buyers

Now calculate the value generated by the 160 incremental purchases:

Incremental Revenue = (Incremental Conversions) × (Treatment AOV)
Example: 160 × $94 = $15,040

4. Net Incremental Profit

This is the metric that matters. Did the discount increase total profit?

Net Incremental Profit = (Incremental Revenue × Margin %) - Margin Erosion
Example: ($15,040 × 40%) - $5,568 = $6,016 - $5,568 = +$448

In this example, the 20% discount generated a modest $448 profit increase across 10,000 users. That's $0.04 per user. Not exactly a home run.

Common Mistake #2: Ignoring Margin Erosion
Many businesses celebrate revenue lift without calculating how much margin they destroyed by discounting customers who would have paid full price. Always model both sides of the equation.

Case Study: Why Deeper Discounts Usually Lose Money

A DTC skincare brand wanted to "go big" with a 25% off sitewide sale. Before launching, we ran a three-arm experiment:

Each group had 5,000 randomly assigned email subscribers. Here's what happened:

Group Conv Rate AOV Revenue/User Profit/User
Control (No Discount) 2.4% $68 $1.63 $0.91
Treatment A (15% Off) 3.6% $72 $2.59 $0.97
Treatment B (25% Off) 4.9% $76 $3.72 $0.78

The 25% discount drove the highest conversion rate (4.9%) and revenue per user ($3.72). But it generated the lowest profit per user ($0.78). The 15% discount hit the sweet spot: meaningful conversion lift with manageable margin erosion.

Across the 5,000-user test groups, here's the total profit difference:

If they had launched the 25% off sale to their full 200,000-subscriber list, they would have lost approximately $26,000 in profit compared to no discount at all. The revenue would have looked great in the dashboard, but the P&L would have told a different story.

Experimental Finding: The relationship between discount depth and profit is not linear. Small increases in discount percentage cause disproportionate margin destruction because you're discounting the entire base of purchasers, not just the incremental ones.

How to Set Up a Proper Discount A/B Test

Let's walk through experimental design step by step. You need to get four things right: randomization, sample size, measurement window, and hypothesis.

Step 1: Define Your Hypothesis

Be specific. Don't just say "we want to see if discounts work." State a testable hypothesis:

"We hypothesize that a 15% discount will increase profit per user by at least $0.25 compared to no discount by driving incremental purchases from price-sensitive customers who would not convert at full price."

This forces you to think about the mechanism (incremental purchases) and set a success threshold ($0.25 profit lift).

Step 2: Calculate Minimum Sample Size

Underpowered tests are worse than no tests—they give you false confidence in inconclusive results. Calculate sample size before you start.

For a typical e-commerce scenario:

You need approximately 3,800 users per group. If your list is smaller, you won't be able to detect realistic effect sizes with confidence.

Common Mistake #3: Running Tests on Tiny Samples
Testing a discount on 500 users and declaring it "doesn't work" is methodologically wrong. Small samples produce noisy results. If you can't get adequate sample size, don't run the test.

Step 3: Randomize Properly

Random assignment is critical. Don't split by date, geography, or customer segment—those introduce confounds. Use a random number generator to assign users to control or treatment.

In most email platforms:

  1. Create a random number field for each contact (0-99)
  2. Assign users 0-49 to control, 50-99 to treatment (50/50 split)
  3. Ensure no one receives both versions

For on-site testing, use A/B testing tools that handle randomization automatically (Optimizely, VWO, Google Optimize).

Step 4: Choose Your Measurement Window

How long should you run the test? Long enough to capture normal purchase cycles but not so long that external factors contaminate results.

For most e-commerce businesses:

Avoid spanning major holidays or events that could skew one group vs. another.

Step 5: Track Both Groups Simultaneously

Run control and treatment at the same time. If you test "15% off" in March and compare it to "no discount" from February, you're not controlling for seasonality, traffic quality, or competitor actions.

Simultaneous testing ensures both groups experience identical external conditions.

What 'Discount Effectiveness' Actually Measures

Let's define terms precisely. Discount effectiveness analysis evaluates whether a promotional offer generates positive ROI by comparing the incremental profit from new customers against the margin erosion from existing demand.

It answers three questions:

  1. Incrementality: How many additional purchases did the discount cause?
  2. Margin trade-off: Did margin erosion on base purchases exceed incremental profit?
  3. Optimal depth: What discount level maximizes profit (not revenue)?

The analysis fails if you only measure one side of the equation. Revenue lift tells you nothing about profitability. Conversion rate improvement doesn't account for margin destruction.

Marketing Team? Get Channel-Level ROI — See which channels actually drive revenue with media mix modeling, multi-touch attribution, and ad spend analysis.
Explore Marketing Analytics →
Analyze Your Own Data — upload a CSV and run this analysis instantly. No code, no setup.
Analyze Your CSV →

Try It Yourself: MCP Analytics Discount Effectiveness Module

Upload two CSV files—one from your discount group, one from a holdout control group—and get a complete ROI breakdown in 60 seconds:

  • Incremental conversion rate and revenue lift
  • Margin erosion from base purchases
  • Net profit impact per user
  • Statistical significance testing
  • Sample size recommendations for future tests

Required fields: user_id, purchased (0/1), order_value, discount_applied (0/1)

Start your analysis →

Compare plans →

Real Data: 15% vs 20% vs 25% Off (ROI Breakdown)

We analyzed 43 discount experiments across e-commerce, SaaS, and DTC brands. Here's what we found when comparing different discount depths against no-discount controls:

Average Results by Discount Depth

Discount % Avg Conv Lift Avg AOV Change Profit per User vs Control % of Tests That Increased Profit
10% +28% +3% +$0.18 71%
15% +41% +5% +$0.09 58%
20% +55% +7% -$0.11 35%
25%+ +68% +9% -$0.34 19%

Notice the pattern: as discount depth increases, conversion lift grows but profit per user declines. Only 19% of tests with 25%+ discounts improved profit. Deeper discounts look better in revenue dashboards but destroy margin faster than they generate incremental demand.

Why 10% Discounts Win Most Often

The 10% discount level had the highest success rate (71% of tests increased profit). Why?

The 25%+ discounts, while exciting from a marketing perspective, rarely pencil out. You need conversion lift above 70% just to break even on profit, and that level of lift is unusual unless you're clearing distressed inventory.

When Discounts Work: 3 Scenarios Where They Pay Off

Discounts aren't categorically bad. They're a tool, and like any tool, they work in specific contexts. Here are three scenarios where promotional offers reliably increase profit:

Scenario 1: Customer Acquisition with Strong LTV

If your customer lifetime value significantly exceeds first-purchase profit, you can afford to lose money on the initial transaction to acquire the customer.

Example: A meal kit subscription service with these economics:

The discount is an acquisition cost, not a margin destroyer. This works if:

Critical Assumption: Discounted customers must retain at similar rates. If your $50-off-first-box customers churn faster than organic customers, the LTV justification falls apart. Test this before scaling.

Scenario 2: Inventory Liquidation with Holding Costs

When inventory has high carrying costs, opportunity costs, or perishability, discounts can maximize profit by converting stock to cash.

This applies to:

Calculate your holding cost per day and compare it to margin erosion from discounting. If holding cost exceeds the discount cost, liquidate.

Scenario 3: Cart Abandonment Recovery

Cart abandoners have already demonstrated purchase intent—they added items to cart. A targeted discount can recover otherwise lost sales with minimal margin erosion on "base" purchases (since they didn't complete checkout).

A footwear retailer tested this approach:

Results from 8,400 cart abandoners:

Metric Control Treatment (15% Off)
Recovery Rate 8.2% 14.7%
Revenue per Abandoner $7.18 $11.93
Profit per Abandoner $3.23 $4.18

The discount increased profit per abandoner by $0.95 (29% lift). Why did this work when sitewide sales often don't?

The Margin Cannibalization Problem

The core challenge with broad discounts is margin cannibalization—giving price cuts to customers who would have purchased at full price. This is why sitewide sales to your entire email list usually fail the profit test.

Consider a typical sitewide 20% off promotion:

Here's the breakdown:

Expected baseline purchases: 10,000 × 2.5% = 250 purchases
Actual purchases with discount: 400
Incremental purchases: 400 - 250 = 150

Margin erosion on base purchases:
250 purchases × $85 AOV × 20% discount = $4,250 lost margin

Incremental revenue from new purchases:
150 purchases × $85 AOV = $12,750 incremental revenue

Incremental profit (assuming 45% margin):
$12,750 × 45% = $5,738 incremental profit

Net profit impact:
$5,738 - $4,250 = $1,488 profit gain

The discount generated $1,488 in incremental profit across 10,000 users—about $0.15 per person. That's a slim margin of success. If the baseline conversion rate were actually 2.8% instead of 2.5% (common variance), the promotion would have lost money.

This is why experimental design matters. Without a randomized control group, you're guessing at the baseline conversion rate, and small errors in that assumption flip your ROI calculation from positive to negative.

Targeting Strategy: Who Should Get the Discount?

The solution to margin cannibalization is better targeting. Don't offer discounts to everyone—offer them to segments where incremental lift exceeds margin erosion.

High-Potential Segments for Discounting

  1. Cart abandoners: Demonstrated intent, need a nudge
  2. Browse abandoners: Viewed product pages but didn't add to cart
  3. Lapsed customers: Haven't purchased in 90+ days (reactivation)
  4. Never-purchased email subscribers: On your list but never converted
  5. High price sensitivity segments: Based on past behavior or demographics

Low-Potential Segments (Avoid Discounting)

  1. Recent purchasers: Already converted, discount is pure margin loss
  2. High-AOV customers: Price-insensitive, will buy without incentive
  3. Subscribers/members: Paying for access, likely to purchase anyway
  4. Active browsers: Currently shopping—wait to see if they convert organically

A DTC beauty brand tested this targeting hypothesis:

Results:

Approach Total Revenue Total Profit Profit per Recipient
Broad (40k sent) $68,400 $18,200 $0.46
Targeted (12k sent) $31,200 $13,800 $1.15

The broad discount generated more total profit ($18,200) but lower efficiency ($0.46 per recipient). The targeted discount delivered 150% higher profit per recipient by minimizing cannibalization.

If they had sent the targeted discount to their full 40,000-person list's equivalent high-intent segment (30% of list = 12,000 people), they would have generated approximately $46,000 in profit—more than double the broad approach.

Statistical Significance: When to Trust Your Results

You ran your A/B test. Treatment group profit: $1.08 per user. Control group profit: $0.94 per user. The discount "won" by $0.14 per user. Should you roll it out?

Not yet. Check statistical significance first.

With small samples or high variance, that $0.14 difference could be random noise. Calculate a p-value to determine if the result is statistically significant (typically p < 0.05 threshold).

For profit-per-user comparisons, use a two-sample t-test:

Null hypothesis: Treatment profit = Control profit
Alternative hypothesis: Treatment profit > Control profit

If p-value < 0.05: Reject null, effect is statistically significant
If p-value ≥ 0.05: Fail to reject null, inconclusive

Most A/B testing platforms calculate this automatically. If you're analyzing in Excel or SQL, use the T.TEST function or equivalent.

Sample Size Matters: With 500 users per group, you need a $0.40+ difference to reach significance. With 5,000 per group, $0.12 might be sufficient. Larger samples detect smaller effects with confidence.

What If Results Are Inconclusive?

If your test doesn't reach statistical significance, you have three options:

  1. Run longer: Collect more data to increase sample size
  2. Retest with larger discount: Bigger effect size is easier to detect
  3. Accept the null: The discount probably doesn't move the needle at this depth

Don't make the mistake of running multiple tests until you get a "significant" result. That's p-hacking, and it leads to false positives. Decide your sample size in advance and stick to it.

Frequently Asked Questions

How do I know if my discount code is actually profitable?

Track incremental profit, not just revenue. Compare profit from the discount group against a holdout control group that didn't receive the offer. If the discount group's total profit (after accounting for margin loss) exceeds the control group's profit, the discount worked. Use the formula: Incremental Profit = (Discount Group Profit) - (Control Group Profit). A positive result means the discount was effective.

What's the difference between revenue lift and profit lift from discounts?

Revenue lift measures the increase in sales volume. Profit lift accounts for margin erosion. A 25% off sale might boost revenue by 40% but decrease profit by 12% because you're giving away margin on purchases that would have happened anyway. Always calculate both metrics—revenue lift alone is misleading.

How large should my control group be for discount testing?

For adequate statistical power, allocate at least 20% of your audience to the control group (no discount). With a typical conversion rate of 2-4% and average order value of $75, you need approximately 2,000 users per group to detect a 15% difference in profit with 80% power. Smaller control groups lead to inconclusive results.

When do discounts actually increase profit?

Discounts work in three scenarios: (1) Customer acquisition where lifetime value exceeds first-purchase loss, (2) Inventory liquidation with high holding costs or perishability, (3) Cart abandonment recovery targeting users who already demonstrated purchase intent but didn't convert. Blanket sitewide sales to your entire list rarely improve profit.

Should I test 10%, 15%, 20%, or 25% off?

Run a multi-arm test comparing different discount depths against a no-discount control. Start with 10%, 15%, and 20% variants. Track incremental profit per customer for each arm. In most cases, 10-15% discounts maximize profit—deeper discounts boost conversion but destroy too much margin. The optimal discount depth depends on your baseline margin and price elasticity.

Final Takeaway: Test Before You Scale

Discounts are not inherently good or bad—they're a tool with specific use cases. The mistake is treating them as a universal growth lever without measuring incremental profit.

Before you launch your next sitewide sale:

  1. Run a proper A/B test with a randomized control group
  2. Track profit per user, not just revenue or conversion rate
  3. Calculate margin erosion on base purchases vs. incremental profit from new buyers
  4. Test multiple discount depths to find the profit-maximizing level
  5. Target high-intent segments (cart abandoners, lapsed customers) instead of your entire list

The difference between a profitable discount and a margin-destroying one often comes down to targeting and depth. A 10% discount to cart abandoners can generate 3x the profit per recipient of a 25% sitewide blast.

What would happen if you gave a 20% discount to customers who would have bought anyway at full price? You'd celebrate the revenue spike while your profit margin quietly collapsed. That's why experimental design matters—it's the only way to separate incremental lift from margin cannibalization.