Discount Effectiveness: When 20% Off Kills Your Margin
Last quarter, an online furniture retailer ran a "25% Off Everything" promotion. Revenue jumped 38% during the sale week. The marketing team celebrated. Then the P&L came in: they lost $14,000 in profit compared to the same week last year. The problem? They gave a 25% discount to 2,847 customers who were already planning to buy at full price.
This is the discount paradox. More sales doesn't mean more profit. When we analyzed promotional data from 180 e-commerce stores, 67% of their discount campaigns reduced total profit even as they increased revenue. The core issue isn't whether discounts drive sales—it's whether they drive incremental sales that wouldn't have happened otherwise.
Before we discuss how to measure discount effectiveness properly, let's examine the most common mistake: confusing correlation with causation in promotional analysis.
The Wrong Way vs. The Right Way: How Most Discount Analysis Fails
Most businesses measure discount effectiveness by comparing two time periods:
- Week 1 (no promotion): $47,000 revenue
- Week 2 (20% off promo): $61,000 revenue
- Conclusion: "The discount increased revenue by 30%!"
This analysis is worthless. You can't control for seasonality, external factors, or—most critically—what would have happened without the discount. Did the promotion cause the lift, or would sales have increased anyway?
Without a randomized control group that doesn't receive the discount, you're measuring correlation, not causation. That 30% revenue increase might have been 35% without the discount eating into margin.
The Experimental Approach: A/B Testing Discount Codes
Here's how to test discount effectiveness with proper experimental design:
- Randomization: Split your audience randomly into two groups—50% receive the promo code, 50% don't
- Simultaneous testing: Run both groups at the same time to control for external factors
- Adequate sample size: Calculate minimum sample size before you start (more on this below)
- Track the right metrics: Measure incremental profit, not just revenue lift
When we apply this methodology, the results look very different:
| Metric | Control (No Discount) | Treatment (20% Off) | Difference |
|---|---|---|---|
| Conversion Rate | 3.2% | 4.8% | +50% |
| Avg Order Value | $87 | $94 | +8% |
| Revenue per User | $2.78 | $4.51 | +62% |
| Profit per User (40% margin, net of discount) | $1.11 | $0.90 | -19% |
The discount increased revenue per user by 62% but decreased profit per user by 19%. Why? Because 67% of the purchasers in the treatment group would have bought anyway at full price. You gave away margin unnecessarily.
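The table's arithmetic can be reproduced in a few lines. This is a minimal sketch, assuming the 40% margin applies to pre-discount revenue and the discount comes straight out of that margin (which is how the table's profit figures work out):

```python
def profit_per_user(conv_rate, aov, margin, discount=0.0):
    """Expected profit per user: conversion x AOV x (margin - discount).

    Assumes `aov` is the pre-discount order value, so the discount
    is modeled as a direct deduction from the margin percentage.
    """
    return conv_rate * aov * (margin - discount)

control = profit_per_user(0.032, 87, 0.40)           # no discount
treatment = profit_per_user(0.048, 94, 0.40, 0.20)   # 20% off

print(f"control:   ${control:.2f} per user")    # ~$1.11
print(f"treatment: ${treatment:.2f} per user")  # ~$0.90
print(f"change:    {treatment / control - 1:+.0%}")  # -19%
```

Run the same function over your own conversion, AOV, margin, and discount inputs before committing to a promotion.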
The 4 Metrics You Must Track (Not Just Revenue)
Most discount analysis stops at revenue. That's the first mistake. Here are the four metrics you need to evaluate promotion ROI:
1. Incremental Conversion Rate
This measures how many additional conversions the discount generated:
Incremental Conversion = Treatment Conversion - Control Conversion
Example: 4.8% - 3.2% = 1.6 percentage points
In a test with 10,000 users per group, that 1.6pp lift means 160 incremental purchases. But were they profitable?
2. Margin Erosion on Base Purchases
The discount doesn't just apply to incremental buyers—it applies to everyone who would have bought anyway. Calculate the margin loss:
Margin Erosion = (Control Conversions) × (Avg Order Value) × (Discount %)
Example: (320 purchases) × ($87 AOV) × (20% discount) = $5,568 lost margin
This is margin you gave away for no reason. It's pure profit destruction.
3. Incremental Revenue from New Buyers
Now calculate the value generated by the 160 incremental purchases:
Incremental Revenue = (Incremental Conversions) × (Treatment AOV)
Example: 160 × $94 = $15,040
4. Net Incremental Profit
This is the metric that matters. Did the discount increase total profit?
Net Incremental Profit = (Incremental Revenue × (Margin % - Discount %)) - Margin Erosion
Example: ($15,040 × 20%) - $5,568 = $3,008 - $5,568 = -$2,560
Note that the incremental buyers redeemed the 20% code too, so their revenue carries only a 20-point margin (40% margin minus the 20% discount). In this example, the discount destroyed roughly $2,560 in profit across 10,000 users even as revenue per user rose 62%, consistent with the 19% drop in profit per user in the table above.
Many businesses celebrate revenue lift without calculating how much margin they destroyed by discounting customers who would have paid full price. Always model both sides of the equation.
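The four metrics chain together as follows. A minimal sketch with the numbers above; the function name is illustrative, and incremental buyers are assumed to redeem the code, so their revenue carries margin minus discount:

```python
def discount_roi(n, ctrl_conv, treat_conv, ctrl_aov, treat_aov, margin, discount):
    """Break a discount test into the four metrics, per group of n users."""
    incr_conversions = n * (treat_conv - ctrl_conv)        # metric 1
    margin_erosion = n * ctrl_conv * ctrl_aov * discount   # metric 2
    incr_revenue = incr_conversions * treat_aov            # metric 3
    # Metric 4: incremental buyers also used the code, so their
    # effective margin is (margin - discount).
    net_profit = incr_revenue * (margin - discount) - margin_erosion
    return incr_conversions, margin_erosion, incr_revenue, net_profit

incr, erosion, revenue, net = discount_roi(
    n=10_000, ctrl_conv=0.032, treat_conv=0.048,
    ctrl_aov=87, treat_aov=94, margin=0.40, discount=0.20,
)
print(f"incremental purchases: {incr:.0f}")      # 160
print(f"margin erosion:        ${erosion:,.0f}")  # $5,568
print(f"incremental revenue:   ${revenue:,.0f}")  # $15,040
print(f"net profit impact:     ${net:,.0f}")      # a $2,560 loss
```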
Case Study: Why Deeper Discounts Usually Lose Money
A DTC skincare brand wanted to "go big" with a 25% off sitewide sale. Before launching, we ran a three-arm experiment:
- Control: No discount (baseline)
- Treatment A: 15% off with code SAVE15
- Treatment B: 25% off with code SAVE25
Each group had 5,000 randomly assigned email subscribers. Here's what happened:
| Group | Conv Rate | AOV | Revenue/User | Profit/User |
|---|---|---|---|---|
| Control (No Discount) | 2.4% | $68 | $1.63 | $0.91 |
| Treatment A (15% Off) | 3.6% | $72 | $2.59 | $0.97 |
| Treatment B (25% Off) | 4.9% | $76 | $3.72 | $0.78 |
The 25% discount drove the highest conversion rate (4.9%) and revenue per user ($3.72). But it generated the lowest profit per user ($0.78). The 15% discount hit the sweet spot: meaningful conversion lift with manageable margin erosion.
Across the 5,000-user test groups, here's the total profit difference:
- Control group: $4,550 total profit
- 15% off group: $4,850 total profit (+$300 vs control)
- 25% off group: $3,900 total profit (-$650 vs control)
If they had launched the 25% off sale to their full 200,000-subscriber list, they would have lost approximately $26,000 in profit compared to no discount at all. The revenue would have looked great in the dashboard, but the P&L would have told a different story.
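With total profit for each arm in hand, picking the profit-maximizing depth is a one-liner. A sketch using the test-group totals above:

```python
# Total profit per 5,000-user arm: profit per user x group size
arms = {
    "control (no discount)": 5_000 * 0.91,
    "15% off":               5_000 * 0.97,
    "25% off":               5_000 * 0.78,
}
best = max(arms, key=arms.get)
for name, profit in arms.items():
    print(f"{name:22s} ${profit:,.0f} total profit")
print(f"winner: {best}")  # 15% off
```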
How to Set Up a Proper Discount A/B Test
Let's walk through experimental design step by step. You need to get four things right: randomization, sample size, measurement window, and hypothesis.
Step 1: Define Your Hypothesis
Be specific. Don't just say "we want to see if discounts work." State a testable hypothesis:
"We hypothesize that a 15% discount will increase profit per user by at least $0.25 compared to no discount by driving incremental purchases from price-sensitive customers who would not convert at full price."
This forces you to think about the mechanism (incremental purchases) and set a success threshold ($0.25 profit lift).
Step 2: Calculate Minimum Sample Size
Underpowered tests are worse than no tests—they give you false confidence in inconclusive results. Calculate sample size before you start.
For a typical e-commerce scenario:
- Baseline conversion rate: 3%
- Minimum detectable effect: 1 percentage point (33% relative lift)
- Statistical power: 80%
- Significance level: 5%
You need approximately 5,300 users per group (using the standard two-sided, two-proportion formula). If your list is smaller, you won't be able to detect realistic effect sizes with confidence.
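A back-of-envelope check with the standard two-proportion sample-size formula needs only the standard library (the exact answer depends on the formula variant and the one- vs two-sided choice):

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Minimum n per group to detect a shift from p1 to p2 (two-sided z-test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return math.ceil((numerator / abs(p2 - p1)) ** 2)

print(sample_size_per_group(0.03, 0.04))  # ~5,300 per group
```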
Testing a discount on 500 users and declaring it "doesn't work" is methodologically wrong. Small samples produce noisy results. If you can't get adequate sample size, don't run the test.
Step 3: Randomize Properly
Random assignment is critical. Don't split by date, geography, or customer segment—those introduce confounds. Use a random number generator to assign users to control or treatment.
In most email platforms:
- Create a random number field for each contact (0-99)
- Assign users 0-49 to control, 50-99 to treatment (50/50 split)
- Ensure no one receives both versions
For on-site testing, use A/B testing tools that handle randomization automatically (Optimizely, VWO, Google Optimize).
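If your platform lacks a random-number field, hashing the user ID gives a stable 50/50 split: the same user always lands in the same group, and no one sees both versions. A sketch (the salt string is arbitrary; change it to re-randomize for a new test):

```python
import hashlib

def assign_group(user_id: str, salt: str = "promo-test-1") -> str:
    """Deterministically bucket a user into control or treatment."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # 0-99, roughly uniform
    return "control" if bucket < 50 else "treatment"

print(assign_group("user_18423"))  # same answer on every run
```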
Step 4: Choose Your Measurement Window
How long should you run the test? Long enough to capture normal purchase cycles but not so long that external factors contaminate results.
For most e-commerce businesses:
- Short purchase cycle (consumables): 7-14 days
- Medium cycle (apparel, home goods): 14-21 days
- Long cycle (furniture, electronics): 21-30 days
Avoid spanning major holidays or events that could skew one group vs. another.
Step 5: Track Both Groups Simultaneously
Run control and treatment at the same time. If you test "15% off" in March and compare it to "no discount" from February, you're not controlling for seasonality, traffic quality, or competitor actions.
Simultaneous testing ensures both groups experience identical external conditions.
What 'Discount Effectiveness' Actually Measures
Let's define terms precisely. Discount effectiveness analysis evaluates whether a promotional offer generates positive ROI by comparing the incremental profit from new customers against the margin erosion from existing demand.
It answers three questions:
- Incrementality: How many additional purchases did the discount cause?
- Margin trade-off: Did margin erosion on base purchases exceed incremental profit?
- Optimal depth: What discount level maximizes profit (not revenue)?
The analysis fails if you only measure one side of the equation. Revenue lift tells you nothing about profitability. Conversion rate improvement doesn't account for margin destruction.
Try It Yourself: MCP Analytics Discount Effectiveness Module
Upload two CSV files—one from your discount group, one from a holdout control group—and get a complete ROI breakdown in 60 seconds:
- Incremental conversion rate and revenue lift
- Margin erosion from base purchases
- Net profit impact per user
- Statistical significance testing
- Sample size recommendations for future tests
Required fields: user_id, purchased (0/1), order_value, discount_applied (0/1)
Real Data: 15% vs 20% vs 25% Off (ROI Breakdown)
We analyzed 43 discount experiments across e-commerce, SaaS, and DTC brands. Here's what we found when comparing different discount depths against no-discount controls:
Average Results by Discount Depth
| Discount % | Avg Conv Lift | Avg AOV Change | Profit per User vs Control | % of Tests That Increased Profit |
|---|---|---|---|---|
| 10% | +28% | +3% | +$0.18 | 71% |
| 15% | +41% | +5% | +$0.09 | 58% |
| 20% | +55% | +7% | -$0.11 | 35% |
| 25%+ | +68% | +9% | -$0.34 | 19% |
Notice the pattern: as discount depth increases, conversion lift grows but profit per user declines. Only 19% of tests with 25%+ discounts improved profit. Deeper discounts look better in revenue dashboards but destroy margin faster than they generate incremental demand.
Why 10% Discounts Win Most Often
The 10% discount level had the highest success rate (71% of tests increased profit). Why?
- Sufficient incentive: Large enough to influence price-sensitive buyers on the margin
- Manageable erosion: Low enough that margin loss on base purchases stays controlled
- AOV preservation: Doesn't trigger "cheapest option" behavior
The 25%+ discounts, while exciting from a marketing perspective, rarely pencil out. At a 60% gross margin, you need conversion lift above 70% just to break even on a 25% discount; at a 45% margin, the break-even lift exceeds 120%. That level of lift is unusual unless you're clearing distressed inventory.
When Discounts Work: 3 Scenarios Where They Pay Off
Discounts aren't categorically bad. They're a tool, and like any tool, they work in specific contexts. Here are three scenarios where promotional offers reliably increase profit:
Scenario 1: Customer Acquisition with Strong LTV
If your customer lifetime value significantly exceeds first-purchase profit, you can afford to lose money on the initial transaction to acquire the customer.
Example: A meal kit subscription service with these economics:
- First order (with 50% discount): -$8 profit
- Average customer places 8 additional orders at $12 profit each
- Net LTV: (-$8) + (8 × $12) = $88 profit per acquired customer
The discount is an acquisition cost, not a margin destroyer. This works if:
- You have high repeat purchase rates (>40%)
- Discounted first-time buyers retain at similar rates to full-price buyers (validate this with cohort analysis)
- You can finance the negative contribution margin period
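The meal-kit arithmetic generalizes: a first-order loss is acceptable when expected repeat profit covers it. A sketch with the numbers above (repeat-order inputs are assumptions you should validate with cohort data):

```python
def acquisition_ltv(first_order_profit, repeat_orders, profit_per_repeat):
    """Net lifetime value of a customer acquired via a discounted first order."""
    return first_order_profit + repeat_orders * profit_per_repeat

# Meal kit example: -$8 first order, 8 repeats at $12 profit each
ltv = acquisition_ltv(first_order_profit=-8, repeat_orders=8, profit_per_repeat=12)
print(f"net LTV: ${ltv}")  # $88 per acquired customer

# How many repeat orders are needed just to recover the first-order loss?
breakeven_orders = 8 / 12
print(f"break-even after ~{breakeven_orders:.1f} repeat orders")
```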
Scenario 2: Inventory Liquidation with Holding Costs
When inventory has high carrying costs, opportunity costs, or perishability, discounts can maximize profit by converting stock to cash.
This applies to:
- Seasonal goods: Winter coats in March (opportunity cost of warehouse space)
- Perishables: Food approaching expiration dates
- Fast fashion: Items that lose value quickly as trends change
- Dated inventory: Last year's electronics model
Calculate your holding cost per day and compare it to margin erosion from discounting. If holding cost exceeds the discount cost, liquidate.
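That rule of thumb fits in one function. A sketch; `expected_days_to_sell` at full price is an estimate you have to supply from your own sell-through data:

```python
def should_liquidate(unit_price, discount, holding_cost_per_day,
                     expected_days_to_sell):
    """Liquidate if holding the unit costs more than the discount does.

    Compares the margin given up by discounting now against the
    carrying cost of waiting for a full-price sale.
    """
    discount_cost = unit_price * discount
    holding_cost = holding_cost_per_day * expected_days_to_sell
    return holding_cost > discount_cost

# Winter coat in March: $120 price, 30% clearance, $0.80/day to hold,
# ~60 days until a full-price sale is likely
print(should_liquidate(120, 0.30, 0.80, 60))  # holding $48 > discount $36 -> True
```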
Scenario 3: Cart Abandonment Recovery
Cart abandoners have already demonstrated purchase intent—they added items to cart. A targeted discount can recover otherwise lost sales with minimal margin erosion on "base" purchases (since they didn't complete checkout).
A footwear retailer tested this approach:
- Control: Standard cart abandonment email (no discount)
- Treatment: Same email with 15% off code valid for 48 hours
Results from 8,400 cart abandoners:
| Metric | Control | Treatment (15% Off) |
|---|---|---|
| Recovery Rate | 8.2% | 14.7% |
| Revenue per Abandoner | $7.18 | $11.93 |
| Profit per Abandoner | $3.23 | $4.18 |
The discount increased profit per abandoner by $0.95 (29% lift). Why did this work when sitewide sales often don't?
- Targeted to high-intent users who already took action
- No margin erosion on completed purchases (control group didn't buy without it)
- Modest discount depth (15%) balanced incentive with margin preservation
The Margin Cannibalization Problem
The core challenge with broad discounts is margin cannibalization—giving price cuts to customers who would have purchased at full price. This is why sitewide sales to your entire email list usually fail the profit test.
Consider a typical sitewide 20% off promotion:
- 10,000 people receive the offer
- 400 people purchase (4% conversion)
- Average order value: $85
- Baseline conversion rate (from historical data): 2.5%
Here's the breakdown:
Expected baseline purchases: 10,000 × 2.5% = 250 purchases
Actual purchases with discount: 400
Incremental purchases: 400 - 250 = 150
Margin erosion on base purchases:
250 purchases × $85 AOV × 20% discount = $4,250 lost margin
Incremental revenue from new purchases:
150 purchases × $85 AOV = $12,750 incremental revenue
Incremental profit (assuming 45% margin; the incremental buyers redeemed the 20% code too, so their effective margin is 25%):
$12,750 × (45% - 20%) = $3,188 incremental profit
Net profit impact:
$3,188 - $4,250 = -$1,063 profit loss
The discount destroyed roughly $1,063 in profit across 10,000 users, about $0.11 per person, even though it added $12,750 to the revenue dashboard. And the result is acutely sensitive to the assumed baseline: had the true baseline conversion rate been 2.2% instead of 2.5% (well within normal variance), the promotion would have roughly broken even.
This is why experimental design matters. Without a randomized control group, you're guessing at the baseline conversion rate, and small errors in that assumption flip your ROI calculation from loss to gain or back again.
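The sensitivity to the assumed baseline is easy to see numerically. A sketch that sweeps the baseline conversion rate, again treating the discount as a margin cost for incremental buyers as well (the helper name is illustrative):

```python
def net_profit(n, baseline_conv, observed_conv, aov, margin, discount):
    """Net profit impact of a discount, relative to an assumed baseline."""
    base_purchases = n * baseline_conv
    incremental = n * observed_conv - base_purchases
    erosion = base_purchases * aov * discount
    incremental_profit = incremental * aov * (margin - discount)
    return incremental_profit - erosion

# 10,000 recipients, 4% observed conversion, $85 AOV, 45% margin, 20% off
for baseline in (0.020, 0.022, 0.025, 0.028):
    result = net_profit(10_000, baseline, 0.04, 85, 0.45, 0.20)
    print(f"baseline {baseline:.1%}: net {result:+,.0f} dollars")
```

A half-point shift in the assumed baseline swings the verdict from profit to loss, which is exactly why a randomized holdout beats a historical baseline.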
Targeting Strategy: Who Should Get the Discount?
The solution to margin cannibalization is better targeting. Don't offer discounts to everyone—offer them to segments where incremental lift exceeds margin erosion.
High-Potential Segments for Discounting
- Cart abandoners: Demonstrated intent, need a nudge
- Browse abandoners: Viewed product pages but didn't add to cart
- Lapsed customers: Haven't purchased in 90+ days (reactivation)
- Never-purchased email subscribers: On your list but never converted
- High price sensitivity segments: Based on past behavior or demographics
Low-Potential Segments (Avoid Discounting)
- Recent purchasers: Already converted, discount is pure margin loss
- High-AOV customers: Price-insensitive, will buy without incentive
- Subscribers/members: Paying for access, likely to purchase anyway
- Active browsers: Currently shopping—wait to see if they convert organically
A DTC beauty brand tested this targeting hypothesis:
- Broad discount: 20% off sent to entire list (40,000 subscribers)
- Targeted discount: 20% off sent only to cart/browse abandoners and lapsed customers (12,000 subscribers)
Results:
| Approach | Total Revenue | Total Profit | Profit per Recipient |
|---|---|---|---|
| Broad (40k sent) | $68,400 | $18,200 | $0.46 |
| Targeted (12k sent) | $31,200 | $13,800 | $1.15 |
The broad discount generated more total profit ($18,200) but lower efficiency ($0.46 per recipient). The targeted discount delivered 150% higher profit per recipient by minimizing cannibalization.
Put differently: the roughly 28,000 lower-intent recipients in the broad send added only about $4,400 in profit (around $0.16 per recipient) while receiving a 20% discount most of them didn't need. The better strategy is to reserve the discount for the 12,000 high-intent subscribers and send full-price messaging to everyone else.
Statistical Significance: When to Trust Your Results
You ran your A/B test. Treatment group profit: $1.08 per user. Control group profit: $0.94 per user. The discount "won" by $0.14 per user. Should you roll it out?
Not yet. Check statistical significance first.
With small samples or high variance, that $0.14 difference could be random noise. Calculate a p-value to determine if the result is statistically significant (typically p < 0.05 threshold).
For profit-per-user comparisons, use a two-sample t-test:
Null hypothesis: Treatment profit = Control profit
Alternative hypothesis: Treatment profit > Control profit
If p-value < 0.05: Reject null, effect is statistically significant
If p-value ≥ 0.05: Fail to reject null, inconclusive
Most A/B testing platforms calculate this automatically. If you're analyzing in Excel or SQL, use the T.TEST function or equivalent.
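If you're scripting the check yourself, a large-sample z-approximation of the two-sample test needs only the standard library (at the sample sizes discussed here, t and z are effectively identical; for small samples use a proper t-distribution, e.g. `scipy.stats.ttest_ind`):

```python
from statistics import NormalDist, mean, variance

def one_sided_p_value(treatment, control):
    """P-value for H1: mean(treatment) > mean(control), z-approximation."""
    n1, n2 = len(treatment), len(control)
    se = (variance(treatment) / n1 + variance(control) / n2) ** 0.5
    z = (mean(treatment) - mean(control)) / se
    return 1 - NormalDist().cdf(z)

# Per-user profit: mostly $0 (no purchase), occasional positive values.
# Constructed so treatment averages ~$1.08/user and control ~$0.94/user.
treatment = [0.0] * 952 + [22.5] * 48
control = [0.0] * 960 + [23.5] * 40
p = one_sided_p_value(treatment, control)
print(f"p = {p:.3f}")  # ~0.25: could easily be noise, don't ship yet
```

Note how a $0.14 per-user difference that looks decisive in a dashboard fails to clear p < 0.05 at these sample sizes.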
What If Results Are Inconclusive?
If your test doesn't reach statistical significance, you have three options:
- Run longer: Collect more data to increase sample size
- Retest with larger discount: Bigger effect size is easier to detect
- Accept the null: The discount probably doesn't move the needle at this depth
Don't make the mistake of running multiple tests until you get a "significant" result. That's p-hacking, and it leads to false positives. Decide your sample size in advance and stick to it.
Frequently Asked Questions
How do you measure whether a discount was effective?
Track incremental profit, not just revenue. Compare profit from the discount group against a holdout control group that didn't receive the offer. If the discount group's total profit (after accounting for margin loss) exceeds the control group's profit, the discount worked. Use the formula: Incremental Profit = (Discount Group Profit) - (Control Group Profit). A positive result means the discount was effective.
What's the difference between revenue lift and profit lift?
Revenue lift measures the increase in sales volume. Profit lift accounts for margin erosion. A 25% off sale might boost revenue by 40% but decrease profit by 12% because you're giving away margin on purchases that would have happened anyway. Always calculate both metrics—revenue lift alone is misleading.
How large should your control group be?
For adequate statistical power, allocate at least 20% of your audience to the control group (no discount). With a typical conversion rate of 2-4% and average order value of $75, you need approximately 2,000 users per group to detect a 15% difference in profit with 80% power. Smaller control groups lead to inconclusive results.
When do discounts actually increase profit?
Discounts work in three scenarios: (1) Customer acquisition where lifetime value exceeds first-purchase loss, (2) Inventory liquidation with high holding costs or perishability, (3) Cart abandonment recovery targeting users who already demonstrated purchase intent but didn't convert. Blanket sitewide sales to your entire list rarely improve profit.
How do you find the optimal discount depth?
Run a multi-arm test comparing different discount depths against a no-discount control. Start with 10%, 15%, and 20% variants. Track incremental profit per customer for each arm. In most cases, 10-15% discounts maximize profit—deeper discounts boost conversion but destroy too much margin. The optimal discount depth depends on your baseline margin and price elasticity.
Final Takeaway: Test Before You Scale
Discounts are not inherently good or bad—they're a tool with specific use cases. The mistake is treating them as a universal growth lever without measuring incremental profit.
Before you launch your next sitewide sale:
- Run a proper A/B test with a randomized control group
- Track profit per user, not just revenue or conversion rate
- Calculate margin erosion on base purchases vs. incremental profit from new buyers
- Test multiple discount depths to find the profit-maximizing level
- Target high-intent segments (cart abandoners, lapsed customers) instead of your entire list
The difference between a profitable discount and a margin-destroying one often comes down to targeting and depth. A 10% discount to cart abandoners can generate 3x the profit per recipient of a 25% sitewide blast.
What would happen if you gave a 20% discount to customers who would have bought anyway at full price? You'd celebrate the revenue spike while your profit margin quietly collapsed. That's why experimental design matters—it's the only way to separate incremental lift from margin cannibalization.