Promotional & Discount Analysis — Measure the Real ROI of Every Promotion

You ran a 20% off sale last month. Revenue went up. The team celebrated. But did profit actually increase, or did you just sell the same products for less money? Promotional analysis answers the question every retailer avoids: are your discounts making you richer or poorer? Upload your order history and find out in under 60 seconds.

What Is Promotional Analysis?

Promotional analysis compares the performance of discounted orders against full-price orders to determine whether your promotions are actually working. "Working" does not just mean more orders. It means more total profit after accounting for the margin you gave away, the additional volume you attracted, and the customer behavior changes the discount triggered.

The core question is simple: when you offer 15% off, does the extra volume more than compensate for the reduced margin on every unit? The answer varies dramatically by product category, discount depth, customer segment, and timing. A 10% discount on office supplies might drive enough incremental volume to increase total profit by 8%. The same 10% on furniture might just cannibalize full-price purchases, dropping profit by 12%. Without a structured analysis, you are guessing.
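One way to make that trade-off concrete is a back-of-envelope breakeven calculation. The sketch below is illustrative only — the `breakeven_lift` helper and the price, cost, and discount numbers are our own invented examples, not anything from the module itself:

```r
# Back-of-envelope breakeven: how much extra volume must a discount drive
# just to hold total profit flat? All numbers here are hypothetical.
breakeven_lift <- function(price, cost, discount) {
  full_margin <- price - cost                    # profit per full-price unit
  disc_margin <- price * (1 - discount) - cost   # profit per discounted unit
  full_margin / disc_margin                      # required volume multiplier
}

# A 15% discount on a product with a 30% margin needs 2x the unit volume
# just to break even on profit:
breakeven_lift(price = 100, cost = 70, discount = 0.15)  # 2
```

The multiplier grows fast as the discount eats into the margin, which is why the same discount depth can be harmless in a high-margin category and ruinous in a low-margin one.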

This module goes beyond simple before-and-after revenue comparisons. It segments your order data by discount depth (no discount, 1-10%, 10-20%, 20-30%, and so on), breaks results down by product category, and tests whether the differences are statistically significant — not just big enough to look interesting on a dashboard. The result is a clear picture of which promotions earn their keep and which ones are burning money.
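Discount-depth binning of this kind is a one-liner in R with `cut()`. The data frame, column names, and bin labels below are illustrative assumptions, not the module's actual schema:

```r
# Illustrative discount-depth binning; discounts are proportions (0.2 = 20% off).
orders <- data.frame(
  revenue  = c(120, 85, 200, 60, 150, 95),
  discount = c(0, 0.05, 0.18, 0.25, 0, 0.12)
)

# Right-closed bins: exactly 0 means no discount, then (0,10%], (10%,20%], ...
orders$depth_bin <- cut(
  orders$discount,
  breaks = c(-Inf, 0, 0.10, 0.20, 0.30, Inf),
  labels = c("none", "1-10%", "10-20%", "20-30%", "30%+")
)

table(orders$depth_bin)  # order counts per discount-depth bin
```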

Why Most Businesses Get Promotions Wrong

The most common mistake is measuring promotion success by revenue alone. Revenue almost always goes up during a sale — that is the whole point. But revenue is a vanity metric when it comes to promotions. The real question is what happened to profit, and most teams never check. A 30% off sale that doubles order volume sounds like a win until you realize each order now loses money after cost of goods and shipping.

The second mistake is treating all discounts the same. A 5% loyalty discount for returning customers is a fundamentally different lever than a 40% clearance markdown. Lumping them together in a "promoted vs. non-promoted" comparison hides the fact that shallow discounts might be highly profitable while deep discounts are destructive. This analysis breaks your orders into discount depth bins so you can see exactly where the profit breakeven point sits.

The third mistake is ignoring cannibalization. If customers were going to buy anyway and you gave them a coupon at checkout, you did not drive incremental revenue — you just reduced your margin on a sale that was already happening. This report compares order patterns, average quantities, and revenue per order across discount levels to surface cannibalization signals.

When to Use This Analysis

Run this analysis whenever you need to make a decision about promotional strategy. Common scenarios include:

Post-promotion review. You just finished a Black Friday sale, a seasonal clearance, or a coupon campaign. Before planning the next one, you need to know what actually happened to margins. Did the 20% off sale actually increase profit, or did it just shift purchases from the week before and after into the promotion window?

Discount level optimization. You are deciding between offering 10%, 15%, or 25% off. Historical data can show you the volume response curve at each discount level and identify the sweet spot where incremental volume outweighs margin loss. Some product categories are price-elastic (small discounts drive big volume increases) while others are inelastic (customers buy the same quantity regardless). This analysis tells you which is which.

Coupon effectiveness audit. You distribute coupons through email, social media, or influencer partnerships. Are these coupons reaching new customers who would not have purchased otherwise, or are they being redeemed by existing customers who would have bought at full price? The category and segment breakdowns help you answer this.

Seasonal promotion ROI. Every year you run the same holiday promotions because "we always have." This analysis lets you compare promotion periods against non-promotion periods to quantify the actual return, accounting for the margin you sacrificed.

What Data Do You Need?

You need a CSV of your order or transaction data with at least these columns: an order or transaction ID, a sales or revenue amount (numeric), and a discount percentage or amount (numeric, with 0 for orders that were not discounted). The module accepts discount as either a decimal proportion (0.2 means 20% off) or a percentage (20 means 20% off).
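If your export mixes the two encodings, a small helper can normalize everything to a proportion before analysis. The function name and the values-above-one heuristic are our own assumptions for illustration, not part of the module:

```r
# Hypothetical helper to normalize a mixed discount column to a proportion.
# Heuristic (our assumption): values greater than 1 are percentages.
normalize_discount <- function(x) {
  ifelse(x > 1, x / 100, x)
}

normalize_discount(c(0, 0.2, 20, 35))  # 0.00 0.20 0.20 0.35
```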

For the most useful results, also include product category (so the analysis can break down promotional impact by category), order date (to analyze trends over time), profit (so the module can measure margin impact directly rather than estimating it), customer ID (to analyze repeat purchase behavior), and quantity (to separate volume effects from price effects).

You need at least 50 orders, with a mix of discounted and non-discounted transactions. The more data you have, the more granular the category and discount-depth breakdowns become. The tool supports up to 5,000 rows on the free tier and 1,000,000 on business plans.

How to Read the Report

The report is organized into eight slides, each answering a specific question about your promotional performance. Here is what each one tells you and how to act on it.

Analysis Overview (Key Metrics + Data Overview)

The first slide gives you the headline numbers: total orders analyzed, the split between discounted and non-discounted orders, average discount depth, and the overall revenue and profit impact. The data overview card shows the shape of your input — how many categories, the date range covered, and any data quality notes. Start here to confirm the analysis ingested your data correctly and to get the top-line story before diving into details.

Primary Results (Discount Impact Visualization)

This is the main visualization — typically a comparison of revenue and profit metrics across discount depth bins. You will see side-by-side bars or a curve showing how average order value, total revenue, and profit change as discount depth increases. The AI insights panel highlights the key finding: where the profit breakeven point is, which discount range maximizes total profit, and whether deep discounts are destroying value. This slide answers the question "what is the optimal discount level?"

Distribution Analysis

The distribution slide shows how your orders spread across discount levels and categories. You might discover that 60% of your discounted orders cluster at one level (say 20% off) because that is the default coupon you distribute — meaning you have never actually tested whether 15% or 10% would work just as well with less margin sacrifice. The distribution also reveals whether certain categories are disproportionately discounted, which signals either strong promotional dependency or overuse of discounts.

Detailed Results and Summary Tables

Two side-by-side tables break down the numbers. The results table shows statistical comparisons between discounted and non-discounted orders — mean revenue, mean profit, effect size (Cohen's d), and p-values. The summary table provides category-level statistics so you can see which product lines benefit from promotions and which ones suffer. If the p-value for the profit difference is below 0.05, the difference is statistically significant, not just noise. Cohen's d tells you the practical magnitude: by convention, values around 0.2 are small effects, around 0.5 medium, and 0.8 or above large.
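The two statistics in the results table are easy to sanity-check yourself in R. The snippet below runs them on simulated profit data (all numbers invented); it computes Cohen's d by hand with the pooled-SD formula, while the report's own code uses `cohen.d()` from the effsize package, which computes the same quantity:

```r
# Simulated per-order profit for the two groups (all numbers invented).
set.seed(42)
full_price <- rnorm(200, mean = 32, sd = 10)  # profit per full-price order
discounted <- rnorm(200, mean = 27, sd = 10)  # profit per discounted order

# t.test() defaults to Welch's test (var.equal = FALSE)
tt <- t.test(full_price, discounted)
tt$p.value  # below 0.05 means the profit gap is unlikely to be noise

# Cohen's d via the pooled-SD formula (equal group sizes)
pooled_sd <- sqrt((var(full_price) + var(discounted)) / 2)
d <- (mean(full_price) - mean(discounted)) / pooled_sd
```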

Diagnostics (Assumption and Validity Checks)

The diagnostic slide shows validation plots that assess whether the statistical comparisons are trustworthy. This includes checks for outliers that might skew averages, distribution shapes within discount groups, and sample size adequacy per group. If any group has fewer than 10 orders, the module flags it because small samples make statistical tests unreliable. The diagnostics card also notes whether the profit column was used directly or estimated from revenue and discount — direct profit data always produces more accurate results.

Model Performance

This slide pairs diagnostic metrics with performance metrics. It surfaces the statistical confidence levels across comparisons, the number of categories with sufficient data for reliable breakdowns, and any warnings about data quality issues that might affect the conclusions. Think of this as the "trust score" for the rest of the report — if the performance metrics are strong, you can act on the findings with confidence.

Technical Details (Summary Stats + Methods)

The technical slide documents the methodology: which statistical tests were used (typically Welch's t-test for two-group comparisons and ANOVA for multi-group discount bin comparisons), the significance level (default 0.05), discount bin boundaries, and the parameters used. The summary statistics card provides secondary metrics like median order values, standard deviations, and interquartile ranges. This slide exists for reproducibility — a finance team or external auditor can verify exactly how the conclusions were reached.
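The multi-group test named here takes only a few lines of base R to reproduce. The simulated orders data frame and bin labels below are assumptions for illustration:

```r
# Simulated per-order profit across three discount bins (numbers invented).
set.seed(1)
orders <- data.frame(
  profit    = c(rnorm(60, 30, 8), rnorm(60, 28, 8), rnorm(60, 22, 8)),
  depth_bin = factor(rep(c("none", "1-10%", "20-30%"), each = 60))
)

fit <- aov(profit ~ depth_bin, data = orders)
summary(fit)    # overall F-test: do the bins differ at all?
TukeyHSD(fit)   # pairwise bin comparisons with adjusted p-values
```

The ANOVA answers "is there any difference across bins?"; Tukey's HSD then identifies which specific pairs of bins differ while controlling for multiple comparisons.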

Insights and Recommendations

The final slide is a plain-language interpretation generated by AI that reads the statistical output and translates it into business recommendations. It will say things like "discounts above 20% in the Furniture category reduce profit per order by $47 on average without generating sufficient incremental volume to compensate" or "the 10-15% discount range in Technology drives 35% more orders with only a 6% margin reduction — this is your most efficient promotional lever." These insights are grounded in the statistical results from the prior slides, not generic advice.

Real-World Examples

Did the 20% off sale actually increase profit? An online retailer exported 8,000 orders from their Shopify store, including both promotional and non-promotional periods. The analysis showed that 20% discounts increased order volume by 40% but reduced average profit per order by 55%. Net result: the promotion reduced total profit by 11% compared to the baseline period. The insight panel recommended testing a 12% discount, which the volume-response curve suggested would be the profit-maximizing level.

Coupon effectiveness across channels. A DTC brand distributed coupons through email (10% off), Instagram (15% off), and an influencer partnership (25% off). By tagging orders with the discount source as the category column, the analysis revealed that email coupons had a 70% redemption rate among existing customers (pure cannibalization) while Instagram coupons brought in 60% new customers. The influencer's 25% discount drove volume but every order lost money after COGS.

Seasonal promotion ROI. A home goods retailer ran the same 30% holiday sale for three consecutive years. By loading all three years of order data with a date column, the time-trend analysis showed that customers had learned to delay purchases until the sale window — November full-price orders declined each year while December discount orders grew. The promotion was not creating demand; it was training customers to wait for markdowns.

Discount cannibalization by category. A B2B office supplies company offered volume discounts across all product lines. The category breakdown showed that paper and toner orders were completely price-inelastic — the same customers ordered the same quantities at any discount level, so every dollar off was pure margin loss. Meanwhile, furniture showed strong elasticity: a 15% discount on desks and chairs drove 2.3x the order volume, making it the only category where discounts made financial sense.

What Data Sources Work?

Any transaction-level export with order amounts and discount information works. Common sources include Shopify order exports (which include a discount column), WooCommerce order CSVs, Amazon seller transaction reports, Stripe payment exports combined with your coupon tracking, Square transaction histories, and custom ERP exports. If your data has a revenue or sales column and a discount column, you are ready to go. The module is platform-agnostic — it works with any CSV that has the right column structure, regardless of where the data originated.

When to Use Something Else

If you want to measure whether a specific promotion caused a sales lift (causal inference rather than correlation), consider a Bayesian A/B test or difference-in-differences analysis. Those methods require a control group that did not receive the promotion; this module makes no such assumption about your data.

If your primary question is about optimal pricing rather than promotional effectiveness, the price elasticity module is more appropriate — it models the continuous relationship between price and demand rather than comparing discrete discount levels.

If you need to analyze profitability at the product level without a focus on discounts, the product profitability module provides margin analysis, contribution analysis, and Pareto classification across your full catalog.

The R Code Behind the Analysis

Every report includes the exact R code used to produce the results — reproducible, auditable, and citable. This is not AI-generated code that changes every run. The same data produces the same analysis every time.

The analysis uses Welch's t.test() for comparing discounted vs. non-discounted groups, aov() and TukeyHSD() for multi-group discount bin comparisons, and cohen.d() from the effsize package for practical effect sizes. Revenue and profit metrics are computed per discount bin and per category using base R aggregation. Time-series trends use aggregate() with configurable granularity (day, week, or month). Every step is visible in the code tab of your report, so you or your finance team can verify exactly what was done and reproduce the results independently.
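The aggregation pattern described above looks roughly like this in base R. The column names and data are made up for illustration:

```r
# Made-up order data with the columns the text mentions.
orders <- data.frame(
  revenue    = c(100, 80, 120, 60, 90, 110),
  profit     = c(30, 18, 35, 5, 20, 33),
  depth_bin  = c("none", "10-20%", "none", "20-30%", "10-20%", "none"),
  order_date = as.Date(c("2024-01-05", "2024-01-12", "2024-02-03",
                         "2024-02-10", "2024-03-01", "2024-03-15"))
)

# Mean revenue and profit per discount bin
aggregate(cbind(revenue, profit) ~ depth_bin, data = orders, FUN = mean)

# Monthly revenue trend; swap "%Y-%m" for other granularities
orders$month <- format(orders$order_date, "%Y-%m")
aggregate(revenue ~ month, data = orders, FUN = sum)
```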