You ran five campaigns last year. Your boss wants to know which ones drove revenue and which were wasted spend. Last-touch attribution says Campaign 5 did everything. First-touch says Campaign 1 deserves all the credit. Neither answer is right. Shapley value attribution distributes credit fairly across every campaign a customer touched before converting — grounded in cooperative game theory, not gut feel. Upload a CSV and get the answer in under 60 seconds.
The Attribution Problem Every Marketing Team Faces
A customer sees your Facebook ad on Monday, clicks a Google search result on Wednesday, opens your email campaign on Friday, and buys on Saturday through a direct visit. Which channel gets the credit? First-touch says Facebook. Last-touch says direct. Linear attribution splits credit equally — which feels democratic but ignores the reality that not every touchpoint matters the same amount.
This is not an academic debate. The answer determines where your budget goes next quarter. If you credit the wrong channel, you pour money into a campaign that was along for the ride while starving the one that actually moved customers to convert. Most marketing teams default to last-touch because it is simple. Simple, and wrong — it systematically overvalues the final interaction and undervalues everything that built awareness and consideration.
Shapley value attribution, borrowed from cooperative game theory, solves this by asking: for every possible subset of campaigns a customer could have been exposed to, how much does adding this particular campaign change the outcome? It computes that marginal contribution for every subset, weights each subset by how often it arises across all possible orderings, and averages. The result is a fair, mathematically grounded allocation of credit that sums to 100%.
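To make that concrete, here is a minimal sketch of the exact calculation in base R. It assumes a customer-level data frame with the binary acceptance columns described later (cmp1_accepted through cmp5_accepted) and a converted outcome flag, and it defines a coalition's value as the number of conversions among customers who touched only campaigns in that coalition. That value function is an illustrative choice for the sketch, not necessarily the report's exact implementation.

```r
# Minimal exact Shapley attribution sketch (column names assumed).
# v(S) = conversions from customers whose accepted-campaign set
#        is a non-empty subset of the coalition S.
shapley_attribution <- function(df, campaigns, outcome = "converted") {
  n    <- length(campaigns)
  acc  <- as.matrix(df[, campaigns]) == 1
  conv <- df[[outcome]] == 1

  v <- function(S) {
    if (length(S) == 0) return(0)
    outside <- setdiff(campaigns, S)
    covered <- rowSums(acc[, outside, drop = FALSE]) == 0  # nothing outside S
    touched <- rowSums(acc[, S, drop = FALSE]) > 0         # something inside S
    sum(conv & covered & touched)
  }

  phi <- setNames(numeric(n), campaigns)
  for (i in seq_len(n)) {
    others <- campaigns[-i]
    for (k in 0:length(others)) {
      subsets <- if (k == 0) list(character(0)) else
        combn(others, k, simplify = FALSE)
      # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
      w <- factorial(k) * factorial(n - k - 1) / factorial(n)
      for (S in subsets) {
        phi[i] <- phi[i] + w * (v(c(S, campaigns[i])) - v(S))
      }
    }
  }
  phi / sum(phi)  # normalize so credit sums to 1
}

# Usage: shap <- shapley_attribution(cust, paste0("cmp", 1:5, "_accepted"))
```

For five campaigns this enumerates only 2^5 = 32 coalitions, so the exact computation is instantaneous; the cost grows exponentially with the number of campaigns, which is why the approach fits campaign-level rather than impression-level data.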
What This Analysis Covers
The report produces six output cards that give you a complete picture of campaign and channel performance. Each card answers a specific question that marketing teams ask during budget planning.
Campaign Attribution (Shapley Values)
The centerpiece of the report. This card ranks each campaign by its Shapley value, the fraction of total conversion credit it earned: a value of 0.28 means the campaign earned 28% of the credit for all conversions. The visualization makes it immediately obvious which campaigns are pulling their weight and which are not. If Campaign 3 sits at 0.30 while Campaign 2 sits at 0.05, you know where to double down and where to cut.
This is fundamentally different from looking at raw conversion rates. A campaign might have a high acceptance rate but low attribution because the customers who accepted it would have converted anyway through other campaigns. Shapley values capture that nuance — they measure the incremental contribution, not just the correlation.
Channel Revenue Attribution
Beyond campaigns, the report breaks down revenue by purchase channel: web, catalog, in-store, and deals. For each channel, you see total revenue contribution, revenue per transaction, and the percentage of customers who used that channel. This answers the perennial question: is our catalog investment paying off compared to digital? The answer often surprises — catalog customers frequently show the highest revenue per purchase ($120+ vs. $85 for web) even though web drives more total volume.
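The per-channel metrics are straightforward base R aggregation. The sketch below assumes per-channel revenue columns (web_revenue and so on) alongside the purchase counts; those revenue columns are an assumption for illustration, since the report can derive revenue differently when only counts and spend categories are supplied.

```r
# Channel summary in base R; cust is the customer-level data frame.
channels <- c("web", "catalog", "store", "deals")

summarise_channel <- function(ch, df) {
  counts <- df[[paste0(ch, "_purchases")]]
  rev    <- df[[paste0(ch, "_revenue")]]   # assumed column for this sketch
  data.frame(
    channel          = ch,
    purchases        = sum(counts),
    total_revenue    = sum(rev),
    rev_per_purchase = round(sum(rev) / sum(counts), 2),
    pct_customers    = round(100 * mean(counts > 0), 1)
  )
}

channel_table <- do.call(rbind, lapply(channels, summarise_channel, df = cust))
```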
Campaign Performance Summary Table
A detailed table showing each campaign's acceptance rate, conversion rate among acceptors, average spend of responders, and Shapley attribution value side by side. This is the table you put in front of the CMO. It compresses everything into one view: Campaign 4 had 7% acceptance but the highest average spend among responders, while Campaign 1 had 6% acceptance and the lowest attribution score. The table makes the budget conversation concrete.
Channel Performance Summary Table
The channel equivalent: purchase count, total revenue, revenue per purchase, and customer reach for each channel. Web might account for 40% of transactions but only 35% of revenue because discount-driven orders drag down its average order value. Store purchases might serve only 20% of customers but generate 30% of revenue. These ratios drive channel investment decisions.
Segment Response Patterns
When income data is available, the report segments campaign acceptance rates by customer income bracket. This reveals whether your campaigns are reaching the right customers. A common finding: high-income customers respond to different campaigns than mid-income customers. Campaign 5 might show a 15% acceptance rate among high-income segments but only 3% among lower segments — suggesting it works as a premium positioning play, not a broad acquisition tool.
Spend Category Breakdown
Revenue decomposed by product category (or spend type). This connects campaign attribution to what customers actually buy. If Campaign 3 drives conversions primarily in high-margin categories like premium products, its true value exceeds what the raw Shapley number suggests. If Campaign 2 drives volume in low-margin deal categories, its contribution looks worse when you factor in profitability.
What Data Do You Need?
You need customer-level marketing data in a CSV. Each row represents one customer. The required columns are:
Campaign acceptance flags — binary columns (0 or 1) indicating whether each customer accepted or responded to each campaign. You need at least one campaign column, but the analysis is most valuable with three or more campaigns. Map these to cmp1_accepted through cmp5_accepted.
Purchase channel counts — how many purchases each customer made through each channel (web, catalog, store, deals). These are counts, not revenue. Map them to web_purchases, catalog_purchases, store_purchases, and deals_purchases.
Final conversion flag — a binary column indicating whether the customer ultimately converted (purchased, subscribed, responded to the final campaign). This is the outcome variable that Shapley values are computed against.
Optional but valuable: customer income (enables segment analysis), recency in days (how recently they purchased), web visits (engagement indicator), and spend columns by category (enables the spend breakdown card). The more context you provide, the richer the report.
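Putting the required and optional columns together, a minimal input might look like the rows below. Column names follow the mappings above; converted is an assumed name for the final conversion flag, and the last three columns are optional fields.

```
customer_id,cmp1_accepted,cmp2_accepted,cmp3_accepted,cmp4_accepted,cmp5_accepted,web_purchases,catalog_purchases,store_purchases,deals_purchases,converted,income,recency_days,web_visits
1001,0,1,0,0,1,4,0,2,1,1,68000,12,7
1002,1,0,0,0,0,1,2,0,0,0,42000,88,2
```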
For reliable results, aim for at least 100 customers with a conversion rate between 5% and 50%. Each campaign should have at least 10 acceptors so the Shapley calculation has enough signal. Very low acceptance rates (under 1%) make it hard to distinguish campaign effects from noise.
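If you want to check those thresholds before uploading, a few lines of base R will do it. The file name and the converted column are assumptions for this sketch.

```r
# Pre-upload sanity checks using the thresholds above.
cust  <- read.csv("campaigns.csv")   # hypothetical file name
camps <- grep("^cmp[0-9]+_accepted$", names(cust), value = TRUE)

stopifnot(nrow(cust) >= 100)                    # sample size floor
conv_rate <- mean(cust$converted)               # 'converted' is an assumed name
stopifnot(conv_rate >= 0.05, conv_rate <= 0.50) # usable conversion rate

acceptors <- colSums(cust[, camps])
if (any(acceptors < 10)) {
  warning("Campaigns with fewer than 10 acceptors: ",
          paste(camps[acceptors < 10], collapse = ", "))
}
```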
How to Read the Report
Start with the Shapley value chart. The campaigns are ranked from highest to lowest attribution. If the top two campaigns account for more than 50% of total Shapley credit, your marketing is concentrated — a few campaigns do the heavy lifting. If all campaigns cluster near equal values (each around 20% for five campaigns), the model found no meaningful differentiation, and you should investigate whether your campaigns are truly reaching different audiences.
Next, cross-reference with the campaign summary table. A campaign can have high attribution despite a low acceptance rate if the customers who accepted it were disproportionately likely to convert. Conversely, a campaign with a high acceptance rate might show low attribution if those customers would have converted anyway. The gap between acceptance rate and Shapley value is where the insight lives.
The channel cards tell you where to invest infrastructure. If catalog generates the highest revenue per purchase but reaches only 15% of customers, expanding catalog distribution could be a growth lever. If deals drive volume but depress average order value, you may be training customers to wait for discounts — a common and expensive pattern.
The segment card shows you whether your campaigns are well-targeted. If high-income customers respond strongly to Campaign 4 but ignore Campaign 2, while mid-income customers do the opposite, you have evidence for segmented campaign delivery. Run different campaigns for different segments instead of blasting the entire list with everything.
Real-World Examples
E-commerce budget reallocation. An online retailer ran five email campaigns over 12 months. Last-touch attribution credited Campaign 5 with 60% of conversions because it was the most recent. Shapley attribution revealed Campaign 3 (a seasonal promotion) actually drove 28% of conversions — higher than any other single campaign — because it moved customers from consideration to intent. Campaign 5 was often the final nudge, but Campaign 3 did the hard work. The retailer shifted 20% of Campaign 5's budget to run Campaign 3 twice per year instead of once.
Channel mix optimization. A multi-channel retailer assumed web was their dominant channel because it processed the most orders. The channel attribution showed catalog generated $142 per purchase versus $87 for web. Store purchases, though only 18% of volume, had the second-highest per-transaction value at $108. The company reversed a planned cut to their catalog program and invested in store experience instead.
Campaign ROI comparison. A B2B SaaS company mapped five outreach campaigns (webinar invites, case study sends, free trial offers, demo requests, and pricing page promotions) into the attribution framework. Shapley analysis showed webinar invites had 3x the attribution of case study sends despite similar send volumes. The insight: customers who attended webinars were genuinely more likely to convert, while case study readers converted at the same rate as customers who received no case study — the content was not influencing decisions.
Segment-level targeting. A subscription box service segmented attribution by customer income. High-income customers converted primarily through Campaign 1 (premium positioning) and Campaign 4 (curated selection). Mid-income customers responded to Campaign 2 (discount offer) and Campaign 5 (referral incentive). The company stopped sending discount campaigns to high-income segments — those campaigns were not just ineffective for that group, they were diluting brand perception.
When to Use Something Else
If you want to measure the return on ad spend for specific paid channels, a ROAS analysis is more direct. It takes your ad platform export and calculates cost per conversion, return per dollar spent, and break-even points. Attribution analysis tells you which campaigns matter; ROAS tells you whether each dollar of spend was profitable.
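The arithmetic behind those ROAS numbers is simple enough to show inline; the figures below are purely illustrative.

```r
# Toy ROAS arithmetic for one campaign (illustrative numbers).
spend        <- 25000    # total campaign cost
conversions  <- 400
revenue      <- 60000    # revenue attributed to the campaign
gross_margin <- 0.40

cost_per_conversion <- spend / conversions   # 62.50
roas                <- revenue / spend       # 2.4x
breakeven_roas      <- 1 / gross_margin      # 2.5x: at 2.4x, this campaign
                                             # is slightly below break-even
```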
If you need to model the relationship between overall marketing spend and business outcomes across multiple channels simultaneously, media mix modeling (MMM) is the right tool. MMM uses regression on aggregate time-series data — weekly spend by channel versus weekly revenue — to estimate the contribution and diminishing returns of each channel. It works with aggregate data and handles saturation effects that Shapley attribution does not.
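In its simplest form that regression might look like the sketch below. The weekly data frame and its spend columns are hypothetical, and log1p() is just one simple way to let the fit bend toward diminishing returns; production MMM also models adstock and carryover effects.

```r
# Minimal MMM-style regression on weekly aggregates (assumed columns).
fit <- lm(revenue ~ log1p(web_spend) + log1p(catalog_spend) +
            log1p(email_spend), data = weekly)
summary(fit)  # coefficients estimate each channel's marginal contribution
```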
If you need to prove that a specific campaign change caused a lift in conversions (not just correlated with it), run a controlled A/B test with a holdout group. Attribution analysis shows association — customers who accepted Campaign 3 were more likely to convert — but cannot distinguish whether Campaign 3 caused the conversion or whether conversion-prone customers were simply more likely to accept Campaign 3. A randomized holdout gives you causal evidence.
If you want to understand which customer characteristics predict campaign response (income, recency, purchase history), a logistic regression on campaign acceptance can identify the profile of likely responders. Use it alongside attribution: attribution tells you which campaigns matter, logistic regression tells you who to target with each campaign.
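A minimal version of that responder profile, assuming the column names from the data section, is a one-line glm() call:

```r
# Sketch: profile likely responders to Campaign 3 with logistic regression.
# Predictors are the optional fields described earlier.
fit <- glm(cmp3_accepted ~ income + recency_days + web_visits,
           data = cust, family = binomial)
summary(fit)    # which characteristics shift response odds, and by how much
exp(coef(fit))  # odds ratios: values above 1 mean higher odds per unit increase
```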
The R Code Behind the Analysis
Every report includes the exact R code used to produce the results — reproducible, auditable, and citable. This is not AI-generated code that changes every run. The same data produces the same analysis every time.
The Shapley value calculation uses combinatorial enumeration of campaign subsets to compute each campaign's marginal contribution to conversions. Channel revenue attribution uses base R aggregation functions. Income segmentation uses cut() for binning and tapply() for group-level acceptance rates. Every step is visible in the code tab of your report, so you or an analyst can verify exactly what was done and reproduce it independently.
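The segmentation step looks roughly like this in practice: cut() bins income and tapply() computes per-bin acceptance rates, as the report describes. The bin edges here are illustrative, not the report's actual thresholds.

```r
# Income segmentation in the style described above (illustrative bin edges).
cust$income_band <- cut(cust$income,
                        breaks = c(0, 40000, 80000, Inf),
                        labels = c("low", "mid", "high"))
round(100 * tapply(cust$cmp5_accepted, cust$income_band, mean), 1)
# acceptance rate (%) of Campaign 5 by income band
```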