You are spending money on Facebook Ads, Google Ads, influencer deals, and maybe a few experiments on TikTok or LinkedIn. Some of those campaigns are printing money. Others are quietly burning through budget with nothing to show for it. The problem is figuring out which is which — and by how much. This analysis calculates ROAS, CPA, CTR, and conversion rates across every campaign and channel, then tells you exactly what to scale, what to monitor, and what to kill. Upload a CSV of your campaign data and get the full picture in under 60 seconds.
What Is ROAS?
ROAS — Return on Ad Spend — answers the simplest question in marketing: for every dollar I spent on advertising, how many dollars did I get back? If you spent $1,000 on Google Ads last month and those ads generated $4,200 in revenue, your ROAS is 4.2x. For every $1 spent on Google Ads, you got $4.20 back. That is the entire concept.
A ROAS above 1.0 means the campaign generated more revenue than it cost. A ROAS below 1.0 means you lost money. But the threshold that actually matters depends on your margins. If your product has a 60% gross margin, you need a ROAS of at least 1.67x just to break even (1 / 0.60). If your margin is 30%, you need a ROAS of 3.33x before you see any profit from your ad spend. This is why a blanket "ROAS of 3x is good" benchmark is misleading — it depends entirely on what you sell.
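The break-even math is simple enough to express as a one-line function. A minimal sketch in Python (the report itself is generated in R; this is just the arithmetic):

```python
def break_even_roas(gross_margin: float) -> float:
    """Minimum ROAS needed to break even, given gross margin as a fraction."""
    if not 0 < gross_margin <= 1:
        raise ValueError("gross_margin must be a fraction between 0 and 1")
    return 1 / gross_margin

print(round(break_even_roas(0.60), 2))  # 1.67 -- 60% margin
print(round(break_even_roas(0.30), 2))  # 3.33 -- 30% margin
```

Run it against your own margin before judging any campaign: a 4x ROAS is excellent on a 60% margin product and barely breakeven territory on a thin-margin one.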
This analysis goes well beyond a single ROAS number. It calculates ROAS, CPA (cost per acquisition), CTR (click-through rate), CPL (cost per lead), and conversion rates for every campaign and every channel in your data. Then it compares them side by side so you can see exactly which campaigns are efficient, which are mediocre, and which are actively destroying value.
Why This Matters More Than You Think
Most marketing teams know their overall ROAS. Fewer can tell you the ROAS for each individual campaign. Almost none can tell you whether increasing spend on Campaign A by 20% would maintain the same efficiency or whether they have already hit diminishing returns. This analysis answers all three questions.
Consider a real scenario. You run 15 campaigns across Facebook, Google Search, and influencer partnerships. Your blended ROAS across all channels is 3.8x, which looks healthy. But when you break it down, Google Search brand campaigns are running at 12x (people searching your brand name were going to buy anyway), Facebook retargeting is at 6x, two Facebook prospecting campaigns are at 1.2x and barely breaking even, and one influencer campaign is at 0.4x — losing money on every dollar spent. Your "healthy" 3.8x average is masking a campaign that is literally burning cash and two others that need serious attention.
Budget allocation decisions based on blended averages are how marketing teams waste 20-40% of their spend. This analysis prevents that by giving you campaign-level visibility and explicit Scale/Monitor/Pause/Kill recommendations.
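The masking effect is easy to reproduce. A minimal sketch with illustrative spend/revenue pairs (not the exact figures from the scenario above):

```python
# Illustrative (spend, revenue) pairs: a decent blended average hides a loser.
campaigns = {
    "brand_search":     (10_000, 120_000),  # 12.0x
    "fb_retargeting":   (25_000, 150_000),  # 6.0x
    "fb_prospecting":   (60_000,  72_000),  # 1.2x
    "influencer_macro": (40_000,  16_000),  # 0.4x, burning cash
}

blended = sum(r for _, r in campaigns.values()) / sum(s for s, _ in campaigns.values())
print(f"blended ROAS: {blended:.1f}x")  # 2.7x overall, despite a 0.4x campaign inside

for name, (spend, revenue) in campaigns.items():
    print(f"{name}: {revenue / spend:.1f}x")
```

The blended number alone would never tell you that nearly a third of this spend is going to a campaign returning 40 cents on the dollar.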
What Data Do You Need?
You need a CSV export of your campaign performance data. At minimum, you need four columns: a channel column (like "Facebook," "Google Search," "Influencer"), a campaign name column, a spend column (the amount you spent), and a sales or revenue column (the revenue generated). These four fields are required.
For richer analysis, include additional funnel metrics: impressions or reach (how many people saw the ad), clicks (how many clicked through), leads or form fills (how many took a pre-purchase action), and conversions or orders (how many bought). The more funnel stages you provide, the more the analysis can diagnose where your funnel is leaking — whether the problem is awareness (low impressions), engagement (low CTR), lead quality (low lead-to-sale conversion), or something else entirely.
A date column is optional but enables trend analysis so you can see whether a campaign's efficiency is improving or degrading over time. Each row in your CSV should represent one campaign's performance for a time period — daily, weekly, or monthly granularity all work. You need at least 10 campaigns (rows) for meaningful comparison, and 50-500 rows is the sweet spot for trend detection.
You can export this data from Facebook Ads Manager, Google Ads, your analytics platform, or even a spreadsheet where you track spend manually. The tool maps your columns to the required fields when you upload, so your headers do not need to match any specific format.
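A quick sanity check on the shape of such an export, using hypothetical headers (yours can differ, since the tool maps columns on upload):

```python
import csv
import io

# Hypothetical two-row export with the four required columns plus funnel metrics.
raw = io.StringIO(
    "channel,campaign,spend,revenue,impressions,clicks,conversions\n"
    "Facebook,Spring Prospecting,12000,26400,900000,13500,410\n"
    "Google Search,Brand Terms,3000,36000,50000,4000,600\n"
)

rows = list(csv.DictReader(raw))
required = {"channel", "campaign", "spend", "revenue"}
missing = required - set(rows[0].keys())
print("missing required columns:", missing or "none")
```

If a check like this reports missing columns, fix the export before uploading; the four required fields are the floor, and every extra funnel column unlocks more of the analysis.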
How to Read the Report
The report contains eight cards, each answering a specific question about your marketing performance.
Overview
The overview card summarizes your dataset — how many campaigns, channels, total spend, total revenue, and the overall blended ROAS. This is your baseline. If your overall ROAS looks bad, everything below will explain why. If it looks good, the cards below will show you whether that average is hiding underperformers.
ROAS by Campaign
A bar chart showing ROAS for every campaign in your data, sorted from highest to lowest. This is the money chart. You will immediately see which campaigns are delivering outsized returns and which are underwater. A horizontal reference line shows your ROAS target (if you set one), making it easy to see which campaigns clear the bar and which fall short. Campaigns at 0.8x ROAS jump off the page when they are sitting next to campaigns at 5x.
ROAS by Channel
The same ROAS comparison, but aggregated to the channel level — Facebook, Google, influencer, email, and so on. This answers the strategic question: which channels deserve more budget overall? It is common to find that Google Search campaigns collectively run at 4-8x ROAS, Facebook sits around 2-4x with better scale potential, and display or influencer campaigns vary wildly from 1x to 10x depending on the partnership. Channel-level ROAS drives your macro budget allocation; campaign-level ROAS drives your micro optimization within each channel.
Funnel Efficiency
This chart breaks the marketing funnel into stages — CTR (impressions to clicks), lead rate (clicks to leads), and conversion rate (leads to sales) — and shows each metric by channel. This is how you diagnose why a campaign has low ROAS. A campaign with great CTR but terrible conversion rate is attracting the wrong audience — lots of curiosity clicks but no purchase intent. A campaign with low CTR but high conversion rate has targeting dialed in perfectly but needs better creative to get more people to click. The funnel chart tells you exactly where to intervene.
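The stage rates themselves are simple ratios. A sketch with illustrative counts, assuming the impressions → clicks → leads → sales funnel described above:

```python
# Illustrative funnel counts for one channel.
impressions, clicks, leads, sales = 500_000, 10_000, 800, 200

ctr = clicks / impressions   # 2.0%  -- creative and targeting engagement
lead_rate = leads / clicks   # 8.0%  -- landing page effectiveness
close_rate = sales / leads   # 25.0% -- lead quality / purchase intent

for name, value in [("CTR", ctr), ("lead rate", lead_rate), ("close rate", close_rate)]:
    print(f"{name}: {value:.1%}")
```

Reading the three rates together is the diagnostic: a weak number at exactly one stage points at one intervention (creative, landing page, or audience quality) rather than a vague "improve the campaign."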
Spend vs. Revenue
A scatter chart plotting spend on the x-axis and revenue on the y-axis for each campaign. Points above the diagonal line (where revenue equals spend) are profitable; points below are losing money. The distance from the line shows how profitable or unprofitable each campaign is. This visualization also reveals diminishing returns — if your high-spend campaigns cluster closer to the break-even line while low-spend campaigns sit well above it, you are hitting efficiency limits as you scale.
Campaign Scorecard
A full table with every metric for every campaign: spend, revenue, ROAS, CPA, CTR, conversion rate, and more. This is the reference table you export and bring to your budget meeting. Sort by any column to find your top performers, worst performers, most expensive acquisitions, or highest conversion rates. Every number in the charts above comes from this table.
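A scorecard row is just the standard metric formulas applied to one campaign's totals. A sketch (illustrative inputs; the report documents the same definitions):

```python
def scorecard_row(spend, revenue, impressions, clicks, conversions):
    """One campaign's scorecard: ROAS, CPA, CTR, and conversion rate."""
    return {
        "spend": spend,
        "revenue": revenue,
        "roas": revenue / spend,
        "cpa": spend / conversions if conversions else float("inf"),
        "ctr": clicks / impressions,
        "conv_rate": conversions / clicks,
    }

row = scorecard_row(spend=5_000, revenue=22_500, impressions=400_000,
                    clicks=8_000, conversions=300)
print(row)  # roas 4.5x, CPA ~$16.67, CTR 2.0%, conversion rate 3.75%
```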
Budget Recommendations
The actionable output. Each campaign gets a recommendation: Scale (increase budget — this campaign is efficient and has room to grow), Monitor (performing adequately but watch for degradation), Pause (underperforming — reduce spend and investigate), or Kill (losing money — stop spending immediately). The recommendations factor in ROAS relative to your target, absolute revenue contribution, and spend efficiency. A campaign with modest ROAS but massive revenue contribution will get a Monitor recommendation rather than a Pause, because cutting it would crater your total revenue.
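A sketch of what threshold-based scoring can look like. The cutoffs below are illustrative assumptions, not the report's actual rules, which also weigh spend volume against your ROAS target:

```python
def recommend(roas, target_roas, revenue, total_revenue):
    """Toy Scale/Monitor/Pause/Kill scoring. All thresholds are assumptions."""
    revenue_share = revenue / total_revenue
    if roas < 1.0 and revenue_share < 0.10:
        return "Kill"       # losing money and small enough to cut safely
    if roas < target_roas * 0.75:
        # Big revenue contributors get watched, not cut.
        return "Monitor" if revenue_share >= 0.25 else "Pause"
    if roas >= target_roas * 1.25:
        return "Scale"      # well above target with headroom
    return "Monitor"

print(recommend(roas=0.4, target_roas=3.0, revenue=5_000, total_revenue=400_000))    # Kill
print(recommend(roas=5.0, target_roas=3.0, revenue=60_000, total_revenue=400_000))   # Scale
print(recommend(roas=2.1, target_roas=3.0, revenue=150_000, total_revenue=400_000))  # Monitor
```

Note the third case: below target, but contributing over a third of total revenue, so it is watched rather than paused. That is the scale-versus-efficiency trade-off in code.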
Executive Summary (TL;DR)
AI-generated key findings that synthesize everything above into three to five bullet points. This card is designed to be copied directly into a Slack message or email to your CMO. It highlights the biggest opportunity (which campaign to scale), the biggest risk (which campaign to kill), and the estimated revenue impact of the recommended changes.
A Practical Example: E-Commerce Q4 Budget Planning
An e-commerce brand has $500,000 to allocate across four channels for Q4: Facebook, Google Search, Google Shopping, and influencer partnerships. They export six months of weekly campaign data (180 rows across 14 active campaigns) and upload the CSV.
The report shows: Google Search brand campaigns run at 12x ROAS but only absorb $30k/month (limited by search volume). Google Shopping runs at 5.2x. Facebook retargeting runs at 6.1x but only reaches people who already visited the site. Facebook prospecting runs at 2.1x — low efficiency but high absolute revenue because it reaches new customers. Two influencer campaigns show wildly different results: one at 7x (micro-influencer with a niche audience) and one at 0.6x (macro-influencer with broad but disengaged followers).
The budget recommendations say: scale the micro-influencer deal and Facebook retargeting, maintain Google Search and Shopping at current levels (search volume is the constraint, not budget), monitor Facebook prospecting (it is the only new-customer channel), and kill the underperforming influencer campaign immediately. The projected impact: reallocating the $40k/month from the failing influencer deal to the micro-influencer and Facebook retargeting should increase total revenue by roughly 25% with the same total spend.
Diminishing Returns Detection
One of the most valuable features is diminishing returns analysis. When you increase ad spend on a campaign, ROAS almost always decreases at some point. The first $1,000 on Facebook might generate $5,000 in revenue (5x ROAS). The next $1,000 might only generate $3,000 (3x marginal ROAS). The next $1,000 might generate $1,500 (1.5x marginal ROAS). At some point, the marginal dollar spent generates less than a dollar in return.
The spend vs. revenue chart reveals this pattern visually. If your data includes multiple time periods with varying spend levels for the same campaign, the analysis can identify the inflection point — the spend level beyond which additional budget starts yielding diminishing returns. This prevents the common mistake of "this campaign has great ROAS, let's 5x the budget" only to watch the ROAS collapse as the audience saturates.
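The log-linear idea fits a power law, revenue ≈ a · spend^b, by regressing log revenue on log spend; an elasticity b below 1 means each extra dollar buys proportionally less revenue. A self-contained sketch with synthetic data chosen to show saturation (not output from the tool):

```python
import math

# Synthetic (spend, revenue) observations for one campaign at rising spend levels.
spend   = [1_000, 2_000,  4_000,  8_000, 16_000]
revenue = [5_000, 8_500, 14_000, 23_000, 36_000]

# Ordinary least squares on the logs: log(revenue) = log(a) + b * log(spend).
xs = [math.log(s) for s in spend]
ys = [math.log(r) for r in revenue]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)

print(f"spend elasticity b = {b:.2f}")  # b < 1: marginal ROAS falls as spend grows
```

In this synthetic series each doubling of spend lifts revenue by only ~65%, so the fitted elasticity comes out well under 1, which is exactly the saturation signal the chart shows visually.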
Common Pitfalls
The most dangerous mistake is optimizing purely for ROAS without considering absolute revenue. Campaign A might have 8x ROAS on $5,000 in spend, generating $40,000 in revenue. Campaign B has 3x ROAS on $100,000 in spend, generating $300,000 in revenue. Killing Campaign B because its ROAS is "only" 3x would cost you $300,000 in revenue. The report's budget recommendations account for this by looking at both efficiency and scale.
Another pitfall: treating all campaigns as equivalent regardless of their funnel position. Brand search campaigns will always show high ROAS because those customers were already looking for you. Upper-funnel campaigns like display or video awareness will always show low ROAS on a last-click basis because they introduce new customers who then convert through a different channel. The funnel efficiency chart helps you see this by showing where each channel sits in the customer journey.
Finally, beware of campaigns with very low volume. A campaign with only two orders might show ROAS of 10x or 0.3x purely due to random variation. Set the min_spend_threshold parameter to filter out campaigns that have not spent enough to produce statistically stable metrics. The default is $0 (include everything), but setting it to something like $500 or $1,000 removes noise from campaigns that have barely run.
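The filter itself is a one-liner. A sketch mirroring the min_spend_threshold parameter, with made-up campaign data:

```python
min_spend_threshold = 500  # dollars; $0 would include everything

campaigns = [
    {"name": "brand_search", "spend": 3_000, "revenue": 36_000},
    {"name": "tiktok_test",  "spend": 120,   "revenue": 1_200},  # 10x on a handful of orders: noise
]

# Keep only campaigns that have spent enough to produce stable metrics.
stable = [c for c in campaigns if c["spend"] >= min_spend_threshold]
print([c["name"] for c in stable])  # the $120 test campaign is excluded
```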
When to Use Something Else
If you want to understand how different channels interact and influence each other over time — how a YouTube ad today leads to a Google Search conversion next week — you need media mix modeling (MMM). ROAS analysis treats each campaign independently; MMM accounts for cross-channel effects and delayed conversions.
If your question is specifically about attribution — which touchpoint in a multi-step customer journey should get credit for the sale — you need multi-touch attribution analysis. ROAS analysis uses whatever attribution your data already reflects (usually last-click). It does not re-attribute conversions.
If you want to predict how revenue would change if you shifted $50,000 from Facebook to Google, a linear regression on spend vs. revenue can model that relationship, especially with enough historical variation in spend levels. The ROAS analysis tells you what happened; regression modeling tells you what would happen under different scenarios.
If you need to compare just two campaigns head-to-head with statistical significance — did Campaign A genuinely outperform Campaign B, or was it random variation? — an A/B test analysis gives you a proper hypothesis test with confidence intervals.
The R Code Behind the Analysis
Every report includes the exact R code used to produce the results — reproducible, auditable, and citable. This is not AI-generated code that changes every run. The same data produces the same analysis every time.
The analysis calculates ROAS as revenue / spend for each campaign, CPA as spend / conversions, CTR as clicks / impressions, and conversion rate as conversions / clicks. Diminishing returns detection fits a log-linear model to the spend-revenue relationship. Budget recommendations use a threshold-based scoring system that weighs ROAS against your target, absolute revenue contribution, and spend volume. Every calculation is visible in the code tab of your report, so you or an analyst can verify and extend the logic.