Running one SEO experiment is useful. Running ten at once and understanding the portfolio-level impact is transformative. GSC Portfolio Analysis takes your Google Search Console experiment data — multiple pages, control and treatment groups, impressions, clicks, CTR, and position — and delivers batch verdicts, effect size distributions, ROI projections, and pattern detection across your entire experiment portfolio. Upload your CSV and see which experiments won, which lost, and what the aggregate impact looks like.
What Is GSC Portfolio Analysis?
When you run SEO experiments — changing title tags, meta descriptions, or page content — you typically analyze each experiment individually. Did the new title increase CTR? Did the meta description change affect impressions? One-at-a-time analysis works for a single test, but if you are running experiments across your blog, your product pages, and your landing pages simultaneously, you need a portfolio view that answers bigger questions: What is my overall win rate? Are certain types of changes more effective? What is the cumulative click impact of all my experiments?
GSC Portfolio Analysis treats your experiments as a managed portfolio, similar to how an investment manager evaluates a portfolio of trades rather than individual positions. Each experiment gets a verdict (win, loss, or inconclusive) based on statistical significance and effect size. Then the portfolio is analyzed as a whole — overall win rate, average effect size, distribution of outcomes, and projected ROI in additional clicks. The pattern detection layer looks for common traits among winning experiments — did title changes that added numbers outperform those that added emotional language? Did experiments on high-traffic pages succeed more often than those on low-traffic pages?
This is the analysis that moves SEO experimentation from ad hoc to systematic. Instead of running experiments and hoping for the best, you can see your hit rate, learn from patterns, and project the value of your next batch of tests.
When to Use GSC Portfolio Analysis
The ideal time to run this analysis is after a batch of SEO experiments has completed — typically 2-4 weeks after making changes, once Search Console has collected enough data to judge outcomes. If you changed title tags on 15 pages three weeks ago, export the before-and-after GSC data for those pages and run the portfolio analysis.
Monthly experiment reviews are a natural cadence. If your team runs a steady stream of title tests, meta description experiments, or content updates, the portfolio analysis gives you the monthly scorecard: how many experiments ran, how many won, what the aggregate impact was on clicks and impressions, and which patterns to double down on next month.
Strategic planning also benefits from this analysis. If you are debating whether to invest more in SEO experimentation, the portfolio analysis quantifies the ROI. A portfolio with a 60% win rate and an average +15% CTR lift per winning experiment has a clear, projectable value that justifies continued investment and headcount.
Cross-functional alignment is another important trigger. SEO teams often struggle to communicate results to stakeholders who do not understand individual test metrics. A portfolio summary with an overall win rate, aggregate click gains, and projected annual impact translates SEO experimentation into language that executives and budget holders understand. Instead of "we improved CTR by 0.8 percentage points on one article," you can say "our Q1 experiment portfolio produced 12 wins out of 20 tests, generating an estimated 4,500 additional annual clicks worth $13,000 in equivalent paid traffic."
What Data Do You Need?
You need a CSV with one row per experiment observation. The eight required columns are: exp_id (a unique identifier for each experiment), group (control or treatment), page_url (the page being tested), search_impressions, search_clicks, click_rate (CTR as a decimal), serp_position (average ranking position), and adjusted_ctr (position-adjusted CTR that accounts for ranking changes). Two optional columns add depth: query (the search query, enabling query-level analysis) and period_date (date stamps, enabling time-series views).
The simplest way to build this dataset: for each experiment, export GSC page-level data for the control period (before the change) and the treatment period (after the change). Add an exp_id column to identify the experiment and a group column to label control vs. treatment rows. Stack all experiments into one CSV.
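The stacking step amounts to tagging each exported row and concatenating. A minimal Python sketch, where the `stack_experiments` helper and file layout are illustrative (not part of the tool), assuming each GSC export already contains the six page-level columns:

```python
import csv

def stack_experiments(exports, out_path):
    """Combine per-experiment GSC exports into one portfolio CSV.

    `exports` maps an exp_id to its (control_csv, treatment_csv) pair;
    each export is assumed to already contain page_url, search_impressions,
    search_clicks, click_rate, serp_position, and adjusted_ctr columns.
    """
    fieldnames = ["exp_id", "group", "page_url", "search_impressions",
                  "search_clicks", "click_rate", "serp_position", "adjusted_ctr"]
    with open(out_path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=fieldnames, extrasaction="ignore")
        writer.writeheader()
        for exp_id, (control_csv, treatment_csv) in exports.items():
            for group, path in [("control", control_csv), ("treatment", treatment_csv)]:
                with open(path, newline="") as f:
                    for row in csv.DictReader(f):
                        # Tag each exported row with its experiment and arm.
                        writer.writerow({"exp_id": exp_id, "group": group, **row})
```

The result is one long CSV where every row is unambiguously attributed to an experiment and an arm, which is exactly the shape the analysis expects.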
Parameters include batch_assignments (default "auto" — automatically groups experiments into batches by timing), content_type_regex (classifies pages by URL pattern for content-type breakdowns), min_impressions (default 10 — filters out low-data experiments), and win_threshold (default 0.0 — the minimum CTR lift required to count as a win).
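For reference, the defaults could be represented as a plain configuration mapping. The dictionary structure below is an illustrative sketch; only the parameter names and default values come from the description above:

```python
# Illustrative representation of the documented defaults.
DEFAULT_PARAMS = {
    "batch_assignments": "auto",   # group experiments into batches by timing
    "content_type_regex": None,    # e.g. {"blog": r"^/blog/"} for content-type breakdowns
    "min_impressions": 10,         # drop experiments below this impression floor
    "win_threshold": 0.0,          # minimum CTR lift required to count as a win
}
```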
How to Read the Report
The report delivers seven sections that progress from individual experiment verdicts to portfolio-level insights:
Experiment Verdicts. A table showing every experiment with its verdict: Win, Loss, or Inconclusive. Each row shows the control CTR, treatment CTR, absolute and relative lift, confidence interval, and p-value. The table is color-coded so wins are immediately visible. This is where you quickly see which changes worked.
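The verdict logic can be sketched with a two-proportion z-test. The report itself uses R's prop.test; this pure-Python version is an illustrative approximation, and the 0.05 alpha is an assumption rather than a documented default:

```python
import math

def verdict(clicks_c, imp_c, clicks_t, imp_t, win_threshold=0.0, alpha=0.05):
    """Classify one experiment as Win / Loss / Inconclusive.

    Uses a pooled two-proportion z-test as a stand-in for R's prop.test.
    Returns (verdict, absolute_ctr_lift, two_sided_p_value).
    """
    p_c, p_t = clicks_c / imp_c, clicks_t / imp_t
    pooled = (clicks_c + clicks_t) / (imp_c + imp_t)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imp_c + 1 / imp_t))
    z = (p_t - p_c) / se if se > 0 else 0.0
    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    lift = p_t - p_c
    if p_value >= alpha:
        return "Inconclusive", lift, p_value
    return ("Win" if lift > win_threshold else "Loss"), lift, p_value

# Example: 50/1000 clicks in control vs 80/1000 in treatment.
print(verdict(50, 1000, 80, 1000))
```

A significant result only becomes a Win when the lift also clears win_threshold, which is why a statistically significant but negative change lands in the Loss column.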
Batch Comparison. If experiments were run in multiple batches (different time periods), this section compares batch-level performance. You can see if your experimentation quality is improving over time — is your win rate going up? Are your effect sizes getting larger as you learn from past results?
Pattern Detection. The most strategically valuable section. The analysis looks for patterns among winning experiments vs. losing experiments — common URL patterns, page types, traffic levels, or experiment characteristics that predict success. This turns your experiment history into a playbook for future tests.
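The tool's pattern detection uses a decision tree, but the core idea can be shown with a much simpler proxy: win rate broken down by a URL-derived feature. The record structure below is illustrative:

```python
from collections import defaultdict

def win_rate_by_prefix(experiments):
    """Win rate grouped by the first URL path segment.

    A crude stand-in for the report's decision-tree pattern detection:
    it answers "do /blog/ experiments win more often than /product/ ones?"
    """
    groups = defaultdict(lambda: [0, 0])  # prefix -> [wins, total]
    for e in experiments:
        prefix = "/" + e["page_url"].strip("/").split("/")[0]
        groups[prefix][1] += 1
        if e["verdict"] == "Win":
            groups[prefix][0] += 1
    return {p: w / n for p, (w, n) in groups.items()}

print(win_rate_by_prefix([
    {"page_url": "/blog/a", "verdict": "Win"},
    {"page_url": "/blog/b", "verdict": "Loss"},
    {"page_url": "/product/x", "verdict": "Win"},
]))  # {'/blog': 0.5, '/product': 1.0}
```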
Effect Size Distribution. A histogram or density plot showing the distribution of CTR changes across all experiments. This tells you the typical magnitude of your experiments. If most experiments produce tiny effects (plus or minus 2%), you might need bolder changes. If the distribution has a long right tail, your winning experiments are producing outsized gains.
ROI Projection. Extrapolates the cumulative click impact of your winning experiments over time. If five experiments each gained 50 additional clicks per month, the annualized impact is 3,000 additional clicks. This section turns statistical results into business metrics that stakeholders understand.
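The projection is straight extrapolation. A sketch of the arithmetic from the example above, assuming per-experiment monthly click deltas and a 12-month horizon:

```python
def project_annual_clicks(monthly_click_deltas, horizon_months=12):
    """Extrapolate the summed monthly click gains of winning experiments."""
    return sum(monthly_click_deltas) * horizon_months

# Five winning experiments, each gaining 50 clicks per month:
print(project_annual_clicks([50] * 5))  # 3000
```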
Data Sufficiency. Flags experiments that may not have enough data for reliable conclusions. Low-impression experiments produce noisy CTR estimates. This section tells you which verdicts to trust and which need more data collection time.
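Flagging low-data experiments is essentially the min_impressions filter described in the parameters section. A sketch, with an illustrative record structure:

```python
def flag_insufficient(experiments, min_impressions=10):
    """Return exp_ids whose smallest arm falls below the impression floor."""
    return [e["exp_id"] for e in experiments
            if min(e["control_impressions"], e["treatment_impressions"]) < min_impressions]

print(flag_insufficient([
    {"exp_id": "exp1", "control_impressions": 500, "treatment_impressions": 480},
    {"exp_id": "exp2", "control_impressions": 8, "treatment_impressions": 40},
]))  # ['exp2']
```

Checking the smaller of the two arms matters: a treatment period with plenty of traffic cannot rescue a control period that barely registered.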
TL;DR. Portfolio-level executive summary: win rate, average effect size, total projected click impact, and top action items.
When to Use Something Else
If you want to analyze a single SEO title experiment in depth — with forest plots, position-adjusted CTR comparisons, and detailed statistical tests — use the Title A/B Test module. It is designed for deep analysis of one experiment at a time, while Portfolio Analysis is designed for batch evaluation of many experiments.
If you are not running formal experiments but want to find SEO opportunities in your existing GSC data, use the GSC Quick Wins module. It identifies pages ranking 4-20 with high impressions but low CTR — natural candidates for your next batch of experiments.
If you want to detect organic ranking changes over time (not controlled experiments, but natural fluctuations), use the Ranking Changes module. It compares two time periods to find which pages gained or lost positions.
The R Code Behind the Analysis
Every report includes the exact R code used to produce the results — reproducible, auditable, and citable. This is not AI-generated code that changes every run. The same data produces the same analysis every time.
The analysis uses prop.test() for CTR significance testing per experiment, with Wilson confidence intervals for proportion comparisons. Effect sizes are computed as relative CTR lift (treatment minus control, divided by control). Portfolio metrics aggregate individual experiment results using weighted averages (weighted by impression volume). Pattern detection uses decision tree classification (rpart) on experiment metadata features to identify predictors of experiment success. ROI projections extrapolate observed click deltas over configurable time horizons. All code is visible in the report's code tab.
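The impression-weighted aggregation can be sketched in Python. This is an illustrative translation, not the tool's R code; the field names are assumptions, and treating win rate as wins over decided (non-inconclusive) experiments is a modeling choice:

```python
def portfolio_summary(results):
    """Aggregate per-experiment results into portfolio metrics.

    Each result needs: verdict, relative_lift, impressions.
    Average lift is weighted by impression volume, as described above;
    win rate is computed over decided (Win/Loss) experiments only.
    """
    decided = [r for r in results if r["verdict"] in ("Win", "Loss")]
    wins = sum(1 for r in decided if r["verdict"] == "Win")
    total_w = sum(r["impressions"] for r in results)
    avg_lift = (sum(r["relative_lift"] * r["impressions"] for r in results) / total_w
                if total_w else 0.0)
    return {"win_rate": wins / len(decided) if decided else 0.0,
            "weighted_avg_lift": avg_lift,
            "n_experiments": len(results)}
```

Weighting by impressions keeps one noisy low-traffic experiment from dominating the portfolio average, which mirrors the rationale for the weighted averages described above.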