You changed a title tag to improve click-through rate. Did it work — or did you just get lucky with a ranking shift? SEO Title A/B Testing takes your Google Search Console data from before and after the change and applies proper statistical testing, including position-adjusted CTR analysis that separates the title effect from ranking movements. Upload two CSV exports (control period and treatment period) and get a definitive verdict with forest plots, confidence intervals, and p-values.
What Is SEO Title A/B Testing?
Traditional A/B testing shows different variants to different visitors simultaneously and compares conversion rates. SEO title testing is different because you cannot show two different title tags to Google at the same time — you change the title and compare the before period (control) against the after period (treatment). This introduces a fundamental problem: other things change between the two periods. Your ranking might shift, search volume might fluctuate seasonally, or a competitor might enter or exit the SERP. Any of these could change your CTR independently of your title change.
SEO Title A/B Testing addresses this with position-adjusted CTR analysis. Raw CTR naturally varies with ranking position — position 1 gets roughly 30% CTR while position 5 gets roughly 5%. If your page moved from position 5 to position 3 during the treatment period, your CTR would increase even if your title had zero effect. The position adjustment computes what your expected CTR should be given the position in each period, then measures whether the actual CTR deviated from that expectation. This isolates the title effect from the position effect.
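The adjustment logic can be illustrated with a short Python sketch. The benchmark values and function names here are hypothetical (the module's actual curve and implementation may differ); the point is the mechanic: look up the expected CTR for the observed position, then subtract it from the actual CTR to get the residual.

```python
# Hypothetical position-to-CTR benchmark curve (illustrative values only;
# the module's actual curve may be fitted to your own data).
BENCHMARK_CTR = {1: 0.30, 2: 0.16, 3: 0.10, 4: 0.07, 5: 0.05,
                 6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018}

def expected_ctr(position: float) -> float:
    """Linearly interpolate expected CTR between integer positions."""
    lo = max(1, min(10, int(position)))
    hi = min(10, lo + 1)
    frac = position - lo
    return BENCHMARK_CTR[lo] + frac * (BENCHMARK_CTR[hi] - BENCHMARK_CTR[lo])

def position_adjusted_ctr(actual_ctr: float, avg_position: float) -> float:
    """Residual CTR: the part of CTR not explained by ranking position."""
    return actual_ctr - expected_ctr(avg_position)

# A page at position 3 with 14% CTR is outperforming its position by 4 points:
print(round(position_adjusted_ctr(0.14, 3.0), 3))  # → 0.04
```

A positive residual in the treatment period, compared against the residual in the control period, is the signal that the title itself (not the ranking) drove the CTR change.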
The analysis handles multiple pages and queries simultaneously. If you changed titles on several pages, or if you want to analyze results at the query level (where the same page might rank differently for different queries), the module processes all of them and produces per-page and per-query verdicts plus an aggregate portfolio result. The forest plot visualization shows all experiments on a single chart with their confidence intervals, making it immediately clear which changes produced real effects.
When to Use SEO Title A/B Testing
Run this analysis 2-4 weeks after changing a title tag. Shorter periods may not have enough impression data for reliable statistical conclusions. Longer periods increase the risk of confounding factors — algorithm updates, seasonal shifts, competitor changes — contaminating your results.
The ideal workflow is: identify pages with suboptimal CTR using the Quick Wins module, craft improved title tags, deploy them, wait 2-3 weeks, then export GSC data for the control period (same duration before the change) and treatment period (after the change) and run this analysis. This creates a disciplined experiment cycle where every title change gets measured and every result informs the next batch of changes.
This module is also valuable for validating SEO copy at scale. If your content team regularly updates title tags as part of content refreshes, running Title A/B Testing after each batch quantifies whether those changes improved search performance. Over time, you develop an empirical understanding of what title patterns work for your audience — questions vs. statements, including numbers vs. not, brand name first vs. last.
What Data Do You Need?
This is a multi-dataset module requiring two CSV files:
Control dataset: GSC page-level data from the period before your title change. Required columns: date, page (URL), variant (label like "control"), experiment_id (identifier for the experiment), impressions, clicks, ctr, avg_position, position_adjusted_ctr, and top_query (the highest-impression query for that page).
Treatment dataset: Same columns, same pages, from the period after your title change. The variant column should be labeled "treatment" and the experiment_id should match the control dataset for the same pages.
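As a sanity check before upload, you can verify that an export contains every required column. The snippet below is a hedged Python illustration with made-up row values; the column list matches the requirements above.

```python
import csv
import io

REQUIRED = ["date", "page", "variant", "experiment_id", "impressions",
            "clicks", "ctr", "avg_position", "position_adjusted_ctr",
            "top_query"]

# Hypothetical one-row control export, for illustration only.
sample = io.StringIO(
    "date,page,variant,experiment_id,impressions,clicks,ctr,"
    "avg_position,position_adjusted_ctr,top_query\n"
    "2024-05-01,/pricing,control,exp-001,1200,48,0.04,6.2,-0.002,pricing plans\n"
)

reader = csv.DictReader(sample)
missing = [c for c in REQUIRED if c not in reader.fieldnames]
print("missing columns:", missing)  # → missing columns: []
```

Run the same check against both files; the treatment file should differ only in its variant labels and metric values, never in its columns.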
If you do not have position_adjusted_ctr pre-computed, the module can compute it for you when the position_adjustment parameter is set to True (the default). It uses a standard position-to-expected-CTR curve based on published CTR-by-position benchmarks and calculates the residual: actual CTR minus expected CTR at that position.
Additional parameters: confidence_level (default 0.95 — the statistical confidence threshold), min_impressions (default 10 — experiments with fewer impressions are flagged as insufficient data), and decision_threshold (default 0.05 — the p-value threshold for declaring a result statistically significant).
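The defaults above can be pictured as a small configuration object. The class name here is hypothetical; the parameter names and defaults are the ones documented above.

```python
from dataclasses import dataclass

@dataclass
class TitleABConfig:  # hypothetical name, for illustration
    confidence_level: float = 0.95    # width of reported confidence intervals
    min_impressions: int = 10         # below this, flag "insufficient data"
    decision_threshold: float = 0.05  # p-value cutoff for significance
    position_adjustment: bool = True  # compute position_adjusted_ctr if absent

cfg = TitleABConfig()
print(cfg.confidence_level, cfg.min_impressions)  # → 0.95 10
```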
How to Read the Report
The report contains seven sections designed to give you a clear verdict and the evidence behind it:
Forest Plot. The signature visualization of this report. Every experiment (page or query) is shown as a horizontal line with a point estimate and confidence interval. If the confidence interval crosses zero, the result is inconclusive. If the entire interval is above zero, the title change increased CTR. If it is below zero, the change hurt CTR. The vertical dashed line at zero represents "no effect." You can read the entire portfolio's results at a glance.
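The reading rule for the forest plot reduces to a simple decision on each confidence interval. A minimal Python sketch of that rule:

```python
def verdict(ci_low: float, ci_high: float) -> str:
    """Classify an experiment by where its CI sits relative to zero."""
    if ci_low > 0:
        return "win"          # whole interval above zero: CTR increased
    if ci_high < 0:
        return "loss"         # whole interval below zero: CTR decreased
    return "inconclusive"     # interval crosses zero: no clear effect

print(verdict(0.004, 0.021))    # → win
print(verdict(-0.010, 0.006))   # → inconclusive
print(verdict(-0.020, -0.001))  # → loss
```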
Raw Comparison. Side-by-side metrics for control vs. treatment: impressions, clicks, CTR, and position for each page. No statistical adjustment — just the raw numbers. This is where you verify that the data makes sense before trusting the statistical analysis. If impressions dropped by 90% in the treatment period, something external happened (seasonal drop, deindexing, etc.) and the experiment may not be valid.
Position Scatter. A scatter plot showing the relationship between position change and CTR change for each experiment. This visualizes the confounding problem: pages that moved up in position tend to have higher CTR regardless of title changes. The position-adjusted analysis accounts for this, but the scatter plot helps you see the raw relationship and identify outliers.
Adjusted Detail. The position-adjusted results table. For each experiment, it shows the raw CTR change, the expected CTR change based on position movement alone, and the residual — the CTR change attributable to the title, with position effect removed. This is the most statistically rigorous section of the report.
Statistical Summary. Aggregate statistics across all experiments: overall win rate, average effect size, median confidence interval width, and portfolio-level significance test. This section answers "did our title changes work as a program?" rather than evaluating each change individually.
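Pooling many experiments into one portfolio verdict is typically done with a random-effects meta-analysis. The sketch below implements the standard DerSimonian-Laird estimator in plain Python as an illustration of the idea; the module's exact method may differ, and the input numbers are made up.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and standard error."""
    w = [1 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-experiment variance
    w_star = [1 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se

# Three hypothetical CTR-change effects (in proportion units) with variances:
pooled, se = random_effects_pool([0.012, 0.004, 0.020], [1e-5, 2e-5, 1.5e-5])
print(f"pooled effect: {pooled:.4f} ± {1.96 * se:.4f}")
```

The pooled effect answers the portfolio question: across all title changes, is there a real average lift once experiment-to-experiment variation is accounted for?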
Recommendations. AI-generated action items based on the results — which title changes to keep, which to revert, and what patterns from winning titles to apply to future experiments.
TL;DR. Executive summary with the headline verdict and top findings.
When to Use Something Else
If you are evaluating a batch of many experiments at once and want portfolio-level metrics (win rate, aggregate ROI, pattern detection across experiments), use the GSC Portfolio Analysis module. It is optimized for batch evaluation, while Title A/B Testing is optimized for deep single-experiment analysis.
If you have not run any experiments yet and want to identify which pages are the best candidates for title optimization, start with the GSC Quick Wins module. It finds pages ranking in positions 4-20 with high impressions but below-expected CTR — the ideal targets for title experiments.
If you want to track ranking changes over time without running controlled experiments, use the Ranking Changes module. It detects which pages gained or lost positions between two time periods, without the experiment framework.
The R Code Behind the Analysis
Every report includes the exact R code used to produce the results — reproducible, auditable, and citable. This is not AI-generated code that changes every run. The same data produces the same analysis every time.
The analysis uses prop.test() for CTR proportion tests with Wilson confidence intervals. Position adjustment uses a log-linear model fitted to the observed position-CTR relationship in your data, then computes residuals as the difference between actual and expected CTR at each position. The forest plot is rendered with ggplot2 using geom_pointrange() for confidence intervals. Portfolio-level significance uses a meta-analytic random-effects model when multiple experiments are present. All computations, including the position-CTR curve fitting, are visible in the code tab.
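For readers who do not work in R, the core test maps directly onto standard formulas. The Python sketch below reproduces the two building blocks (a Wilson score interval for a single CTR, and a two-sample z-test comparing control and treatment proportions, which is what prop.test() performs without continuity correction) using only the standard library; all row counts are hypothetical.

```python
import math

def wilson_ci(clicks: int, impressions: int, z: float = 1.96):
    """Wilson score interval for a click-through proportion."""
    p = clicks / impressions
    denom = 1 + z ** 2 / impressions
    center = (p + z ** 2 / (2 * impressions)) / denom
    half = z * math.sqrt(p * (1 - p) / impressions
                         + z ** 2 / (4 * impressions ** 2)) / denom
    return center - half, center + half

def two_prop_z(c1, n1, c2, n2):
    """Two-sample z-test for equal proportions (control vs. treatment CTR)."""
    p1, p2 = c1 / n1, c2 / n2
    pool = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

lo, hi = wilson_ci(48, 1200)           # control: 48 clicks / 1200 impressions
z, p = two_prop_z(48, 1200, 78, 1300)  # treatment: 78 clicks / 1300 impressions
print(f"control CTR 95% CI: [{lo:.4f}, {hi:.4f}]  z={z:.2f}  p={p:.4f}")
```

With these example numbers (4.0% vs. 6.0% CTR), the test rejects the null at the 0.05 level, which is the kind of per-experiment result the forest plot then summarizes across the whole portfolio.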