Your support team resolves hundreds of tickets a month, but are they meeting SLA targets? Which priority levels have the highest breach rates? Which ticket types take the longest to resolve? Which channels perform best? This module takes your ticket data and produces a complete SLA compliance report — breach rates by priority, resolution time distributions, channel comparisons, bottleneck rankings, and statistical tests — all from a single CSV export of your help desk system.
What Is SLA Analysis?
SLA analysis measures how well your support team meets its Service Level Agreements — the commitments you make about how quickly tickets will be resolved based on their priority. A typical SLA might say: Critical tickets must be resolved within 4 hours, High within 8 hours, Medium within 24 hours, and Low within 48 hours. SLA analysis answers the question: are you actually meeting those targets, and where are you falling short?
The core metric is the breach rate — the percentage of tickets that exceeded their SLA target resolution time. A breach rate of 15% means roughly one in seven tickets missed the target. But the overall breach rate is just the starting point. The real value comes from breaking it down: is the 15% concentrated in Critical tickets (a serious operational issue) or Low-priority tickets (a staffing allocation question)? Is it concentrated in one ticket type (a process issue) or one channel (a tooling issue)?
Beyond breach rates, the analysis examines the full distribution of resolution times. Averages can be misleading — if half your tickets resolve in 1 hour and the other half in 47 hours, the average of 24 hours looks fine but masks a bimodal problem. The resolution time distribution reveals these patterns. The P90 (90th percentile) is especially valuable: it tells you the time within which 90% of tickets are resolved, giving a realistic worst-case expectation that is more robust than the mean.
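The gap between the mean and the percentiles is easy to see on synthetic data (the numbers below are invented purely for illustration):

```r
# Synthetic bimodal resolution times (hours): a fast cluster and a slow one.
set.seed(42)
res_hours <- c(rexp(50, rate = 1), 40 + rexp(50, rate = 0.2))

mean(res_hours)                    # the average sits between the two clusters
median(res_hours)                  # the typical ticket
quantile(res_hours, probs = 0.90)  # P90: a realistic worst-case expectation
```

On data like this the mean lands in a region where almost no actual ticket resolves, while the median and P90 describe what customers really experience.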
For example, a software company might discover that their overall SLA compliance is 82%, which sounds acceptable. But the SLA analysis reveals that Critical ticket compliance is only 60% — four out of ten critical issues miss the 4-hour window. The bottleneck analysis shows that "integration bugs" and "security incidents" are the two categories driving most of the critical breaches, suggesting the team needs deeper expertise in those areas rather than more headcount across the board.
When to Use SLA Analysis
Run this analysis whenever you want to evaluate support team performance, justify staffing changes, or identify process improvements. The most impactful use cases are:
Monthly or quarterly SLA reviews: Export your help desk data, upload it, and get a comprehensive compliance report. Use it for team retrospectives, executive reporting, or customer-facing SLA compliance documentation. The report's charts and statistics are ready to present without additional formatting.
Identifying bottlenecks: The bottleneck ranking slide scores each category (ticket type, priority level, channel) by impact — the combination of breach rate and volume. A category with a 50% breach rate but only 10 tickets is less urgent than one with a 20% breach rate and 500 tickets. The impact score helps you prioritize where process improvements will move the needle most.
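The worked comparison above can be sketched directly. Note that the simple breach-rate-times-volume formula here is an illustration of the idea, not necessarily the module's exact weighted ranking:

```r
# Hypothetical category summary; names and counts are invented.
cats <- data.frame(
  category    = c("Password reset", "Integration bug"),
  tickets     = c(10, 500),
  breach_rate = c(0.50, 0.20)
)
cats$impact <- cats$breach_rate * cats$tickets  # expected number of breached tickets
cats[order(-cats$impact), ]                     # "Integration bug" ranks first
```

The 50% breach rate loses to the 20% one because 0.20 × 500 = 100 breached tickets dwarfs 0.50 × 10 = 5.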
Channel optimization: If you offer support through multiple channels (email, chat, phone, self-service portal), the channel comparison analysis shows which channels resolve tickets fastest, which have the highest breach rates, and whether the differences are statistically significant. This drives decisions about channel investment and routing strategies.
Staffing justification: When requesting additional headcount, data beats anecdotes. Showing that Critical ticket breach rates have climbed from 8% to 22% over two quarters, driven by a specific ticket type that increased in volume, makes a compelling case. The statistical tests in the report confirm whether performance differences are real or within normal variation.
Customer satisfaction correlation: If your help desk data includes satisfaction ratings, the module correlates resolution time with customer satisfaction, showing the quantitative relationship between speed and happiness. This is powerful for setting SLA targets that are tied to business outcomes rather than arbitrary benchmarks.
What Data Do You Need?
You need a CSV with at least 20 resolved tickets and four columns:
Required:
ticket_id — a unique identifier for each ticket.
ticket_priority — the priority level (Critical, High, Medium, Low, or your own priority scheme).
resolution_timestamp — when the ticket was resolved (datetime format).
response_timestamp — when the ticket was first responded to or created (datetime format).
The resolution time is calculated as the difference between these two timestamps.
Optional (for richer analysis):
ticket_type — the category of ticket (Bug, Feature Request, Billing Question, etc.). Enables breach rate breakdown by type.
ticket_status — current status (Resolved, Closed, Open). Used for filtering.
ticket_channel — the support channel (Email, Chat, Phone, Portal). Enables channel performance comparison.
product_category — the product or service area. Enables product-level SLA analysis.
satisfaction_rating — customer satisfaction score (typically 1-5). Enables satisfaction vs. resolution time correlation analysis.
The SLA thresholds default to 4, 8, 24, and 48 hours (matching Critical, High, Medium, and Low priorities). You can customize these to match your actual SLA targets. The time unit defaults to hours but can be changed. Resolution time bins default to 0, 1, 4, 8, 24, 48, 72, and 168 hours (one week) for the distribution chart.
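A minimal sketch of how breach flagging works with those default thresholds (the four example tickets are invented):

```r
# Toy ticket table using the required column names described above.
tickets <- data.frame(
  ticket_id            = 1:4,
  ticket_priority      = c("Critical", "High", "Medium", "Low"),
  response_timestamp   = as.POSIXct(c("2024-05-01 09:00", "2024-05-01 09:00",
                                      "2024-05-01 09:00", "2024-05-01 09:00")),
  resolution_timestamp = as.POSIXct(c("2024-05-01 15:00", "2024-05-01 16:00",
                                      "2024-05-02 08:00", "2024-05-02 09:00"))
)
sla_hours <- c(Critical = 4, High = 8, Medium = 24, Low = 48)  # default thresholds

# Resolution time = resolution minus response, in hours.
tickets$res_hours <- as.numeric(difftime(tickets$resolution_timestamp,
                                         tickets$response_timestamp,
                                         units = "hours"))
# A ticket breaches when it exceeds the threshold for its priority.
tickets$breached <- tickets$res_hours > unname(sla_hours[tickets$ticket_priority])
mean(tickets$breached)  # overall breach rate: 0.25 here
```

Only the Critical ticket breaches (6 hours against a 4-hour target); the Medium ticket squeaks in at 23 hours against 24.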
The module works with exports from any help desk platform — Zendesk, Freshdesk, Jira Service Management, ServiceNow, Intercom, Help Scout, or any system that can export to CSV. The column names do not need to match exactly; you map your column names to the expected fields during upload.
How to Read the Report
The report is structured as a progressive drill-down from high-level KPIs to specific bottlenecks.
The Overview and Data Pipeline slides show total tickets analyzed, date range, and preprocessing steps. The Executive Summary follows immediately, distilling the key findings into actionable headlines.
The SLA Performance KPIs table shows overall metrics: total tickets, overall breach rate, median resolution time, P90 resolution time, and per-priority metrics. This is your dashboard view — if the overall breach rate is below your target, you may not need to dig deeper. If it is above target, the subsequent slides tell you where to focus.
The Resolution Time Distribution bar chart buckets tickets by resolution time (0-1 hours, 1-4 hours, 4-8 hours, etc.) with percentage labels. This reveals whether your resolution times cluster tightly around a norm (good) or spread widely (inconsistent process). A long tail to the right means some tickets take dramatically longer than others — investigate those outliers.
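The binning itself is a single cut() call with the default bin edges; the resolution times below are synthetic:

```r
# Bucket synthetic resolution times (hours) using the default bin edges.
res_hours <- c(0.5, 2, 3, 6, 12, 30, 90, 150)
bins <- c(0, 1, 4, 8, 24, 48, 72, 168)
bucket <- cut(res_hours, breaks = bins,
              labels = c("0-1h", "1-4h", "4-8h", "8-24h",
                         "24-48h", "48-72h", "72-168h"))
prop.table(table(bucket)) * 100  # percentage of tickets in each bucket
```

Empty buckets still appear in the table, which makes gaps and long right tails easy to spot.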
The Performance by Priority chart shows breach rate and average resolution time side by side for each priority level. In a well-functioning support operation, Critical tickets should have the lowest resolution time and the lowest breach rate. If Critical tickets have a higher breach rate than Medium tickets, something is wrong — either the SLA targets are unrealistic, the escalation process is broken, or critical tickets are being misclassified.
The Breach Rate by Ticket Type chart ranks ticket types by breach rate. Types with both high breach rates and high ticket counts are the top improvement opportunities. The Channel Performance chart (if channel data is provided) does the same comparison across support channels.
The Satisfaction vs Resolution Time chart (if satisfaction data is provided) plots average satisfaction against resolution time buckets. You typically see a clear negative relationship: faster resolution leads to higher satisfaction. The inflection point — where satisfaction drops sharply — tells you the maximum resolution time customers will tolerate before their experience degrades.
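On made-up data, the relationship looks like this (column names follow the optional fields described earlier):

```r
# Invented satisfaction ratings paired with resolution times (hours).
sat <- data.frame(
  res_hours    = c(0.5, 2, 5, 10, 30, 60),
  satisfaction = c(4.8, 4.6, 4.2, 3.9, 3.1, 2.4)
)
cor(sat$res_hours, sat$satisfaction)  # negative: slower resolution, lower rating
# Average satisfaction per resolution-time bucket:
aggregate(satisfaction ~ cut(res_hours, c(0, 4, 24, 72)), data = sat, FUN = mean)
```

The bucketed averages are what the chart plots; a sharp drop between adjacent buckets marks the tolerance threshold described above.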
The Statistical Tests table reports whether the performance differences across priorities, types, or channels are statistically significant. A significant result (p < 0.05) means the differences are unlikely to be random variation.
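A sketch of such a test on synthetic per-priority resolution times, with the groups deliberately drawn from different distributions:

```r
# Kruskal-Wallis test: do resolution times differ across priorities?
set.seed(1)
d <- data.frame(
  priority  = rep(c("Critical", "High", "Medium"), each = 30),
  res_hours = c(rexp(30, rate = 1/3), rexp(30, rate = 1/6), rexp(30, rate = 1/20))
)
kruskal.test(res_hours ~ priority, data = d)  # p < 0.05 means the gaps are real
```

The rank-based Kruskal-Wallis test is used instead of ANOVA because resolution times are heavily right-skewed.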
The Top Bottlenecks by Impact chart is the action-oriented capstone. It scores each category by combining breach rate with ticket volume into a single impact score. The highest-impact bottlenecks are your top priorities for process improvement, training, or tooling investment.
When to Use Something Else
If you want to predict which tickets will breach their SLA before they do (rather than analyzing after the fact), you need a predictive model like XGBoost or logistic classification trained on ticket features and historical breach outcomes.
If your primary interest is forecasting future ticket volume (how many tickets will we receive next month?), use the time series forecasting module with daily ticket counts as the value column.
If you want to compare just two channels or two time periods (before/after a process change) rather than a full SLA analysis, a t-test on resolution times gives a focused answer.
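That focused comparison is a one-liner in R; the resolution hours below are invented:

```r
# Welch two-sample t-test: resolution hours before vs. after a process change.
before <- c(5, 9, 12, 7, 14, 6, 11)
after  <- c(4, 6, 5, 8, 3, 7, 5)
t.test(before, after)
```

For skewed resolution times, wilcox.test() with the same arguments is the rank-based alternative.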
If your ticket data has many categorical dimensions and you want to understand which factors most influence resolution time, a Random Forest feature importance analysis can rank predictors by their contribution to resolution time variance.
The R Code Behind the Analysis
Every report includes the exact R code used to produce the results — reproducible, auditable, and citable. This is not AI-generated code that changes every run. The same data produces the same analysis every time.
The analysis uses difftime() for resolution time calculations, cut() for time bucket binning, and aggregate() for group-level statistics. Breach rates are computed by comparing resolution times against configurable SLA thresholds. Statistical tests use kruskal.test() for comparing resolution times across groups (robust to the skewed distributions typical of resolution time data). Impact scores combine breach rate and volume using weighted ranking. Satisfaction correlation uses cor() and grouped aggregation. All visualizations use Plotly for interactive charts. Every step is visible in the code tab of your report, so you or an analyst can verify exactly what was done.
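For instance, the per-priority KPIs reduce to aggregate() calls like these (the six-ticket table is a toy example, not module output):

```r
# Toy ticket table with precomputed resolution times and breach flags.
tix <- data.frame(
  ticket_priority = rep(c("Critical", "High"), each = 3),
  res_hours       = c(2, 5, 6, 4, 7, 30),
  breached        = c(FALSE, TRUE, TRUE, FALSE, FALSE, TRUE)
)
# Mean of a logical flag is the breach rate for each group.
aggregate(breached ~ ticket_priority, data = tix, FUN = mean)
# Median resolution time per priority.
aggregate(res_hours ~ ticket_priority, data = tix, FUN = median)
```

The same pattern, with ticket_type or ticket_channel as the grouping column, produces the type and channel breakdowns.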