Code Appendix

Every report ships with the source code.

Scroll to the bottom of any report. The exact R script that generated the numbers is sitting there. Copy it, run it locally, audit it line by line.

Why this matters

If you can't see the code, you can't trust the result. Every cell in your report has an audit trail back to the exact line of R that produced it.

🔍 Verifiable

Anyone can re-run the code on the same data and check the output. Not "trust us" — actually verify, line by line, in your own R session.

📋 Auditable

Show your boss, your reviewer, your auditor. The methodology is right there in the report — no separate documentation, no chasing the analyst who left.

📌 Persistent

The code is part of the report. Not in a chat session, not in a notebook on someone's laptop, not on a server that goes away. In the report, forever.

This is what's in your report

A real excerpt from a production module. Notice the literate comments explaining the methodology, the parameter setup, and the edge-case handling — all of this ships in every report appendix.

#' # Telecom Customer Churn & Retention Analysis
#' Uses **binary logistic regression** to model the probability that a
#' customer churns, and surfaces the key drivers — contract type, tenure,
#' monthly charges, and service add-ons — with odds ratios and a ranked
#' feature-importance view.
#'
#' ## Why Logistic Regression?
#' Logistic regression produces directly interpretable odds ratios for
#' each predictor, making it ideal for explaining churn to non-technical
#' stakeholders.

compute_shared <- function(df, params) {
  #' ### Step 1: Parameter Setup and Data Cleaning
  confidence_level <- params$confidence_level %||% 0.95
  top_n <- as.integer(params$top_n_features %||% 10L)

  # Coerce total_charges — may arrive as character with blank strings
  df$total_charges <- suppressWarnings(as.numeric(df$total_charges))
  df <- df[!is.na(df$monthly_charges) & !is.na(df$tenure), ]
  df$churn_binary <- as.integer(trimws(as.character(df$churn)) == "Yes")

  #' ### Step 3: Logistic Regression Model
  #' We fit a binary logistic regression with 15 predictors. The model is
  #' wrapped in `tryCatch()` to handle edge cases gracefully.
  model <- tryCatch(
    glm(churn ~ ., data = model_df, family = binomial(link = "logit")),
    error = function(e) {
      message("Logistic regression failed: ", e$message)
      NULL
    }
  )
  ...
}

From analytics__telecom__churn__customer_retention — full file: 584 lines, ships in every report appendix.
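The parameter defaults in the excerpt rely on R's null-coalescing operator. Base R ships `%||%` from version 4.4.0; on older versions you can define it in one line (rlang exports an equivalent). A minimal sketch of the pattern:

```r
# Null-coalescing fallback: use the right-hand side when the left is NULL.
# Definition only needed on R < 4.4.0; rlang provides the same operator.
`%||%` <- function(x, y) if (is.null(x)) y else x

params <- list(confidence_level = NULL)       # explicit NULL
confidence_level <- params$confidence_level %||% 0.95   # -> 0.95
top_n <- as.integer(params$top_n_features %||% 10L)     # missing element is NULL -> 10L
```

Indexing a list with `$` for a missing element returns `NULL`, so the same expression covers both "not supplied" and "supplied as NULL".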

How to find it in your report

The code appendix is at the bottom of every report. Three steps:

1. Open any report

Run an analysis or open a sample report. The interactive HTML loads in your browser.

2. Scroll to the bottom

Past all the charts and interpretations, the "Show the Code" appendix is the final section.

3. Copy or expand

Click any function to expand it. Copy the code into your own R session and run it on the same data.
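The copy-and-run step can be simulated end to end in a few lines. This sketch writes a toy "appendix" script to a temp file, sources it, and runs it on a small data frame — the file name and the toy `compute_shared()` body are illustrative, not the production module:

```r
# Step 3 in miniature: source the appendix code, then run it on your data.
appendix <- tempfile(fileext = ".R")   # stand-in for the copied appendix
writeLines(c(
  "compute_shared <- function(df, params) {",
  "  list(n = nrow(df), churn_rate = mean(df$churn == 'Yes'))",
  "}"
), appendix)

source(appendix)                                       # load the functions
df <- data.frame(churn = c("Yes", "No", "No", "Yes"))  # stand-in for your CSV
shared <- compute_shared(df, params = list())
shared$churn_rate                                      # 0.5 on this toy data
```

With the real appendix, `source()` the copied file and pass your own CSV via `read.csv()` instead of the inline data frame.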

Standards we hold every module to

Every module that ships through the build pipeline is checked against these standards before it can deploy. The verifier reads the source and rejects it if any of these are missing.

  • Literate #' comments Every step of the analysis has prose explaining what it does and why — not just code, but a guided walkthrough of the methodology.
  • Edge case guards on every model fit Zero-variance columns, empty data frames, model fitting failures — all handled with tryCatch() instead of crashing.
  • No cat() in module code Standard output is reserved for the JSON return pipe. Debug messages use message() so they don't corrupt the report payload.
  • seq_len() not 1:n When n is zero, 1:0 gives c(1, 0) — two iterations over data that doesn't exist. seq_len(0) gives an empty vector. Small fix, big difference.
  • No hardcoded column names Modules use semantic column names from the column mapping. They work on any user's data, not just the test fixture.
  • Each function does one thing A compute_shared() function does the heavy stats once. Card functions read from it. No duplication, no side effects.
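Two of these standards are easy to demonstrate in a few lines. The `emit_json()` helper below is an illustrative name, not the production API:

```r
# seq_len() is safe when n is zero; 1:n is not.
n <- 0
1:n          # c(1, 0): a descending sequence, two spurious iterations
seq_len(n)   # integer(0): a loop over it never runs

# Diagnostics go to stderr via message(), keeping stdout clean for the
# JSON return pipe.
emit_json <- function(payload) {
  message("emitting ", nchar(payload), " characters")  # stderr: safe to log
  cat(payload, "\n")                                   # stdout: payload only
}
```

Separating the streams means a consumer parsing stdout as JSON never sees debug chatter, and `suppressMessages()` can silence the diagnostics without touching the payload.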

See the code in a real report

Open any sample report and scroll to the appendix. Or run your own CSV and read the source for your specific analysis.

Analyze your CSV →