Offline A/A Tests
We’ve made running A/A tests at scale easy by setting up simulated A/A tests that run every day in the background for every company on the platform. An A/A test is like an A/B test, except both groups get the same experience. A/A tests help build trust in your experimentation platform (and your metrics!).
A/A tests can be Online or Offline. An Online A/A test is run on real users: an engineer instruments your app with the Statsig SDK to check for experiment assignment. Assignment is logged, but there's no difference in the user's experience.
Since there is no real effect, you expect to see only statistical noise. With 95% confidence intervals, only ~1 in 20 metrics should show a stat-sig difference between Control and Test.
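To make that concrete, a quick back-of-the-envelope calculation (the metric count below is hypothetical):

```python
# Under the null hypothesis, each metric independently has an
# alpha = 0.05 chance of crossing the 95% confidence threshold.
alpha = 0.05
num_metrics = 40  # hypothetical scorecard size

expected_false_positives = alpha * num_metrics
print(f"Expected stat-sig metrics by chance alone: {expected_false_positives:.0f}")
# -> Expected stat-sig metrics by chance alone: 2
```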
Offline A/A Tests
Each request runs on a single unit type, and an offline A/A test works by:
- Querying a representative sample of your data
- Randomly assigning subjects to Test or Control
- Computing relevant metrics for Test vs Control and running them through the stats engine
- Counting the percentage of stat-sig ("false positive") results. With a p-value cutoff of 0.05 (typical), you'd expect a ~5% false positive rate (see the sketch after this list).
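A minimal sketch of this flow, using synthetic metric data and a Welch's t-test as a stand-in for Statsig's stats engine (the data and sample sizes here are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical sample: 10,000 units, each with one value for a single metric.
# In a real offline A/A test this is a representative sample of your data.
num_units = 10_000
metric_values = rng.lognormal(mean=0.0, sigma=1.0, size=num_units)

def run_aa_test(values, rng):
    """Randomly assign units to Test/Control and compare the metric."""
    in_test = rng.random(len(values)) < 0.5          # 50/50 random assignment
    test, control = values[in_test], values[~in_test]
    _, p_value = stats.ttest_ind(test, control, equal_var=False)  # Welch's t-test
    return p_value

# Run many simulated A/A tests (the platform runs 100 per request).
p_values = np.array([run_aa_test(metric_values, rng) for _ in range(100)])

false_positive_rate = (p_values < 0.05).mean()
print(f"False positive rate: {false_positive_rate:.0%}")  # should land near 5%
```

Because both groups are drawn from the same population, roughly 5% of these tests come back stat-sig at a 0.05 cutoff, which is exactly the calibration the offline A/A tests check.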
You can download the running history of your simulated A/A test performance via the “Tools” menu in your Statsig Console. We run 100 tests per request.
File Description
| Column Name | Description |
|---|---|
| metric_name | Name of the metric |
| metric_type | Type of the metric |
| unit_type | The unit used to randomize (e.g. userID) |
| n_tests | The number of tests run |
| pct_ss_95_pct_confidence | The percentage of tests with a stat-sig result for this metric at 95% confidence |
| avg_units_per_test | The average number of units (often users) sampled into each A/A test |
| avg_participating_units_per_test | The average number of units per test with a value for this metric |
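If you want to audit the export yourself, here is a sketch of loading it and flagging metrics whose stat-sig rate drifts above the expected ~5% (the filename is hypothetical, and this assumes pct_ss_95_pct_confidence is on a 0–100 scale):

```python
import pandas as pd

# Hypothetical filename; use whatever the "Tools" menu export gives you.
df = pd.read_csv("aa_test_history.csv")

# For a well-calibrated stats engine, the stat-sig rate at 95% confidence
# should hover around 5% for every metric and unit type.
summary = (
    df.groupby(["metric_name", "unit_type"])
      .agg(
          total_tests=("n_tests", "sum"),
          avg_pct_stat_sig=("pct_ss_95_pct_confidence", "mean"),
      )
      .sort_values("avg_pct_stat_sig", ascending=False)
)

# Metrics sitting well above 5% deserve a closer look.
print(summary[summary["avg_pct_stat_sig"] > 7.5])
```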