FAQ

SDKs and APIs

How does bucketing within the Statsig SDKs work?

Bucketing in Statsig is deterministic. Given the same user object and the same state of the experiment or feature gate, Statsig always returns the same result, even when evaluated on different platforms (client or server). Here's how it works:

  1. Salt Creation: Each experiment or feature gate rule generates a unique salt.
  2. Hashing: The user identifier (e.g., userID, organizationId) is combined with the salt and passed through a SHA256 hash function, producing a large integer.
  3. Bucket Assignment: That integer is taken modulo 10000 (or 1000 for layers).
  4. Bucket Determination: The result is the specific bucket, out of 10000 (or 1000 for layers), in which the user is placed.
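
As a rough illustration in Node (a simplified sketch of the scheme above, not the SDKs' exact byte-for-byte algorithm):

const crypto = require('crypto');

function getBucket(salt, unitID, numBuckets = 10000) {
  // Hash the per-rule salt together with the unit ID (e.g., userID).
  const hash = crypto.createHash('sha256').update(`${salt}.${unitID}`).digest();
  // Interpret the leading 8 bytes as a large integer, then take the modulus.
  return Number(hash.readBigUInt64BE(0) % BigInt(numBuckets));
}

// Same inputs => same bucket, on any platform, with no stored state.
getBucket('my_experiment_salt', 'user-42'); // a stable value in [0, 9999]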

This process ensures randomized but deterministic bucketing of users across experiments and feature gates. The unique salt per experiment or feature gate rule means the same user can be assigned to different buckets in different experiments. It also means that if you roll out a feature gate rule to 50%, back to 0%, and then back to 50%, the same 50% of users will be re-exposed, so long as you reuse the same rule rather than creating a new one. See here.

People often assume that we keep a list of all IDs and the group each was assigned to in an experiment, or which IDs passed a certain feature gate. While our data pipelines do track which users were exposed to which experiment variant in order to generate experiment results, we do not cache previous evaluations or maintain distributed evaluation state across client and server SDKs. That model doesn't scale: we've talked to customers who used an implementation like that in the past, and they were paying more for a Redis instance to maintain that state than they ended up paying to use Statsig instead.

For more details, check our open-source SDKs here.


Is it possible to add a layer to a running experiment?

No. Once an experiment is started, you cannot change the layer. This restriction ensures the integrity of the experiment. We may support this feature in the future.


Can you change an experiment or gate name after creating it?

No. We've intentionally decided not to allow any Statsig config (Feature Gate, Experiment, Layer, etc.) to be renamed, as renaming a config that is already integrated in your code can have serious undesirable consequences. The exception is Metrics, which have display names that are not used in code.


Why should I define parameters for my experiments instead of just getting the group?

Defining parameters for experiments provides flexibility and speed in iteration. Many companies, such as Facebook, Uber, and Airbnb, follow this approach in their experimentation platforms because it allows:

  • Faster iteration (no code changes required for new experiments).
  • More flexible experiment designs.

For example:

Without Parameters (Group-based):

const group = otherExpEngine.getExperiment('button_color_test').getGroup();
if (group === 'Control') {
  color = 'BLACK';
} else if (group === 'Blue') {
  color = 'BLUE';
}

With Parameters (Statsig approach):

color = statsig.getExperiment('button_color_test').getString('button_color', 'BLACK');

In the first case, adding a new color (e.g., "Green") requires a code change. In the second case, you can modify the experiment configuration without making a code change.


Why am I not seeing my exposures and custom events logged in Statsig?

In short-lived processes (e.g., scripts or edge workers), the process may exit before the event queue flushes to Statsig. To ensure that exposures and events are logged, call statsig.flush() before the process exits.
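
For example, in a short-lived Node.js script (a minimal sketch assuming the statsig-node server SDK; the key and event name are placeholders, and exact signatures vary by SDK version):

const statsig = require('statsig-node');

async function main() {
  await statsig.initialize('server-secret-key');

  const user = { userID: 'user-123' };
  statsig.logEvent(user, 'script_completed'); // queued in memory, not yet sent

  // Flush the queue before the process exits, or these events may be lost.
  await statsig.flush();
}

main();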

For details on flushing, check the Node.js Server SDK documentation.


I don't see my client or server language listed. Can I still use Statsig?

If none of our current SDKs meet your needs, please let us know via our Slack community!


How do I get all exposures for a user?

If you're interested in historical exposures, the console's users tab may serve your needs.

If you need all hypothetical assignments, consider the getClientInitializeResponse server SDK method. Statsig's SDKs should ideally be invoked at the time you're serving an experiment treatment, so that an exposure can be logged. If that's not possible in your case (perhaps you need to pass assignment information to other applications, or to use assignment information as cache keys for the CDN + edge), this approach could work.

Example of capturing all assignments in Node

Note: this method is designed to bootstrap client SDKs, and as such it hashes the experiment and feature gate keys returned in the payload, obfuscating their names for security. You can provide an optional hash parameter to disable hashing and capture all values in plain text: Node, Python, Java, Go.

const assignments = statsig.getClientInitializeResponse(userObj, "client-key", {hash: "none"});

What happens if I check a config with a non-existent name?

You'll receive default values: false for feature flags, and the in-code defaults for experiment or layer parameters. Expect to see "Unrecognized" evaluation reasons - see our Debugging Section. The behavior is the same whether the config never existed, was deleted, was archived, or is invisible to your current SDK instance because of target apps.
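
A sketch using the client SDK style from the earlier example (the config names here are made up):

const passed = statsig.checkGate('gate_that_does_not_exist'); // false
const color = statsig
  .getExperiment('experiment_that_does_not_exist')
  .getString('button_color', 'BLACK'); // returns the in-code default, 'BLACK'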


Feature Flags

When I change the rollout percentage of a rule on a feature gate, will users who passed continue to pass?

Yes. If you increase the rollout percentage (e.g., from 10% to 20%), the original 10% will continue to pass, while an additional 10% will start passing. Reducing the percentage back to 10% restores the original 10%. The same holds if you reduce and then re-increase the pass percentage. To reshuffle users, you'll need to "resalt" the gate.

This is only true within the same rule on a gate; if you create a new rule with the same pass percentage as another, it will pass a different set of users.
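
This stickiness falls out of the deterministic bucketing described earlier: a user's bucket for a given rule never changes, only the passing threshold moves. A simplified illustration (not the SDKs' exact logic):

function passesRollout(bucket, rolloutPercent) {
  // With 10000 buckets, a 10% rollout passes buckets 0-999; raising it to 20%
  // adds buckets 1000-1999 without moving anyone who already passed.
  return bucket < rolloutPercent * 100;
}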

Note - today, increasing the allocation percentage of an experiment is not guaranteed to behave the same way as the above. If you'd like dependably deterministic allocations, we recommend using targeting gates.


Statistics

What statistical tests does Statsig use?

Statsig uses a two-sample Z test for most experiments and Welch’s t-test for smaller sample sizes. These methods are industry-standard and have been validated through simulations and research.
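
For reference, the two-sample z-statistic for a difference in means takes the standard textbook form (shown for intuition; Statsig's internal computation may differ in details):

$$ Z = \frac{\bar{x}_T - \bar{x}_C}{\sqrt{\frac{s_T^2}{n_T} + \frac{s_C^2}{n_C}}} $$

where $\bar{x}$, $s^2$, and $n$ are the sample mean, variance, and size of the test (T) and control (C) groups.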


How does Statsig handle low sample size?

For small samples, we use Welch's t-test, which is more suitable for unequal group sizes or variances. We also support CUPED and winsorization to increase test power.
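
Welch's t-test uses the same statistic as above but compares it against a t-distribution with the Welch-Satterthwaite degrees of freedom (standard textbook form, shown for reference):

$$ \nu \approx \frac{\left(\frac{s_T^2}{n_T} + \frac{s_C^2}{n_C}\right)^2}{\frac{(s_T^2/n_T)^2}{n_T - 1} + \frac{(s_C^2/n_C)^2}{n_C - 1}} $$

which is what makes the test robust to unequal variances and group sizes.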


When should I use one-sided vs two-sided tests?

Use a one-sided test if you're confident that you're only interested in movement in a specific direction. This increases the power of the test but sacrifices insight into movement in the opposite direction.


Experimentation

How can I get started with an A/B Test?

If your feature isn't live yet, you can start an A/B test using a feature flag. If it's already in production, you can create an experiment. Results can be viewed in the Pulse Results tab.


Can I target my experiment to a subset of users (e.g., iOS users only)?

Yes, you can. When setting up your experiment, select a Feature Flag with targeting rules. In this case, only iOS users would pass the gate.

Targeting iOS Users


Billing

What counts as a billable event?

Billable events occur when the Statsig SDK checks whether a user is exposed to a feature flag or experiment, or when it logs an event. Pre-computed metrics from data warehouses and custom metrics created from existing data also count as billable.


How do I manage my billable event volume?

  1. Download a CSV from the Usage and Billing tab to review events contributing to your volume.
  2. Create a pivot table in Excel to identify the top event volume drivers.
  3. Admins receive proactive alerts at 25/50/75/100% of their contracted events.

Billing Usage


How Many Projects Can I Create With a Single Pro Subscription?

Pro subscriptions are limited to one project each. You can create more projects within Statsig, but if you want to have access to pro features and 5M events, you will need to upgrade each project independently. You can read more about our Pro plans here.

Enterprise plans can support multiple projects. If you're interested in this, contact us here!


Platform Usability

When should I create a new project?

Projects have distinct boundaries. If you're using the same userIDs and metrics across surfaces, apps or environments, put them in the same project. Create a new project when you're managing a separate product with unique user IDs and metrics.

For example, if you have a marketing website (anonymous users) and a product (signed-in users), you may want to separate them. However, if you want to track success across both (e.g., from user signup on the marketing website to user engagement within the product), you should manage them in the same project.

Some reasons to NOT create a new project

  • to segregate by environment. Statsig has rich support for environments - you can even customize these. You can turn features or experiments on and off by environment.
  • to segregate by platform. If you have an iOS app and a Web app, it's helpful to have both collect data in the same project and capture metadata on platform. This lets you look at data by platform, and also understand whether you've increased the overall metric or just cannibalized users (pushed the same users from one platform to the other).