Running a Proof of Concept

Introduction

At Statsig, we encourage customers to try us out by running a proof of concept (POC) or a limited test. Typically, this involves implementing 1-2 low-risk features or experiments. This guide suggests some planning steps to assist in running an effective proof of concept.

Steps to Running an Effective Proof of Concept with Statsig

Running a meaningful proof of concept requires some planning.

(Figure: POC timeline)

1. Plan: Define Your Overall Goals & Measures of Success

First, determine why you're running a POC and how you'll measure its success. A solution is only as good as the problem it solves, so make sure your success measures reflect the problem you're targeting.

Examples:

  • “We want to start building out an experimentation platform and have a few ideas for early test experiments.”
  • “Our internal feature flagging tool doesn’t have scheduled rollouts, and we can't measure the impact. We'd like to test Statsig’s feature flagging solution on our new web service.”
  • “I want to see if Statsig’s stats engine will free up data scientist time for more sophisticated analysis.”

Once you’ve set your goals, target 1-2 low-risk features or experiments to validate the platform.

Examples:

  • “I want to test two new website layouts against the current one through an A/B/C test to validate an increase in user engagement.” (A code sketch of this example follows the list.)
  • “We want to roll out a new search algorithm and measure its performance impact.”
  • “We’ve run experiments before and want to validate their results using your stats engine.”
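
To make the first example concrete, here is a minimal sketch of what the experiment check could look like in application code, assuming the statsig-node server SDK; the experiment name, parameter, and group values are hypothetical and would be configured in the Statsig console.

```typescript
import Statsig from 'statsig-node';

// Initialize once at startup with a server secret key from the console.
await Statsig.initialize(process.env.STATSIG_SERVER_SECRET ?? '');

async function getHomepageLayout(userID: string): Promise<string> {
  // Hypothetical A/B/C experiment configured in the Statsig console
  // with three groups: control, layout_b, layout_c.
  const experiment = await Statsig.getExperiment(
    { userID },
    'homepage_layout_test'
  );
  // Users not in the experiment fall back to the current layout.
  return experiment.get('layout', 'control');
}
```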

2. Phase 0: Scope & Prepare Your POC

Now that you have clear goals and success measures, it's time to plan out how to achieve them.

When choosing what to POC, consider:

  • What path allows for easy Statsig implementation while testing the necessary capabilities?
  • What does your near-term experimentation roadmap look like?
  • Are there common testing areas in your app that would be a good fit?

Who will work on this POC?

A Statsig POC is collaborative, typically involving data, engineering, and product stakeholders. In smaller organizations, one person may handle multiple roles.

Typical Role Breakdown:

  • Engineer/Data: Responsible for implementation and orchestration, including SDK integration, data pipelining, and analysis.
  • Product: In charge of planning and executing features/experiments and validating test results.

Consider the Tech Stack: Make sure the parts of your tech stack that will host the solution are supported by Statsig, whether client, server, or mobile applications.

Define the Timeline: We recommend a timeline of 2-4 weeks for the proof of concept.

Example Timeline:

  • Weeks 1-2:

    • Instrument the Statsig SDK into your app.
    • Set up metric ingestion pipelines (see the event-logging sketch after this timeline).
    • Begin rollouts.
  • Weeks 3-4:

    • Finalize rollouts.
    • Collect data.
    • Analyze results.
    • Finalize the POC.
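
For the metric ingestion step, one lightweight option during a POC is to log custom events directly through the SDK rather than standing up a full warehouse pipeline. A minimal sketch, again assuming the statsig-node server SDK; the event name, value, and metadata are illustrative only.

```typescript
import Statsig from 'statsig-node';

// Log a custom event that Statsig can aggregate into a metric for
// experiment results. The event name and metadata are made up here.
function recordSearchLatency(userID: string, latencyMs: number): void {
  Statsig.logEvent({ userID }, 'search_latency', latencyMs, {
    endpoint: '/api/search',
  });
}
```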

Get Help When You Need It: Join our Slack community to interact with engineers and data scientists, or browse our documentation.

Sandboxing: Sign up for a free account and start tinkering with the platform before the formal POC begins.

3. Phase 1: Implementation Steps

Now that you're familiar with the platform, it's time to implement Statsig.

Here’s a general overview of the Statsig platform:

(Diagram: example application architecture with Statsig)
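
As a concrete starting point, here is a minimal sketch of instrumenting a server application, assuming the statsig-node server SDK; the gate name and environment variable below are placeholders.

```typescript
import Statsig from 'statsig-node';

async function main(): Promise<void> {
  // Initialize once at startup with a server secret key from the console.
  await Statsig.initialize(process.env.STATSIG_SERVER_SECRET ?? '');

  // Wrap the new code path in a feature gate so it can be rolled out
  // gradually (and rolled back) from the Statsig console.
  const user = { userID: 'user-123' };
  if (await Statsig.checkGate(user, 'new_search_algorithm')) {
    // New behavior under test.
  } else {
    // Existing behavior.
  }
}

main().catch(console.error);
```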

4. Phase 2: Rollout & Test

Deploy your changes through your internal CI/CD pipeline or equivalent. After deployment, monitor exposure data in the Diagnostics tab of each gate or experiment to confirm that checks are being evaluated as expected.
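
If exposures don't appear, a common culprit in short-lived processes (cron jobs, serverless functions) is exiting before queued events are flushed. A small sketch, assuming the statsig-node server SDK; the exact flush/shutdown call may vary by SDK and version.

```typescript
import Statsig from 'statsig-node';

// Flush queued exposure events before the process exits so they show
// up in the Diagnostics tab; otherwise they may be silently dropped.
process.on('beforeExit', () => {
  Statsig.shutdown();
});
```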

5. Phase 3: Collect Data & Read Results

After rolling out Statsig, allow roughly two weeks for data collection. Use the metrics dashboard to track progress, and confirm that your health checks are passing.

6. Phase 4: Validate Statsig’s Engine

We encourage you to validate Statsig's results by exporting the underlying data and re-running the analysis with your internal tools, as sketched below.
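
As one way to do this, export per-group metric summaries and recompute the lift and significance yourself. The sketch below is a plain Welch's t-test with a normal approximation, not a re-implementation of Statsig's stats engine (which may apply additional techniques such as variance reduction); it is only meant as a sanity check that the numbers land in the same ballpark.

```typescript
// Per-group summary computed from your exported data.
interface GroupStats {
  n: number;        // sample size
  mean: number;     // metric mean
  variance: number; // sample variance of the metric
}

// Welch's t-test with a normal (z) approximation, fine for large samples.
function welchZ(control: GroupStats, test: GroupStats) {
  const lift = test.mean - control.mean;
  const se = Math.sqrt(control.variance / control.n + test.variance / test.n);
  const z = lift / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z))); // two-sided
  return { lift, z, pValue };
}

// Standard normal CDF via a numeric approximation of erf
// (Abramowitz & Stegun 7.1.26, max error ~1.5e-7).
function normalCdf(x: number): number {
  const erf = (v: number): number => {
    const sign = v < 0 ? -1 : 1;
    const t = 1 / (1 + 0.3275911 * Math.abs(v));
    const y =
      1 -
      ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
        0.284496736) * t + 0.254829592) * t * Math.exp(-v * v);
    return sign * y;
  };
  return 0.5 * (1 + erf(x / Math.SQRT2));
}
```

If your recomputed lift and p-value land close to what the console reports, that is a good signal; exact agreement isn't expected given differences in methodology.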

Next Steps

If your evaluation shows that Statsig is a good fit, follow our guide to roll out Statsig to production.