Getting Started with OTEL + Statsig

This guide helps you set up OpenTelemetry and send telemetry to Statsig so you can use Infra Analytics (Logs Explorer, Metrics Explorer, Alerts).

There are two common paths:

  • Kubernetes/OpenTelemetry Collector: scrape logs and metrics from your cluster and export to Statsig. See Open Telemetry Logs and Metrics for a more complete guide.
  • Applications: export traces, metrics, and logs directly from your app over OTLP/HTTP to Statsig or to your collector. See the quick starts below.

Endpoint & Auth
  • Endpoint: https://api.statsig.com/otlp
  • Auth header: statsig-api-key: <your Server SDK Secret key>
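
As a quick connectivity check, you can POST a minimal OTLP/JSON payload straight to the logs endpoint with curl. This is only a sketch: the record below is hand-written, and the backend may expect a timeUnixNano timestamp on real log records.

curl -sS https://api.statsig.com/otlp/v1/logs \
  -H 'Content-Type: application/json' \
  -H "statsig-api-key: $STATSIG_SERVER_SDK_SECRET" \
  -d '{
    "resourceLogs": [{
      "resource": { "attributes": [{ "key": "service.name", "value": { "stringValue": "curl-check" } }] },
      "scopeLogs": [{ "logRecords": [{ "severityText": "INFO", "body": { "stringValue": "hello from curl" } }] }]
    }]
  }'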

Application Telemetry quick starts

Install dependencies:

npm install --save \
@opentelemetry/sdk-node \
@opentelemetry/auto-instrumentations-node \
@opentelemetry/exporter-trace-otlp-http \
@opentelemetry/exporter-metrics-otlp-http \
@opentelemetry/api-logs \
@opentelemetry/sdk-logs \
@opentelemetry/exporter-logs-otlp-http \
@opentelemetry/resources \
@opentelemetry/semantic-conventions

Initialize OpenTelemetry (e.g., instrumentation.js):

// instrumentation.js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { resourceFromAttributes } = require('@opentelemetry/resources');
const { ATTR_SERVICE_NAME, ATTR_SERVICE_VERSION } = require('@opentelemetry/semantic-conventions');

// import if you want to enable traces
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');

// For troubleshooting, set the log level to DiagLogLevel.DEBUG
// const { diag, DiagConsoleLogger, DiagLogLevel } = require('@opentelemetry/api');
// diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

const statsigKey = process.env.STATSIG_SERVER_SDK_SECRET;
const headers = { 'statsig-api-key': statsigKey ?? '' };

const sdk = new NodeSDK({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: process.env.OTEL_SERVICE_NAME || 'statsig-node-service',
    [ATTR_SERVICE_VERSION]: process.env.VERSION || '1',
    env: process.env.NODE_ENV || 'development',
  }),
  // Optional: enable traces if you want to try out tracing
  // traceExporter: new OTLPTraceExporter({
  //   url: 'https://api.statsig.com/otlp/v1/traces',
  //   headers,
  // }),
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: 'https://api.statsig.com/otlp/v1/metrics',
      // or
      // url: <your-collector-endpoint>/v1/metrics
      headers,
    }),
    exportIntervalMillis: 60000,
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

To set up application logs with OTel, you can use the Pino or Winston bridges. The example below uses Pino with the Pino auto-instrumentation.

Install the pino instrumentation:

npm i pino @opentelemetry/instrumentation-pino

// instrumentation.js (continued)
const { BatchLogRecordProcessor } = require('@opentelemetry/sdk-logs');
const { OTLPLogExporter } = require('@opentelemetry/exporter-logs-otlp-http');
const { PinoInstrumentation } = require('@opentelemetry/instrumentation-pino');

// Reuse the `headers` defined above and add log support to the NodeSDK config:
const sdk = new NodeSDK({
  // ... other config (resource, metricReader, etc.) ...
  logRecordProcessors: [
    new BatchLogRecordProcessor(
      new OTLPLogExporter({
        url: 'https://api.statsig.com/otlp/v1/logs',
        // or
        // url: <your-collector-endpoint>/v1/logs
        headers,
      })
    ),
  ],
  instrumentations: [getNodeAutoInstrumentations(), new PinoInstrumentation()],
});


// in your application code, e.g., app.js
const pino = require('pino');

const logger = pino();

logger.info('OTel logs initialized');

The Statsig SDK also supports forwarding logs to Logs Explorer; see the alternative logging example below.

// Requires: npm i @statsig/statsig-node-core
const { Statsig, StatsigUser } = require('@statsig/statsig-node-core');

const s = new Statsig(process.env.STATSIG_SERVER_SDK_SECRET);
await s.initialize(); // note: top-level await requires ESM; in CommonJS, call initialize() from an async function

const user = new StatsigUser({
  userID: 'a-user',
  custom: { service: process.env.OTEL_SERVICE_NAME || 'my-node-service' },
});

// levels: trace, debug, info, log, warn, error
s.forwardLogLineEvent(user, 'info', 'service started', { version: process.env.npm_package_version });

try {
  // your app code
} catch (err) {
  s.forwardLogLineEvent(user, 'error', 'unhandled error', {
    message: String(err?.message || err),
    stack: err?.stack,
  });
}

Run your service:

Make sure you require or import instrumentation.js before any other application code so instrumentation is set up correctly.

STATSIG_SERVER_SDK_SECRET=YOUR_SECRET \
OTEL_SERVICE_NAME=my-node-service \
node -r ./instrumentation.js app.js

Tip: you can configure exporters via env instead of code:

  • OTEL_EXPORTER_OTLP_ENDPOINT=https://api.statsig.com/otlp
  • OTEL_EXPORTER_OTLP_HEADERS=statsig-api-key=${STATSIG_SERVER_SDK_SECRET}
  • OTEL_EXPORTER_OTLP_PROTOCOL=http/json
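
For example, the same run command can be driven entirely by environment variables instead of passing url/headers to the exporters in code (a sketch; note that explicit options set in instrumentation.js take precedence over these variables):

export STATSIG_SERVER_SDK_SECRET=YOUR_SECRET
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.statsig.com/otlp
export OTEL_EXPORTER_OTLP_HEADERS="statsig-api-key=${STATSIG_SERVER_SDK_SECRET}"
export OTEL_EXPORTER_OTLP_PROTOCOL=http/json
export OTEL_SERVICE_NAME=my-node-service
node -r ./instrumentation.js app.js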

Collector quick starts

Running a Collector is optional but recommended for production workloads.

Use the OpenTelemetry Collector as a gateway to receive OTLP from your applications and forward to Statsig. This is useful if you want to centralize telemetry collection, add advanced sampling methods like tail-based sampling, or scrape logs/metrics from hosts or Kubernetes.

Create a minimal values.yaml for the OpenTelemetry Collector that forwards all signals (traces, metrics, logs) to Statsig:

values.yaml
mode: deployment  # the opentelemetry-collector chart requires a mode; a Deployment works as a gateway

config:
  receivers:
    otlp:
      protocols:
        http:
        grpc:

  processors:
    batch: {}

  exporters:
    otlphttp:
      endpoint: https://api.statsig.com/otlp
      encoding: json
      headers:
        statsig-api-key: ${env:STATSIG_SERVER_SDK_SECRET}

  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlphttp]
      metrics:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlphttp]
      logs:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlphttp]

Install the Collector with Helm:

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm install otel-gateway open-telemetry/opentelemetry-collector \
-n otel --create-namespace \
-f values.yaml

Provide the Statsig key as an environment variable to the Collector pods (for example via a Secret and envFrom). Your applications then send OTLP to the in-cluster Collector endpoint (for example http://otel-gateway-collector.otel.svc.cluster.local:4318).
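
One way to provide the key (a sketch; the Secret name is arbitrary, and extraEnvs is the opentelemetry-collector chart's hook for adding environment variables to the pods):

kubectl -n otel create secret generic statsig-otlp \
  --from-literal=STATSIG_SERVER_SDK_SECRET=YOUR_SECRET

values.yaml (addition)
extraEnvs:
  - name: STATSIG_SERVER_SDK_SECRET
    valueFrom:
      secretKeyRef:
        name: statsig-otlp
        key: STATSIG_SERVER_SDK_SECRET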

For a production setup that also scrapes Kubernetes logs/metrics, see the full guide: Open Telemetry Logs and Metrics.

Version requirement

The encoding: json option in the OTLP HTTP exporter requires Collector v0.95.0 or newer. If you pin the image via Helm values, set image.tag: "0.95.0" (or newer).


Common Collector Configs (K8s & Docker)

The following examples show popular receivers/processors you can enable in your Collector and still export to Statsig via the same otlphttp exporter.

Note: These components live in the contrib distribution. Use an image that includes them:

  • Docker: use the otel/opentelemetry-collector-contrib image
  • Helm: set image.repository: otel/opentelemetry-collector-contrib (and a compatible image.tag)

Helm values (contrib image):

values.yaml
image:
  repository: otel/opentelemetry-collector-contrib
  tag: "latest"
  pullPolicy: IfNotPresent

A. File logs (filelog receiver)

Reads and parses logs from files on disk. Useful for hosts, containers, or Kubernetes nodes.

Minimal example:

receivers:
  filelog:
    include: [ /var/log/myservice/*.json ]
    start_at: beginning
    operators:
      - type: json_parser
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%S%z'

processors:
  batch: {}

exporters:
  otlphttp:
    endpoint: https://api.statsig.com/otlp
    encoding: json
    headers:
      statsig-api-key: ${env:STATSIG_SERVER_SDK_SECRET}

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlphttp]

Kubernetes tip: to tail container logs on nodes, mount host paths (e.g., /var/log/pods and /var/lib/docker/containers) into the Collector DaemonSet and set include to those paths.
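
A sketch of the corresponding Helm values, assuming the opentelemetry-collector chart's daemonset mode and its extraVolumes/extraVolumeMounts hooks (log paths vary by container runtime):

values.yaml
mode: daemonset
extraVolumes:
  - name: varlogpods
    hostPath:
      path: /var/log/pods
extraVolumeMounts:
  - name: varlogpods
    mountPath: /var/log/pods
    readOnly: true
config:
  receivers:
    filelog:
      include: [ /var/log/pods/*/*/*.log ]
      start_at: beginning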

B. EC2 resource detection (resourcedetection processor)

Automatically adds AWS EC2 metadata (cloud provider, region/zone, instance id) to your telemetry.

processors:
  resourcedetection/ec2:
    detectors: [env, ec2]
    timeout: 2s
    override: false

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection/ec2, batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [resourcedetection/ec2, batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [resourcedetection/ec2, batch]
      exporters: [otlphttp]

Permissions: the Collector must be able to reach the EC2 metadata service (IMDS). Ensure network access to 169.254.169.254 and IMDSv2 where required.
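
A quick way to confirm IMDSv2 access from the host or pod running the Collector (a sketch using the standard IMDSv2 token flow):

TOKEN=$(curl -sS -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -sS -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id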

C. Docker container metrics (docker_stats receiver)

Emits container CPU, memory, network, and block IO metrics by querying the Docker daemon.

receivers:
  docker_stats:
    endpoint: unix:///var/run/docker.sock
    collection_interval: 15s

processors:
  batch: {}

exporters:
  otlphttp:
    endpoint: https://api.statsig.com/otlp
    encoding: json
    headers:
      statsig-api-key: ${env:STATSIG_SERVER_SDK_SECRET}

service:
  pipelines:
    metrics:
      receivers: [docker_stats]
      processors: [batch]
      exporters: [otlphttp]

Requirements:

  • Linux only (not supported on darwin/windows).
  • Mount the Docker socket into the Collector container: /var/run/docker.sock (see the run sketch below).
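
For example, when running the contrib image directly with Docker (a sketch; the config path assumes the image's default location, the config filename is arbitrary, and you may need to run as a user with access to the Docker socket):

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)/collector-config.yaml:/etc/otelcol-contrib/config.yaml" \
  -e STATSIG_SERVER_SDK_SECRET=YOUR_SECRET \
  otel/opentelemetry-collector-contrib:latest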

Resources