
Data Drift

Measures the degree to which each sample in a new dataset has drifted from the reference (training) distribution, detecting covariate shift at both the dataset and per-sample level.
Tags:

Data Quality

Overview

The Data Drift evaluation measures the degree to which samples in a new dataset have drifted from a reference distribution established at training time.

Metrics

Dataset Drift

The fraction of production samples that are consistent with the reference distribution (range: 0.0 to 1.0).

Dataset Drift (range: 0.0 - 1.0)

| Score value | Explanation |
| --- | --- |
| 0.0 | The production distribution is entirely disjoint from the reference - the model is operating on completely unseen input types. Retraining is required. |
| 0.7 | 30% of production samples have drifted - significant covariate shift. Model performance is likely degraded on a substantial fraction of production inputs. |
| 0.9 | 10% of production samples have drifted - moderate shift. Investigate the drifted subpopulation to assess business impact before deciding on remediation. |
| 1.0 | No drift detected - the production distribution is consistent with the reference distribution. |
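The dataset-level score is simply the fraction of production samples whose per-sample score clears an in-distribution cutoff. A minimal sketch, assuming per-sample scores in [0, 1] and an illustrative 0.5 threshold (the platform's actual cutoff is not documented here):

```python
def dataset_drift_score(per_sample_scores, threshold=0.5):
    """Fraction of production samples considered in-distribution.

    `per_sample_scores` are per-sample drift scores in [0, 1];
    `threshold` is an assumed cutoff below which a sample counts
    as drifted (0.5 is illustrative, not the platform's value).
    """
    if not per_sample_scores:
        raise ValueError("need at least one per-sample score")
    in_dist = sum(1 for s in per_sample_scores if s >= threshold)
    return in_dist / len(per_sample_scores)
```

For example, nine in-distribution samples and one drifted sample yield a dataset drift score of 0.9, matching the "10% drifted" row in the table above.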

Motivation

Data drift occurs when the statistical distribution of model inputs changes between training and production. Even when the underlying labelling function is unchanged, a model trained on the original distribution can degrade substantially on drifted inputs, because its learned decision boundaries are calibrated to training-time patterns it no longer reliably encounters.

Drift accumulates silently: seasonal patterns, changes in user behaviour, upstream pipeline changes, or the natural evolution of a domain can all shift the input distribution without triggering any visible error. Without a systematic measure, degradation goes undetected until it manifests as measurable business impact.

This evaluation quantifies drift at the dataset level - what fraction of production samples have shifted outside the reference distribution - making it possible to decide whether to retrain, adjust thresholds, or route drifted inputs to a fallback system.

Methodology

  1. Samples: Each production sample is scored independently against the reference distribution.
  2. Scoring: Each sample is assessed by the Distribution Shift Scorer, which estimates the likelihood that the sample was drawn from the reference distribution using a density ratio estimator trained on reference and production samples. A score of 1.0 means the sample is well within the reference distribution; 0.0 means it falls in a region the reference distribution does not cover.

The dataset drift metric summarises the fraction of production samples that are in-distribution.
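The density-ratio step can be illustrated with a deliberately simple parametric stand-in: fit a Gaussian to the reference and production samples and compute p_ref(x) / p_prod(x) for each production sample. This is a toy sketch of the general idea, not the platform's Distribution Shift Scorer, whose estimator is not specified beyond "density ratio estimator":

```python
import numpy as np

def fit_gaussian(x):
    # MLE mean and standard deviation of a 1-D sample
    # (std floor avoids division by zero for degenerate samples)
    return x.mean(), max(x.std(), 1e-6)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def density_ratio_scores(reference, production):
    """Per-sample ratio p_ref(x) / p_prod(x) for each production sample,
    under a toy Gaussian model of both distributions.

    Ratios near 1 mean the reference density covers the sample about as
    well as the production density does; ratios near 0 mean the sample
    falls in a region the reference distribution barely covers.
    """
    mu_r, sd_r = fit_gaussian(reference)
    mu_p, sd_p = fit_gaussian(production)
    ref_density = normal_pdf(production, mu_r, sd_r)
    prod_density = np.maximum(normal_pdf(production, mu_p, sd_p), 1e-12)
    return ref_density / prod_density
```

With production samples drawn from the same distribution as the reference, the ratios cluster around 1; with production samples drawn far outside the reference, they collapse toward 0.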

Scoring

Distribution Shift Scorer

| Score value | Explanation |
| --- | --- |
| 1.0 | The sample is consistent with the reference distribution - its density ratio is within the expected range for in-distribution samples. |
| 0.5 | The sample lies in a low-density region of the reference distribution - marginal drift detected. The model may have limited training coverage for this type of input. |
| 0.0 | The sample is outside the reference distribution - significant covariate shift detected. The model's predictions for this input are unreliable extrapolations. |
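The bucketing in this table amounts to thresholding the density ratio. A sketch, where the 0.5 in-distribution threshold is taken from the failing example on this page and the lower cutoff `low` is a hypothetical value chosen for illustration:

```python
def shift_score(density_ratio, low=0.1, high=0.5):
    """Map a density ratio to the three scorer buckets.

    `high` (0.5) is the in-distribution threshold mentioned in the
    out-of-distribution example; `low` is an assumed boundary between
    "marginal drift" and "outside the reference distribution".
    """
    if density_ratio >= high:
        return 1.0  # consistent with the reference distribution
    if density_ratio >= low:
        return 0.5  # low-density region of the reference: marginal drift
    return 0.0      # outside the reference distribution
```

Applied to the examples below, a ratio of 1.03 maps to 1.0 and a ratio of 0.04 maps to 0.0.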

Examples

In-distribution sample - no drift detected (passing)

Sample

- query: What is the deadline to file a tax return?
- source: production

Distribution Shift Scorer

- Score: 1.0 - The density ratio for this sample is 1.03, consistent with an in-distribution sample. The reference distribution contains many similar tax-deadline queries. No covariate shift detected.

Out-of-distribution sample - new topic introduced after training cutoff (failing)

Sample

- query: How do I apply the new digital nomad visa tax exemption?
- source: production

Distribution Shift Scorer

- Score: 0.0 - The density ratio for this sample is 0.04, far below the in-distribution threshold of 0.5. The reference distribution contains no queries about digital nomad visa tax rules, which were introduced after the training data cutoff. This is a clear covariate shift.

Run Evaluation in LatticeFlow AI Platform

Use the following CLI command to initialize and run the evaluation in LatticeFlow AI Platform.
Requires LatticeFlow AI Platform CLI
```shell
lf init --atlas data_drift
```

