Preventing Cross-Experiment Contamination in Convert's JavaScript SDK: A Practical Guide
📘 Learn how to implement reliable strategies using Convert's JavaScript SDK to ensure that visitors are only bucketed into a single experiment at a time—protecting the integrity of your data and the accuracy of your A/B testing results.
🧭 Introduction
When running multiple A/B tests on your website or application, maintaining experimental integrity is crucial. One significant challenge is preventing cross-experiment contamination—ensuring that visitors who have been included in one experiment aren't bucketed into another, which could skew results and lead to unreliable conclusions.
This article presents validated approaches to implement mutual exclusivity between experiments using Convert's JavaScript SDK.
⚠️ The Problem with Cross-Experiment Contamination
Cross-experiment contamination creates several issues in your testing program:
- Interaction Effects: Multiple treatments can interact in unpredictable ways, making it impossible to isolate the impact of individual changes.
- Statistical Validity: When visitors participate in multiple experiments, you can't determine which experiment caused which outcome.
- User Experience Inconsistency: Users exposed to multiple experimental variations might encounter conflicting or jarring experiences.
- Diluted Sample Size: With overlapping populations, your effective sample size for each experiment diminishes.
✅ Validated Approaches to Prevent Cross-Contamination
After careful analysis, here are the most reliable methods to prevent visitors from being included in multiple experiments:
🎯 Approach 1: Platform-Level Audience Targeting (Recommended)
This approach leverages Convert's audience targeting capabilities to exclude visitors who have already participated in any experiment.
Implementation Steps:
Set up event listeners to track experiment participation:
```javascript
// Initialize the SDK
const convertSDK = new ConvertSDK({
  sdkKey: 'your-sdk-key'
});
await convertSDK.onReady();

// Create user context
const visitorId = 'unique-visitor-id'; // Your method to get a consistent ID
const userContext = convertSDK.createContext(visitorId);

// Set up listener for bucketing events
convertSDK.on('bucketing', async (data) => {
  if (data.visitorId === visitorId) {
    // Update visitor properties to indicate experiment participation
    await userContext.updateVisitorProperties(visitorId, {
      in_any_experiment: true,
      [`in_experiment_${data.experienceKey}`]: true,
      last_experiment_timestamp: Date.now()
    });
    console.log('Visitor properties updated to track participation in:', data.experienceKey);
  }
});
```
Configure your experiments in Convert's dashboard:
- Primary experiment: No additional audience rules (can target anyone).
- Secondary experiments: Add an audience rule requiring that the visitor property `in_any_experiment` does NOT equal true.
Advantages:
- Exclusion logic is handled at the platform level
- Simpler to implement and maintain
- Avoids race conditions in your code
🛠️ Approach 2: Client-Side Checking Before Running Experiences
For more granular control, implement client-side checks before running each experience.
```javascript
// Initialize SDK with a persistent DataStore for cross-session consistency
class PersistentDataStore {
  async get(key) {
    return JSON.parse(localStorage.getItem(key) || 'null');
  }

  async set(key, value) {
    localStorage.setItem(key, JSON.stringify(value));
    return true;
  }
}

const convertSDK = new ConvertSDK({
  sdkKey: 'your-sdk-key',
  dataStore: new PersistentDataStore() // Optional, for cross-session persistence
});
await convertSDK.onReady();

const visitorId = 'unique-visitor-id';
const userContext = convertSDK.createContext(visitorId);
```
```javascript
// Check whether the visitor has participated in any experiment
async function hasParticipatedInExperiment() {
  const props = userContext.visitorProperties || {};
  if (props.in_any_experiment === true) return true;

  const historyKey = `convert_experiment_history_${visitorId}`;
  const history = await convertSDK.dataStore.get(historyKey);
  return history && history.experiments && history.experiments.length > 0;
}

// Record experiment participation
async function recordExperimentParticipation(experienceKey, variationKey) {
  await userContext.updateVisitorProperties(visitorId, {
    in_any_experiment: true,
    [`in_experiment_${experienceKey}`]: true,
    last_experiment_timestamp: Date.now()
  });

  if (convertSDK.dataStore) {
    const historyKey = `convert_experiment_history_${visitorId}`;
    const history = (await convertSDK.dataStore.get(historyKey)) || { experiments: [] };
    history.experiments.push({
      experienceKey,
      variationKey,
      timestamp: Date.now()
    });
    await convertSDK.dataStore.set(historyKey, history);
  }
}
```
```javascript
// Run an experience with exclusion logic
async function runExclusiveExperience(experienceKey, attributes = {}) {
  const hasParticipated = await hasParticipatedInExperiment();
  if (hasParticipated) {
    console.log('Visitor already in an experiment. Excluding from:', experienceKey);
    return null;
  }

  const variation = await userContext.runExperience(experienceKey, attributes);
  if (variation) {
    await recordExperimentParticipation(experienceKey, variation.key);
    console.log('Visitor bucketed into experiment:', experienceKey);
  }
  return variation;
}

// Example usage
const variation = await runExclusiveExperience('homepage-redesign', {
  locationProperties: {
    path: '/home'
  }
});

if (variation) {
  applyVariation(variation);
} else {
  showDefaultExperience();
}
```
🔄 Approach 3: Hybrid Solution Using Both Methods
For maximum reliability:
- Implement client-side checks (Approach 2),
- Configure audience rules in Convert's dashboard (Approach 1), and
- Update visitor properties in both places.
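As a sketch, the client-side half of the hybrid can be factored into a helper that takes the user context as a parameter, so the same guard runs whether or not the dashboard rule fires first. The context shape below (`visitorProperties`, `runExperience`, `updateVisitorProperties`) mirrors the earlier examples and is an assumption, not the SDK's exact API:

```javascript
// Hybrid guard: client-side check layered on top of platform-level audience
// rules. The context shape is assumed to match the earlier examples.
async function runHybridExclusiveExperience(context, visitorId, experienceKey, attributes = {}) {
  const props = context.visitorProperties || {};
  if (props.in_any_experiment === true) {
    // Client-side belt: skip before the SDK is even asked.
    return null;
  }

  // Platform-level suspenders: the dashboard audience rule on
  // `in_any_experiment` can still return null here.
  const variation = await context.runExperience(experienceKey, attributes);
  if (variation) {
    await context.updateVisitorProperties(visitorId, {
      in_any_experiment: true,
      [`in_experiment_${experienceKey}`]: true,
      last_experiment_timestamp: Date.now()
    });
  }
  return variation;
}
```

Because the context is injected rather than captured from module scope, this variant is also straightforward to exercise against a stub context in tests.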
🧠 Advanced Considerations
Time-Based Exclusion Policies
Allow visitors to participate again after a set period:
```javascript
async function canParticipateInNewExperiment() {
  const props = userContext.visitorProperties || {};
  const lastTimestamp = props.last_experiment_timestamp;
  if (!lastTimestamp) return true;

  const exclusionPeriod = 7 * 24 * 60 * 60 * 1000; // 7 days in milliseconds
  return (Date.now() - lastTimestamp) > exclusionPeriod;
}
```
Use this logic inside your runner function.
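The core check can also be written as a pure function, which makes the expiry logic easy to unit-test independently of the SDK. The function name and parameters here are illustrative, not part of Convert's API:

```javascript
// Pure helper: has the exclusion period elapsed since the last experiment?
// `lastTimestamp` may be undefined for visitors with no prior participation.
function exclusionPeriodElapsed(lastTimestamp, periodMs, now = Date.now()) {
  if (!lastTimestamp) return true; // never bucketed: free to participate
  return (now - lastTimestamp) > periodMs;
}

const SEVEN_DAYS = 7 * 24 * 60 * 60 * 1000;
```

Inside your runner, you would then call `exclusionPeriodElapsed(props.last_experiment_timestamp, SEVEN_DAYS)` in place of the bare `in_any_experiment` check.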
🧩 Segmenting Your Mutual Exclusion Policy
Set different rules per experiment category (e.g., checkout vs homepage):
```javascript
// Record participation scoped to a category
async function recordCategorizedParticipation(experienceKey, category, variationKey) {
  await userContext.updateVisitorProperties(visitorId, {
    [`in_${category}_experiment`]: true,
    [`in_experiment_${experienceKey}`]: true,
    [`last_${category}_timestamp`]: Date.now()
  });
}

// Check participation within a single category
async function hasParticipatedInCategory(category) {
  const props = userContext.visitorProperties || {};
  return props[`in_${category}_experiment`] === true;
}

// Run an experience that is exclusive only within its category
async function runCategoryExclusiveExperience(experienceKey, category, attributes = {}) {
  const inCategoryExperiment = await hasParticipatedInCategory(category);
  if (inCategoryExperiment) {
    console.log(`Visitor already in a ${category} experiment. Excluding.`);
    return null;
  }

  const variation = await userContext.runExperience(experienceKey, attributes);
  if (variation) {
    await recordCategorizedParticipation(experienceKey, category, variation.key);
  }
  return variation;
}

// Example usage
const checkoutVariation = await runCategoryExclusiveExperience(
  'checkout-button-test',
  'checkout',
  { locationProperties: { path: '/checkout' } }
);
```
📋 Implementation Best Practices
- Consider Session Persistence: Use a persistent data store if your exclusion policy spans sessions.
- Handle Initialization Race Conditions: Ensure the SDK is ready before evaluating exclusions.
- Monitor Your Exclusion Rate: Overly restrictive rules can starve later experiments of traffic.
- Test Your Implementation: Simulate visitor flows and test edge cases.
- Document Your Approach: Ensure internal alignment and maintainability.
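For testing your implementation, one low-effort option is to swap the `localStorage`-backed store for an in-memory one, so visitor flows can be simulated in Node without a browser. This class is a test double written for this article, not part of Convert's SDK:

```javascript
// In-memory stand-in for PersistentDataStore, matching its async get/set
// shape, so exclusion flows can be exercised in unit tests without localStorage.
class InMemoryDataStore {
  constructor() {
    this.store = new Map();
  }

  async get(key) {
    // Mirror the localStorage version: missing keys resolve to null.
    return this.store.has(key) ? this.store.get(key) : null;
  }

  async set(key, value) {
    this.store.set(key, value);
    return true;
  }
}
```

Pass an instance as the `dataStore` option when constructing the SDK in your test harness, then assert that a visitor recorded in one experiment is excluded from the next.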
🏁 Conclusion
Preventing cross-experiment contamination is essential for maintaining the validity of your testing program.
- Recommended: Use platform-level audience targeting.
- Optional: Add client-side exclusion logic for more control.
- Best practice: Implement both where possible.
By doing so, you'll ensure your experiments yield trustworthy insights and data-driven decisions.
📌 Note: Adjust logic depending on your specific SDK version and testing setup. Always test thoroughly before deploying.