If you or your marketing team runs a Customer Experience Optimisation (CXO) or Conversion Rate Optimisation (CRO) program and you validate your ideas through experiments (often A/B tests), you may have wondered whether your company should be experimenting during high-traffic/transaction periods. Think peak seasons such as Christmas, Valentine’s Day, Cyber Monday, etc.

TL;DR: Yes, you should, and you’re losing out if you’re not.


Why you should not be experimenting during peaks (what you’ve probably heard):

1. “Changing things during a peak is bad”

2. “Customers act differently during peaks, so experiment results are useless”

3. “Most experiments have a negative effect, so doing this during peaks will amplify that negative effect”

4. … (let us know in the comments)

Let’s see if I can give you some arguments to convince your HiPPO (Highest Paid Person’s Opinion) otherwise 🙂

Myth 1: Changing things during a peak is bad

“Changing things is not good when you have a lot of people exposed to those changes. More people equals more risk!”

Before starting any experiment, you should have calculated the required sample size, which I explain in the video below.

 

Account for the extra peak traffic (look at last year’s numbers), scale down the percentage of visitors you’re testing on, and the risk you take will be essentially the same as with any other experiment.

Example: if you have a test capacity of 1,000,000 users in a month but only need 500,000 for your experiment, you can run the test on only 50% of users during the test period (we usually run for 4 weeks). Do you have twice the traffic during peak season? Then you only need 25% of your traffic, and you can run three more similar tests in parallel.

In this case, if the experiment is negative, you’ve shown the losing variant to exactly the number of people needed to validate your hypothesis and no more. And if the effect turns out to be more negative than expected, the damage accrues more slowly (only a fraction of visitors see the variant), giving you time to spot it and stop the experiment if needed.
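To make that arithmetic concrete, here’s a minimal sketch in Python. It assumes a standard two-sided two-proportion z-test; the baseline conversion rate, target uplift, significance level and power are made-up placeholders, not figures from this article.

```python
# A minimal sketch of the sample-size and traffic-allocation maths above.
# The conversion rates, uplift, alpha and power are hypothetical placeholders.
from math import ceil, sqrt
from scipy.stats import norm

def required_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed per variant for a two-sided two-proportion z-test."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pooled = (p1 + p2) / 2
    n = (z_a * sqrt(2 * pooled * (1 - pooled))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return ceil(n)

# e.g. 3% baseline conversion, hoping to detect a 10% relative uplift
n_per_variant = required_sample_size(p1=0.03, p2=0.033)
print(f"Need ~{n_per_variant:,} visitors per variant")

# The allocation logic from the example: 500,000 visitors needed in total,
# 1,000,000 visitors in a normal month, double that during the peak.
needed, normal_month, peak_month = 500_000, 1_000_000, 2_000_000
print(f"Normal month: put {needed / normal_month:.0%} of visitors in the test")
print(f"Peak month:   put {needed / peak_month:.0%} of visitors in the test "
      f"({peak_month // needed - 1} extra similar tests possible)")
```

Whatever your own numbers are, the point stands: the allocation percentage scales down as traffic scales up, so the exposure per experiment stays the same.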

Related myth/assumption: not running experiments will make sure the website stays the same.

It’s 2019. Websites (especially webshops) change continuously: products go out of stock, promotions start or expire, newsletters are sent, AdWords campaigns are disabled or boosted, copy is changed… right up to and even during the peak. Pausing experiments during peaks won’t stop any (risky) changes from being made.

If change is bad, all these changes should be stopped, which is probably not going to happen. All these other changes outside of the experiments aren’t being validated (or even measured), but that doesn’t mean they cannot have a negative effect.

Stopping experiments won’t stop any changes from being made. It WILL stop you from validating and LEARNING from the changes you make.

Running controlled experiments is no riskier than just changing things based on gut feeling or whatever worked previously. But it has the added benefit that you actually (try to) measure the change and validate whether the change in performance was caused by your intervention. It gives you the opportunity to learn how to improve your business.

Myth 2: Customers act differently during peaks, so experiment results are useless

“Customer buying behaviour during peaks is very different compared to off-peak behaviour. So why bother?”

Example: when you sell gifts, it’s important that customers trust that you’ll deliver on time. Especially for holidays (when there is a clear deadline), trust in timely delivery might matter more than price.

Maybe this applies to you, maybe it doesn’t. But shouldn’t you do your utmost to figure out how people buy from you, ESPECIALLY during peak season? You might have multiple peaks a year, and maybe over half of your yearly revenue comes from them. It seems to me it’s even more important, and more profitable, to know exactly what makes your users click during that period.

It could make sense to validate different hypotheses or run different experiments during peaks: you have more traffic, higher conversion rates, and customers who might be driven by different needs. So it makes sense to adjust your tactics to that situation.

If I had to choose, I’d rather experiment and optimise the website for peaks than for non-peaks. It’s way more effective.

Myth 3: Most experiments have a negative effect, so doing this during peaks will amplify that negative effect

“In the long run, we might improve. But on average the experiments themselves have a bad effect so we risk losing money when we do this during peaks.”

Your mileage may vary here, but if you have a decent process around your experimentation and use proper user insights as input, I can’t believe that is the case.

With our current setup, we end up with roughly one positive test for every negative test, plus a bunch of tests with no measurable effect. Looking at revenue/profit, it’s even better: the positive tests gain more than the negative tests lose.

For a client last year, I calculated the gain of the testing process itself in terms of overall profit during the testing period: no monetary value placed on shared learnings, no positive variants rolled out afterwards, and no potential future gains counted. Conclusion: just by running experiments, my team pays for itself.
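If you want to run that kind of back-of-the-envelope check yourself, here’s a minimal sketch. The experiment names, revenue deltas and team cost below are made-up placeholders, not the client’s actual figures.

```python
# A minimal sketch of a "does the testing programme pay for itself?" check.
# Every figure is a hypothetical placeholder -- substitute your own measured
# revenue deltas (variant vs. control, during the test period only).
experiments = [
    {"name": "trust badges in checkout", "revenue_delta": +40_000},
    {"name": "new product filters",      "revenue_delta": -15_000},
    {"name": "homepage banner copy",     "revenue_delta": 0},      # no measurable effect
    {"name": "delivery-date promise",    "revenue_delta": +25_000},
]
team_cost_for_period = 30_000  # cost of the experimentation team over the same period

net = sum(e["revenue_delta"] for e in experiments)
print(f"Net revenue impact during the test period: {net:+,}")
print(f"After team cost: {net - team_cost_for_period:+,}")
# Deliberately conservative, as described above: no value assigned to learnings,
# to rolling out the winners, or to future gains.
```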

So in terms of direct profit/loss, this shouldn’t be a big risk if you have the proper process in place. If you validate a proper hypothesis, you will still learn something for next time either way.

By the way: assuming most experiments have a negative effect suggests you believe you have a (near) perfect website with hardly anything that can be improved. Trust me, you’re wrong on that one. 🙂

Bonus points when experimenting during peak season:


Peaks = more traffic = more test capacity/confidence!

Get your testing into a higher gear and learn even more from your customers’ behaviour during these short bursts of transactions. As I said above: calculate your required sample size. The extra traffic and conversions during peak season give you extra capacity for testing! If you’re out of ideas, this might be a good moment to re-test some of your previous experiments to see if they still hold up.
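To see what that extra capacity buys you, here’s a minimal sketch, assuming the same two-proportion z-test as before; the baseline conversion rate and traffic figures are placeholders.

```python
# A minimal sketch of why more traffic means more test capacity: with twice the
# visitors in the same four weeks, the smallest uplift you can reliably detect
# (the minimum detectable effect, MDE) shrinks. All figures are placeholders.
from math import sqrt
from scipy.stats import norm

def min_detectable_effect(n_per_variant, baseline_rate, alpha=0.05, power=0.80):
    """Approximate absolute MDE for a two-sided two-proportion z-test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sqrt(2 * baseline_rate * (1 - baseline_rate) / n_per_variant)

baseline = 0.03
for label, visitors in [("normal month", 1_000_000), ("peak month", 2_000_000)]:
    mde = min_detectable_effect(visitors / 2, baseline)  # 50/50 split, one test
    print(f"{label}: can detect roughly a {mde / baseline:.1%} relative uplift")
```

In other words: with peak traffic you can either detect smaller effects, reach significance sooner, or run more experiments side by side.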

Final considerations

You probably think you need to 1) improve your measurements, 2) increase your testing confidence, and/or 3) improve your successful-test ratio.

I know most CRO practitioners do. Every day.

And like me, you’re probably already working on many improvements. But it’s by continually experimenting that you can improve the process and success rates, and learn from the changes. This is how you become the customer knowledge centre of your company.

At Vaimo we help brands, retailers and manufacturers all over the world to drive success in digital commerce.