We push testing in this email marketing blog. A lot. For good reason. Testing is how you learn what works and what doesn’t. Testing is how you continually improve your email marketing.
However, there’s an aspect of the seemingly simple A/B split test I hadn’t considered before: knowing whether you can trust your results! Is a 2% difference really a 2% difference, or just an anomaly? Did you test a large enough sample to get an accurate comparison? How confident can you be?
ExactTarget’s Jay Miller has written up some wonderful advice to help you make fact-based decisions using the data you glean.
First, he says, you must figure out how big the effect you’re trying to measure really is. Are you striving for a 10% improvement or a 100% improvement? You have to know beforehand, because that determines the sample size you’ll need: the smaller the change you want to detect, the larger the sample has to be, and vice versa.
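If you’d rather script that calculation than plug numbers into a calculator, here’s a rough sketch using Python’s statsmodels library. The 20% baseline open rate and the 10% relative lift are made-up figures purely for illustration, not numbers from Jay’s post, and the 95% confidence / 80% power settings are common defaults rather than his recommendations.

```python
# Sample-size sketch: how many recipients per variation do we need
# to reliably detect a given lift?
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.20   # hypothetical current open rate (20%)
target_rate = 0.22     # hypothetical 10% relative improvement

# Convert the two rates into a standardized effect size (Cohen's h)
effect_size = proportion_effectsize(target_rate, baseline_rate)

# Solve for recipients per variation at alpha=0.05 and 80% power
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Recipients needed per variation: {n_per_group:.0f}")
```

Try changing the target rate to a bigger lift and the required sample shrinks dramatically, which is exactly the point Jay makes.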
He also addresses the question of confidence: how much uncertainty you can tolerate when judging your email marketing test results.
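To make the confidence idea concrete, here’s a minimal sketch of checking a finished test with a two-proportion z-test, again in Python with statsmodels. This isn’t one of the calculators Jay links to, just an equivalent exercise with invented open counts.

```python
# Significance sketch: is the observed difference between two versions
# bigger than what random chance would plausibly produce?
from statsmodels.stats.proportion import proportions_ztest

opens = [220, 260]    # hypothetical opens for version A and version B
sends = [1000, 1000]  # hypothetical recipients per version

z_stat, p_value = proportions_ztest(opens, sends)
print(f"p-value: {p_value:.3f}")

# A small p-value (commonly below 0.05) suggests the lift is real;
# a large one means the difference could easily be an anomaly.
```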
It’s an easy read, even with the talk of statistics. Plus, he includes three online calculator tools to help you set up your tests, determine your sample size and evaluate the results.
Read more of Jay’s recommendations on email marketing best practices for A/B testing here.