A/B Testing

Definition of A/B Testing
A/B testing (also called split testing) means that two variants of something (such as a web page, headline, or call-to-action button) are tested against each other and their performance is compared: variant A and variant B are each shown to a randomly assigned part of the target group. The variant that achieves the higher conversion rate wins the test.
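How the random split is implemented is up to the testing tool, but a minimal sketch in Python could look like this (the user ID and experiment name are hypothetical; hashing them deterministically keeps each user in the same variant across visits):

```python
# Minimal sketch of randomly splitting users into variant A or B.
# The experiment name and user ID are hypothetical examples; hashing
# them gives a stable 50/50 assignment across repeat visits.
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-button-color") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("user-42"))  # always the same variant for this user
```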
Goal of A/B tests
The goal of A/B testing is to find out which variant performs better: the original or the modified version. In this way, you gain important insights into which changes are worthwhile in software development, web design, or online marketing, for example, and can optimize your results.
For example, the desired user interaction (e.g., purchases or newsletter signups) can be increased
- in a data-driven way,
- continuously,
- and quickly.
What metrics can be increased through A/B testing?
Here are some examples of metrics that can be increased through A/B testing - provided you actually use the findings from the tests for optimization:
- Downloads, e.g. of e-books
- Registrations for newsletters, waiting lists, free workshops, customer accounts, etc.
- Purchases in online stores
- Creation of user-generated content such as ratings and profiles
- Donations (number and amount)
You can consider questions such as:
- which design (font colors, backgrounds, buttons ...),
- which graphic elements (photos, charts, tables ...),
- which wording (and how much text),
- which position on the website (pop-up, banner, footer ...),
- which user guidance and customer journey
converts best?
Variants of A/B tests
Depending on what you want to achieve with an A/B test, you can choose one of three types of test. Here is a definition of each:
Classical A/B test
In this type of test, you run one or more variants against the original version in the split test, but change only one element (for example, the button color) at a time.
Multivariate tests
In this variant, you can test several changed variables on a web page at the same time. In this way, you check which combination achieves the better conversion - e.g., if you change the color of a CTA button and a wording at the same time. The different variables are then automatically combined into test combinations by your A/B testing tool. For example, if you have 3 photos and 2 wordings, you get 3 × 2 = 6 different combinations, as the sketch below shows.
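To illustrate the combinatorics, here is a small sketch with hypothetical photos and wordings that shows how the variables multiply into test combinations:

```python
# Sketch of how variables multiply into multivariate test combinations.
# The photo names and wordings are hypothetical examples.
from itertools import product

photos = ["photo_1", "photo_2", "photo_3"]
wordings = ["Buy now", "Get yours today"]

combinations = list(product(photos, wordings))
print(len(combinations))  # 3 photos x 2 wordings = 6 combinations
for photo, wording in combinations:
    print(photo, "+", wording)
```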
Redirect test or split URL test
With this type of split test, entire landing pages, for example, can be tested against each other to see which structure, design, or content works better. The tool you use for testing then redirects half of the users to one version of the landing page and the other half to the other version.
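A minimal sketch of such a redirect, assuming a small Flask app and two hypothetical landing page URLs; a cookie keeps each visitor on the variant they were first assigned to:

```python
# Minimal sketch of a split URL test as a Flask redirect.
# The routes and landing page URLs are hypothetical examples.
import random
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

VARIANT_URLS = {
    "A": "/landing-original",  # original landing page
    "B": "/landing-variant",   # redesigned landing page
}

@app.route("/landing")
def split_url_test():
    variant = request.cookies.get("ab_variant")
    if variant not in VARIANT_URLS:
        variant = random.choice(["A", "B"])  # 50/50 split on first visit
    resp = make_response(redirect(VARIANT_URLS[variant]))
    resp.set_cookie("ab_variant", variant)  # keep the assignment stable
    return resp
```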
Procedure for A/B testing
To test different variants, do the following:
- Define the goal of your A/B test. Example: more users should create a customer account.
- Formulate a hypothesis about the problem or obstacle: what could be the reason that more users are not creating a customer account?
- Create the variants you want to test. For example, if the wording could be the problem, write the call-to-action text in an additional variant.
- Set up the A/B test with an appropriate tool.
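Once the test has run, the conversion rates of the variants are compared. Your testing tool usually does this for you, but here is a minimal sketch of such an evaluation, using a two-proportion z-test from statsmodels and hypothetical visitor and conversion counts:

```python
# Minimal sketch: deciding an A/B test with a two-proportion z-test.
# The counts are hypothetical; in practice they come from your testing tool.
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 155]  # created customer accounts for variants A and B
visitors = [2400, 2380]   # users who saw each variant

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"A: {conversions[0] / visitors[0]:.2%}, B: {conversions[1] / visitors[1]:.2%}")
print(f"p-value: {p_value:.4f}")

# A common convention: treat the difference as significant if p < 0.05.
if p_value < 0.05:
    print("The difference is statistically significant.")
else:
    print("Keep the test running or treat the variants as equivalent.")
```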
What tools are available for A/B testing?
Testing tools you can use for your A/B tests include, for example:
- Google Optimize
- AB Tasty
- Kameleoon
- Optimizely
Additional tools for problem finding and data analysis
Additionally, you should use a web analytics tool such as Google Analytics. And if you are struggling to identify potential problems on your website or in your product and to form appropriate hypotheses, the following tools and methods can help you:
- Surveys (as face-to-face interviews or on-site surveys)
- Heatmaps
- Screen & session recordings
What is important in A/B testing?
To get meaningful results in A/B testing, the following points matter:
- The test group must be large enough. For example, if the traffic on an e-commerce site to be tested is too low, it will take a long time before meaningful results are available. Especially in multivariate testing, a company needs a lot of traffic before statistical significance is reached (a sample size sketch follows after this list).
- When testing, it is important to change only a single element in version B at a time and test it against the original, version A (exception: multivariate tests). The reason: only then can differences in conversion be attributed to that change.
- Testing should not be stopped too early, even if a lot of data is already coming in to AB Tasty or another tool of your choice at the beginning. As a rule of thumb, let a test run for about 14 days.
- Prioritize upcoming A/B tests by impact and importance. A test roadmap helps you and your team keep track of them.
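As a rough guide to the first point, you can estimate in advance how many visitors each variant needs. Here is a minimal sketch with statsmodels, assuming a hypothetical baseline conversion rate and the smallest uplift you want to be able to detect:

```python
# Minimal sketch: estimating the required sample size per variant.
# The baseline and target conversion rates are hypothetical assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05  # current conversion rate (assumed)
target_rate = 0.06    # smallest uplift worth detecting (assumed)

effect_size = proportion_effectsize(baseline_rate, target_rate)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,  # significance level
    power=0.8,   # probability of detecting a real uplift of this size
)
print(f"Visitors needed per variant: {n_per_variant:.0f}")
```

The smaller the uplift you want to detect, the more visitors you need, which is why low-traffic sites take so long to reach significance.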
Benefits of A/B Testing
A/B testing is worthwhile for you and your company for the following reasons:
- Especially when A/B tests are run repeatedly and further changes are tested, results and products can be improved continuously, because with each repetition, developers can build on earlier experiences and learnings.
- Thanks to A/B testing, you and your team approach the optimization of your product in a structured and strategic way instead of chaotically and without focus. That is because you rely on the results and data that A/B testing gives you instead of chasing a vague feeling about why, for example, the bounce rate of your email campaigns is so high.
- A/B tests not only increase the conversion rate of your websites, apps, etc., but also improve the user experience, because as testing progresses, you learn more and more about your website visitors.