What is A/B Testing?

In the field of UX, trends evolve quickly, shaped by new browsing behaviors, technology usability, learning curves, digital culture... But when it comes to choosing between variants, professional instinct and counter-intuitions are legion in the digital ecosystem.
How many decisions are made on the basis of influences and criteria foreign to the original goals? So how do you spread a data-driven culture when defining UX solutions?
A/B testing remedies these shortcomings by giving you refined results and targeted user feedback, thus shortening your decision cycles. Let's go through some good A/B testing practices together:
Identify your A/B testing objectives

Planning your A/B tests well can make a huge difference in the effectiveness of your UX efforts. Set your conversion goals based on the metrics you want to measure, so you can determine which UX/UI variation is most effective compared to the original version. Once you have identified your goals, you can start generating the A/B testing strategy by scripting the different screens to be tested according to the expected impact and the difficulty of implementing your objectives.
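
As an illustration, here is a minimal sketch in Python of what such a script might look like; the screen names, metrics, and ratings are hypothetical, not prescribed by any particular tool:

    # Hypothetical A/B test plan: each entry pairs a screen to test with its
    # conversion goal, the expected impact, and the implementation difficulty.
    ab_test_plan = [
        {
            "screen": "checkout_cta",
            "goal_metric": "click_through_rate",
            "expected_impact": "high",
            "difficulty": "low",
        },
        {
            "screen": "signup_form",
            "goal_metric": "completed_signups",
            "expected_impact": "medium",
            "difficulty": "medium",
        },
    ]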

Prioritize your tests

One of the temptations often observed in A/B testing is the desire to start with scenarios of fairly advanced complexity: purchase funnels, multivariate tests, information hierarchy, etc. But as with any learning process, for best results it is better to start with the easiest cases. Strategically, begin with tests that combine a real prospect of conversion gains with ease of implementation.
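
One way to make this trade-off explicit (a sketch assuming simple 1-to-5 scores, not any established framework) is to rank candidate tests by expected gain times ease of implementation:

    # Rank candidate tests: high expected gain and easy implementation first.
    # The 1 (low) to 5 (high) scores are purely illustrative.
    candidates = [
        {"name": "purchase funnel redesign", "expected_gain": 5, "ease": 1},
        {"name": "CTA button wording", "expected_gain": 3, "ease": 5},
        {"name": "homepage hero image", "expected_gain": 2, "ease": 4},
    ]

    for test in sorted(candidates, key=lambda t: t["expected_gain"] * t["ease"], reverse=True):
        print(test["name"], test["expected_gain"] * test["ease"])

Run as-is, this prints the CTA wording test first (score 15): the easy, promising test comes before the complex funnel redesign (score 5).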

Tailor your tests to your traffic volume

To obtain reliable results, it is essential to let your tests run until they converge on a good reliability rate, even if a trend emerges quickly. A common recommendation is at least 5,000 visitors and 100 conversions per variation. Beyond volume, it is also advisable to adapt to the target sector: for example, it is more relevant to test an e-commerce site over the weekend than at the beginning of the week.
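
A minimal sketch of that volume check, using the thresholds quoted above (the counts passed in are hypothetical):

    # Check whether a variation has reached the recommended volume:
    # at least 5,000 visitors and 100 conversions per variation.
    MIN_VISITORS, MIN_CONVERSIONS = 5000, 100

    def has_enough_volume(visitors: int, conversions: int) -> bool:
        return visitors >= MIN_VISITORS and conversions >= MIN_CONVERSIONS

    print(has_enough_volume(6200, 130))  # True: volume is sufficient
    print(has_enough_volume(4100, 120))  # False: let the test run longer
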
A/B testing kicks off

Tests should be run concurrently to account for any variation in timing. At this stage, every interaction made by the user is measured, counted, and compared to determine which design is the most effective. In UX work it is often recommended to favor quality over quantity, but from an A/B testing point of view this can be questioned: it is a data-driven method, and the key factor remains the amount of data available.
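
Running variations concurrently typically means assigning each visitor to a stable bucket, for example by hashing a user identifier. A minimal sketch of that common technique (the identifiers and test name are hypothetical):

    import hashlib

    def assign_variation(user_id: str, test_name: str) -> str:
        # Hash the user and test name so both variations run at the same
        # time and a given user always sees the same variation.
        digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    print(assign_variation("user-42", "checkout_cta"))  # stable across visits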

Adopt a logic of continuous improvement

A/B testing validates a hypothesis, but it does not create it! Running an A/B test that directly compares a variation against the control experience allows you to ask targeted questions that drive the continuous evolution of your user experience.

Temporal dynamics of A/B testing

The momentary experience the user has during the interactions performed in an A/B test is generally considered the heart of the test. Nevertheless, the objective of this kind of test is to distinguish several temporal phases of the tested user's journey in order to analyze how well they understand an application or a site.

The statistical intelligence of A/B testing

Once your experiment is over, it is time to analyze the results by putting the different tested page versions in competition in order to improve their efficiency. Sharing this data helps disseminate the culture of data, supporting decision-making on strategic design changes.
This validates, or invalidates, your hypotheses by applying statistical reliability measures to the collected data: it is important to know with confidence whether the differences in results are simply due to chance.
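
A standard reliability measure for comparing two conversion rates is the two-proportion z-test; here is a minimal sketch, with purely illustrative counts:

    import math

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        # z statistic and two-sided p-value for the difference between
        # the conversion rates of variations A and B.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Example: 100 vs 135 conversions over 5,000 visitors each.
    z, p = two_proportion_z_test(100, 5000, 135, 5000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real effect
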
Today, A/B testing aims to find the best design compromise for all of a site's visitors, and it is a real added value for strategic decision-making. Nevertheless, it has its limits in the era of personalization, artificial intelligence, and adaptive usage, where each piece of content targets a specific visitor typology... Isn't A/B testing a methodology that locks itself into highlighting problems without solving them?
 
Carine Renaud, UX-Evangelist @CarineWhatElse  UXLab Foundation @UX-Republic