
14 Sep: Make The Right Call With A/B Testing & MVT

Following up on our latest article on the advantages of testing strategies before launching them, we now take a deeper look at how each of these testing approaches works and when to use it.

Both tests let you measure your audience’s sensitivity to major or minor changes. By experimenting with these two approaches, you get to know your customers better and can tailor your service to their interests and preferences. The outcomes have proved highly positive, bringing increased user interaction as well as greater satisfaction with the service. So, how do these tests work, and how do they differ?


A/B testing is also referred to as split testing; “A/B” stands for the two variants you will be comparing. For the sake of this explanation, we will use the example of a voucher release. Let’s say you plan to release vouchers or discounts to your customers to encourage them to renew. You must first determine your target population; let’s focus on the segment of “rare visitors”. As the aim of this campaign is to encourage rare visitors to renew their membership for the new year, the motivating factors can differ. A/B testing lets you split this population into two samples and have one version of your offer vary from the other. You then measure the outcome and performance of the change in each sample, so you can see which strategy and variable make your customers tick.

Going back to the situation above, you would release a voucher with a 20% discount to one half, and a different voucher granting access to more premium content to the other half. The results of the A/B test will give you the insights you need to make a firm decision on whether or not to follow that strategy for the foreseeable future.
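The mechanics of this split can be sketched in a few lines of Python. The user IDs, renewal data, and sample sizes below are made up purely for illustration; in practice, the assignment and the renewal outcomes would come from your own user base and analytics.

```python
import random

# Hypothetical list of "rare visitor" user IDs (illustrative only).
rare_visitors = [f"user_{i}" for i in range(1000)]

# Randomly split the segment into two equal samples.
random.seed(42)
shuffled = random.sample(rare_visitors, len(rare_visitors))
variant_a = shuffled[: len(shuffled) // 2]   # receives the 20% discount voucher
variant_b = shuffled[len(shuffled) // 2 :]   # receives the premium-content voucher

def renewal_rate(renewed_users, sample):
    """Share of a sample that renewed after receiving its voucher."""
    return sum(1 for u in sample if u in renewed_users) / len(sample)

# Which users actually renewed would come from your analytics;
# hard-coded here so the example is self-contained.
renewed = set(variant_a[:120]) | set(variant_b[:90])

rate_a = renewal_rate(renewed, variant_a)
rate_b = renewal_rate(renewed, variant_b)
print(f"A (discount): {rate_a:.1%}  B (premium): {rate_b:.1%}")
```

The key design point is the random assignment: shuffling before splitting ensures that any difference in renewal rate can be attributed to the voucher, not to how the two halves were chosen.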


Things get a little trickier with multivariate testing (MVT). When performing an MVT, you are not testing one variant against another as in an A/B test; MVT plays with a wider set of variables. More variables allow for more changes. The changes you make will be more subtle, giving you more precise knowledge of the effect that finer-grained factors have on your customers.

To illustrate, let’s say you wish to increase playtime by 10%. You therefore decide to test this on one sampled population (e.g. returning users, very active users, etc.); note that the more users you have at hand, the more accurate the result. You take the sampled population and split it into as many groups as the variables you wish to test, plus one control sample that serves as the reference. Let’s say you select three key variables that you think affect playtime: a content-based recommender, a collaborative filtering recommender, and a social media recommender. To carry out the MVT and see which of these factors is crucial to reaching your goal, you divide your sample into four groups: the first is the control sample, the second has the content-based recommender active, the third has the collaborative filtering recommender active, and the last has the social media recommender active. The outcome of this test will show you which of these variables has the greatest effect toward your 10% playtime increase.
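The four-group split described above can be sketched as follows. The user IDs and the per-group playtime averages are invented for illustration; in a real MVT, the playtime figures would be measured by your analytics after each group has run with its recommender active.

```python
import random

# Hypothetical sample of very active users (illustrative IDs only).
users = list(range(4000))
random.seed(7)
random.shuffle(users)  # random assignment avoids ordering bias

# One control group plus one group per variable under test.
variants = ["control", "content_based", "collaborative_filtering", "social_media"]
group_size = len(users) // len(variants)
groups = {v: users[i * group_size:(i + 1) * group_size]
          for i, v in enumerate(variants)}

# Average playtime per group in minutes; these numbers are made up
# for the example and would normally come from measurement.
avg_playtime = {"control": 60.0, "content_based": 67.2,
                "collaborative_filtering": 64.2, "social_media": 61.8}

# Uplift of each active variant relative to the control group.
uplift = {v: avg_playtime[v] / avg_playtime["control"] - 1
          for v in variants if v != "control"}
best = max(uplift, key=uplift.get)
print(best, f"{uplift[best]:.0%}")
```

Because every variant is compared against the same control sample, the uplift figures isolate the contribution of each recommender; here the content-based recommender would be the one closest to the 10% target.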


Are you a media company interested in testing all kinds of decisions and gaining a strong competitive edge? Wondering how NPAW’s YOUBORA is uniquely able to segment audiences and run tests on them? Click here to schedule a meeting with one of our Business Intelligence Experts.

Just another thing to think about from us here at NPAW.