I have two teenage children (thanks in advance for your good wishes). Every evening when I come in from work, the first order of business is to help with any homework they haven't finished. One of the main skills I'm trying to teach them is how to study - how best to read and retain information - and one of the tricks I teach them is that before they even begin to read the material in question, they should come up with three clear questions.

It turns out that if you have questions in mind before you start the process, you retain more information, and not just the information which pertains to those questions. You remember more about the text as a whole.

The way we apply this here at Swrve is to start every mobile marketing campaign or mobile A/B testing program with a clear hypothesis. If you have a concrete expectation in mind, you're in a position to read the data in a clear-minded way once you run the test or deploy the messaging campaign.

When you create a hypothesis, don't worry about proving yourself right. The willingness to learn from your users’ behavior is more important than making the correct guess first time out. What having the hypothesis will do is ensure that you are reading the data in a directed way rather than scanning seas of numbers looking for patterns.

By way of example, your hypothesis might be that a shorter tutorial means more people will get to the end of it - which in turn helps improve retention rates. Looking at onboarding is after all number one in our Top 10 Mobile Marketing Best Practices!

We start with the root hypothesis that “the length of my tutorial has an impact on retention”, but that's a bit woolly, so we'll want to make it more specific. We encourage customers to use “If, then” statements.

Let’s try: “If I shorten my tutorial, I will increase Day 7 Retention.”

Now we're getting somewhere!

Our next step would be to create several tutorials of varying lengths (one way to do this is to create a series of linked In-App Messages), deploy the versions to our test groups, and monitor the results. Because we have a clear question we’d like answered, it becomes easy to see quickly whether our change ‘worked’. And that’s when A/B testing is really powerful.
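
To make that comparison step concrete, here’s a minimal sketch in plain Python (generic code, not Swrve’s own tooling): compute Day 7 Retention for each group, then run a two-proportion z-test to check whether the difference between variants is likely real. The cohort sizes and retained counts below are hypothetical.

```python
import math

def day7_retention(installs: int, retained: int) -> float:
    """Fraction of an install cohort still active on Day 7."""
    return retained / installs

def two_proportion_z(p1: float, n1: int, p2: float, n2: int):
    """Two-proportion z-test: is the retention difference likely real?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical cohorts: control saw the long tutorial,
# variant saw the shortened one.
control = day7_retention(installs=5000, retained=900)    # 18.0%
variant = day7_retention(installs=5000, retained=1050)   # 21.0%

z, p = two_proportion_z(variant, 5000, control, 5000)
print(f"control {control:.1%} vs variant {variant:.1%}: z={z:.2f}, p={p:.4f}")
```

A small p-value here would suggest the shorter tutorial genuinely moved Day 7 Retention, rather than the groups differing by chance.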

Looking At The ‘Whole Business’

The KPI we identified was Day 7 Retention. Although that should remain our primary focus, we do recommend comparing all relevant KPIs across test groups in order to get the full story. But caution is advised here. The whole point of working against a hypothesis is to avoid a random examination of every piece of data after the fact. If you do that, you WILL find patterns, and possibly misleading ones!
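
One simple guard when you do scan those secondary KPIs is to correct for multiple comparisons. Here’s a short sketch, again in plain Python with hypothetical KPI names and p-values, applying a Bonferroni correction so that chance differences across many metrics don’t masquerade as findings.

```python
# Hypothetical p-values from comparing four KPIs across test groups.
kpi_p_values = {
    "day7_retention": 0.004,   # the primary, pre-registered KPI
    "day1_retention": 0.040,
    "session_length": 0.030,
    "revenue_per_user": 0.210,
}

alpha = 0.05
corrected_alpha = alpha / len(kpi_p_values)  # Bonferroni: 0.0125 here

for kpi, p in kpi_p_values.items():
    verdict = "significant" if p < corrected_alpha else "likely noise"
    print(f"{kpi}: p={p:.3f} -> {verdict} (corrected alpha={corrected_alpha})")
```

Note how two of the secondary KPIs look interesting at the usual 0.05 threshold but fall away once we account for how many metrics we inspected.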

Thinking About Extended Tutorials

In this particular example - when a customer experiments with a shorter tutorial - we recommend placing In-App Messages downstream in the app to educate users about the features or functionality that would have felt excessive to introduce during the initial onboarding experience.

You might want to wait until these become more relevant to users who’ve mastered the basic value proposition and are ready to explore further. It is also possible, of course, to show these messages only to users who have not discovered and used these features organically.
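
As a rough illustration of that targeting rule (plain Python, not Swrve’s own API - the event names are made up), the logic is just a membership check against the user’s event history:

```python
# Sketch of the targeting rule, not Swrve's own API.
# Event names are hypothetical, for illustration only.
def should_show_feature_message(user_events: set) -> bool:
    """Show the follow-up message only to users who haven't
    triggered the feature's usage event organically."""
    return "used_advanced_feature" not in user_events

organic_user = {"completed_tutorial", "used_advanced_feature"}
newcomer = {"completed_tutorial"}

print(should_show_feature_message(organic_user))  # False: found it already
print(should_show_feature_message(newcomer))      # True: worth a nudge
```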

And, of course, don’t stop at one test or campaign. Use the learnings from your first run to inform a new hypothesis - employing the “if, then” format, of course!