Testing and optimizing your app’s user experience can be a tough job. It’s also critically important. Customers expect a seamless experience from an app that is comfortable to navigate and does its intended job well. Getting to that point means testing against real user data, enabling you to implement the right changes at scale.
So how do you do this effectively?
Implementing a testing program can be daunting, but there are some simple guidelines to follow that will help you do it successfully.
These roughly boil down to careful planning, smart execution, deep analysis, and continual iteration. This Blueprint will take a look at these steps in more detail.
Proper testing that yields useful and beneficial results requires careful planning. No two ways about it. In other words: testing starts before you start the test.
Every test should be purposeful. Anything else is a fishing expedition that will provide spurious results. It is essential to decide what metrics you wish to measure or influence before you start planning the content of the test itself.
What are the specific KPIs that you are looking to measure and move? Engagement, for example, can mean a lot of things - time in-app, more frequent sessions, interaction with content, social sharing, and so on.
Think about what will constitute success for your test and set yourself goals for uplift.
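Setting an uplift goal also tells you how long the test must run: the smaller the uplift you want to detect, the more users you need in each variant. A rough sketch of that calculation, using the standard two-proportion power formula (the baseline rate, uplift goal, and z-scores for 95% confidence and 80% power here are illustrative assumptions, not figures from this Blueprint):

```python
import math

def sample_size_per_variant(baseline_rate, target_uplift,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per variant to detect a given relative
    uplift on a conversion rate (defaults: 95% confidence, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + target_uplift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / ((p2 - p1) ** 2)
    return math.ceil(n)

# Hypothetical example: 5% baseline conversion, goal of a 10% relative uplift
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 users per variant
```

Note how quickly the required audience grows: halving the target uplift roughly quadruples the sample you need, which is worth knowing before you promise results by Friday.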
Running tests on minor changes such as button colors isn't going to provide you with the information that will propel your app's user experience forward. Think bigger.
Focus on getting to the heart of your strategy. Think about what you're changing that will give users a different experience from what they've had before - overall design, user interaction flows, where you deep-link to.
Test delivery times, message tone, the creative content - all of these can make a big difference to user behavior.
Don’t jump to conclusions based on a sample size that isn’t sufficient. It is common to see results that look ‘significant’ but are in fact anything but.
Finding a ‘winner’ based on logic like this is dangerous in the extreme. Allow tests to run for as long as is necessary to determine statistical significance.
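To make "looks significant but isn't" concrete, here is a minimal two-proportion z-test sketch in plain Python (the conversion counts are invented for illustration - a variant that appears to be winning but doesn't clear the conventional 0.05 threshold):

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion counts from two variants.
    Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical mid-test peek: variant B converts 6.25% vs 5.00% for A
z, p = z_test_two_proportions(120, 2400, 150, 2400)
print(round(z, 2), round(p, 3))  # p is above 0.05 - not yet a winner
```

A 25% relative lift on this sample still fails the test, which is exactly why stopping early and declaring victory is so dangerous.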
Although it is vital to measure success against the metric you initially intended to influence, it is equally important to look for 'failure' elsewhere. To state the obvious, any fool can increase the sales of an item by heavily promoting or discounting it - but what was the impact on total revenue?
More subtly, does an early registration campaign improve registrations - but damage overall retention? Always look for negative side effects - but also establish that you aren't merely observing random movement.
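One practical way to institutionalize this is a guardrail check: alongside the primary metric, compare a handful of metrics you must not harm and flag any that regressed beyond a tolerance. A minimal sketch, with hypothetical metric names and numbers:

```python
def guardrail_report(control, variant, guardrails, tolerance=0.02):
    """Flag guardrail metrics that dropped more than `tolerance`
    (relative) in the variant, even if the primary metric improved."""
    flagged = []
    for metric in guardrails:
        change = (variant[metric] - control[metric]) / control[metric]
        if change < -tolerance:
            flagged.append((metric, round(change, 3)))
    return flagged

# Hypothetical test where registrations rose but retention quietly fell
control = {"registrations": 0.12, "d7_retention": 0.35, "revenue_per_user": 1.40}
variant = {"registrations": 0.15, "d7_retention": 0.31, "revenue_per_user": 1.38}
print(guardrail_report(control, variant, ["d7_retention", "revenue_per_user"]))
# → [('d7_retention', -0.114)]
```

Any flagged guardrail deserves its own significance check before you act on it, for the same reason as above: small drops may be random movement rather than real damage.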
Testing is not a 'do once and be done with it' process. It is iterative: you plan, you run the tests, you analyze the results, and then do it again. You have to re-test any assumptions that you derive from the results. And you need to continue to refine based on the results you have seen. This is a process that demands long-term commitment.
Plan your test well: know what you're measuring, who you're targeting, and how you're going to measure the results.
Make sure your test variations are substantial and purposeful. Big changes lead to big results.
Analyzing test results is about more than statistical significance. You need to use your intuition to determine what aspects of the change are important and then put your assumptions to the test.
Do not rest on your laurels after a successful test. Continue to refine your app, and re-test any assumptions that you have made. Keep on truckin’!