One of the key ways in which Swrve differs from many of our competitors is our desire not simply to measure numbers, but to enable our customers to move those numbers. If you think about it, that’s what marketing software should be doing anyway.

We’ve recently been working to ensure that we surface that difference in the product. Looking at it another way, I see a lot of products out there that support a variety of activity. That’s great in its own way; after all, marketers like to feel they are doing something! But sooner or later smarter marketers are likely to ask the ‘why?’ question, namely “WHY am I doing this?”, or “what difference is this campaign actually making to my business?”

A lot of our customers are sophisticated enough to fit in that bracket, and in our most recent release we wanted to help them see the answer to that question as clearly as possible. As a secondary consideration, we also wanted to ensure that when running A/B tests on campaigns, our customers could measure success based on what really matters - not simple ‘click-through’ rates.

I believe a test conducted, and more importantly a winner selected, on the basis of click-through rates alone is pure nonsense. It should be entirely obvious that deceptive ‘click bait’ in-app messages or push notifications are going to ‘win’ your tests - but equally obvious that this won’t necessarily translate into your business objectives being met. In fact, in extreme cases you’re going to simply annoy your users, which is the last thing you want to be doing. Despite that, you’ll find many products in the market that take exactly that approach to ‘A/B testing’ in-app campaigns.
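
To make that concrete, here is a minimal sketch with made-up numbers (not real campaign data) showing how the ‘winner’ of a test can flip depending on which metric you optimise for:

```python
# Hypothetical results for two message variants. Variant B's click-bait copy
# drives more taps, but variant A drives more actual revenue.
variants = {
    "A (plain message)":   {"impressions": 10_000, "clicks": 400, "revenue": 5_200.0},
    "B (click-bait copy)": {"impressions": 10_000, "clicks": 900, "revenue": 3_100.0},
}

def ctr(v):
    # Click-through rate: clicks divided by impressions.
    return v["clicks"] / v["impressions"]

def revenue_per_user(v):
    # Revenue normalised by the number of users who saw the message.
    return v["revenue"] / v["impressions"]

winner_by_ctr = max(variants, key=lambda k: ctr(variants[k]))
winner_by_revenue = max(variants, key=lambda k: revenue_per_user(variants[k]))

print(f"Winner by click-through rate: {winner_by_ctr}")    # B 'wins' on clicks...
print(f"Winner by revenue per user:  {winner_by_revenue}")  # ...but A wins on revenue
```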

How We Addressed The Issue

There is no great mystery to the direction we took in order to address these issues. Our UI team spent some time thinking clearly about what metrics mattered to our users, and perhaps more importantly how best we could show the delta between competing approaches. This isn’t quite as simple as it sounds: we worked through plenty of prototypes along the way…

Where we got to was a results screen that clearly showed comparative performance on the metrics our customers had told us were most important (revenue, user engagement), plus a customizable primary and secondary event. The ability to measure against two events allows users to track both starting and finishing a registration process, for example, or the purchase of virtual currency with real currency followed by the purchase of virtual goods with that virtual currency. Funnels like these are a vital part of most real-world apps, and we wanted to ensure we modelled them.
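
As an illustration of that two-event model, here is a hypothetical sketch counting how many users who fired a primary event (starting registration) went on to fire the secondary event (finishing it), per variant. The event names and stream format are assumptions for illustration, not Swrve’s actual schema:

```python
from collections import defaultdict

# (variant, user_id, event_name) tuples, as they might arrive from an event stream.
events = [
    ("A", "u1", "registration.started"),
    ("A", "u1", "registration.finished"),
    ("A", "u2", "registration.started"),
    ("B", "u3", "registration.started"),
    ("B", "u3", "registration.finished"),
    ("B", "u4", "registration.started"),
    ("B", "u4", "registration.finished"),
]

primary = defaultdict(set)    # users who fired the primary event, per variant
secondary = defaultdict(set)  # users who fired the secondary event, per variant

for variant, user, name in events:
    if name == "registration.started":
        primary[variant].add(user)
    elif name == "registration.finished":
        secondary[variant].add(user)

for variant in sorted(primary):
    started = len(primary[variant])
    # Only count completions from users who actually started.
    finished = len(secondary[variant] & primary[variant])
    print(f"Variant {variant}: {finished}/{started} completed registration "
          f"({finished / started:.0%})")
```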

We also looked to make defining those metrics and events as easy as possible during the creation of the campaign itself, so that deciding how success will be measured for each individual campaign happens while the user is still in the ‘campaign creation’ mindset.
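
To give a feel for what that might look like, here is a hypothetical sketch of a campaign definition with its success metrics attached at creation time. The field names are illustrative assumptions, not Swrve’s actual campaign schema:

```python
# Success criteria are declared alongside the campaign itself, so every
# variant comparison is made against these metrics rather than raw clicks.
campaign = {
    "name": "spring_sale_push",
    "variants": ["A", "B"],
    "success_metrics": {
        "built_in": ["revenue", "engagement"],      # always-on comparisons
        "primary_event": "registration.started",    # customizable per campaign
        "secondary_event": "registration.finished",
    },
}

print(campaign["success_metrics"]["primary_event"])
```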

As a result, we’ve made campaigns more effective than ever before: variants are chosen not on the basis of largely irrelevant metrics like click-through, but on the revenue and engagement numbers that really matter, and just how much each individual campaign is delivering for the business is shown front and centre in the results screen.