The title we focused on for this test was MusicRadar.com, a site that’s all about guitars, drums, and other such musical things that interest this up-and-coming London band, among thousands of other rockers, metalheads, punks, and even the odd hipster or two.

MusicRadar had a number of people on its email list who were, in the traditional sense of the word, dis-engaged. If you send out regular, information-based email newsletters, you’ll have heard this term before.

‘Dis-engaged.’ Generally, what this means is that in the last X months they’ve not opened or clicked one of your emails. And, well, this is really annoying. Your content is great, your emails look beautiful, so what gives?

Is it personal? You should for sure take it personally. You should feel very, very hurt about this.

What people normally do is send out a “Please love me” email

The prevailing wisdom in email marketing is to follow this logic (there’s a rough sketch of the segmentation step after the list):

  1. Pick an arbitrary date that defines dis-engagement (say, three months).
  2. Create a segment for that group.
  3. Send them an email that asks them to confirm they want to keep getting your messages.
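
If you wanted to build that segment outside your email platform, the segmentation step looks roughly like the Python sketch below. The file name, column names, and 90-day cutoff are all assumptions for illustration, not MusicRadar’s actual setup.

```python
# Hypothetical sketch of step 2: pull the 'dis-engaged' records from an
# export of your subscriber list. File and column names are assumptions.
from datetime import datetime, timedelta

import pandas as pd

CUTOFF = datetime.utcnow() - timedelta(days=90)  # the arbitrary three months

# Assumed export: one row per subscriber, with the date of their last open
# and last click (blank if they never have).
subscribers = pd.read_csv("subscribers.csv", parse_dates=["last_open", "last_click"])


def never_or_stale(column):
    """True where the subscriber has no activity at all, or none since the cutoff."""
    return subscribers[column].isna() | (subscribers[column] < CUTOFF)


disengaged = subscribers[never_or_stale("last_open") & never_or_stale("last_click")]
disengaged.to_csv("disengaged_segment.csv", index=False)
print(f"{len(disengaged)} dis-engaged records out of {len(subscribers)}")
```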

This method, when done well, usually elicits about a 2-5% re-engagement rate. 

The thing is, it takes time to create the segment and write the email. What Phil and I wondered was, ‘Is it worth the time? What is the true ROI of this method?’

So, we ran a test.

A three-sample randomized controlled experiment

We limited this experiment to point #3 above, to see whether sending the ‘please confirm your interest’ email is the dominant strategy in this situation.

Note that the scope of this experiment was limited to this one point. An interesting follow-up would be to look at time-series engagement data, determine optimal re-engagement points, and send out emails at those points.

But, since this is a simple blog post and not a novel, that’s a topic to be looked at another time in another experiment.

Anyway, the arbitrary cut-off we picked to identify ‘non-engaged records’ was three months, and the segment we created contained all non-openers and non-clickers from that period. We randomized the list and split it into three equal groups (there’s a quick sketch of this split after the list):

  1. Control: These records got sent the email newsletter as per normal.
  2. Re-activation: These records got a “Click here to confirm” email.
  3. Re-sends: These records got the same email newsletter as #1 but re-sent to non-openers daily for seven days after the initial send.
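
Here is roughly what that randomize-and-split step looks like in Python. It’s a sketch under the assumption that the dis-engaged segment sits in a CSV like the one above; it is not necessarily how the MusicRadar team actually did the split.

```python
# Hypothetical sketch of the randomisation: shuffle the dis-engaged segment
# once, then cut it into three equal groups. File names are assumptions.
import pandas as pd

disengaged = pd.read_csv("disengaged_segment.csv")

# Shuffle with a fixed seed so the group assignment is reproducible.
shuffled = disengaged.sample(frac=1, random_state=42).reset_index(drop=True)

third = len(shuffled) // 3
groups = {
    "control": shuffled.iloc[:third],
    "reactivation": shuffled.iloc[third:2 * third],
    "resends": shuffled.iloc[2 * third:],  # picks up any remainder rows
}

for name, group in groups.items():
    group.to_csv(f"group_{name}.csv", index=False)
    print(f"{name}: {len(group)} records")
```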

Our logic was simple: the control group, or ‘doing nothing,’ is the easiest option. The re-activation group is the hardest option. The re-send group is the middle ground – you’re just re-purposing content, so all you have to do is click a couple of buttons to automate the re-sends and it magically runs itself after that.

Here’s what the newsletter looked like:

MusicRadar Newsletter

And here’s what the re-activation email looked like:

MusicRadar Re-activation Email

Also, we wanted to give the re-activation group the best chance possible, so we split-tested three subject lines on a subset of the segment and sent the ‘winner’ to everyone else.

Then, we re-sent to non-clickers three days after the initial send (all of this was automated). We wanted to give the re-activation email every opportunity available, so that our experiment’s results would be useful and robust.
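
The ‘winner’ selection is nothing fancier than picking the subject line with the best open rate on the test subset. A tiny sketch with made-up counts (the real send and test logic lived in the email platform’s automation):

```python
# Hypothetical counts for the three-way subject line split test.
test_results = {
    "Subject line A": {"sent": 1_000, "opens": 62},
    "Subject line B": {"sent": 1_000, "opens": 85},
    "Subject line C": {"sent": 1_000, "opens": 71},
}

# The 'winner' is simply the candidate with the highest open rate.
winner = max(test_results, key=lambda s: test_results[s]["opens"] / test_results[s]["sent"])
print(f"Send to the rest of the segment with: {winner}")
```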

Methodology

Just to set your expectations, we’re not going to discuss Future’s exact list size here, or the number of dis-engaged records. That’s commercially sensitive information!

Instead, what we’re going to do is look at the relative lift delivered by each of the above methods.

We’re going to consider the Control Group as the baseline, and look at how much better or worse groups two and three performed when compared to it.

In essence this is what matters – the nominal numbers are just numbers; the relative numbers provide insight that you can apply in your own re-engagement strategy.

Nerd alert: the methodology we used treated each group as a normal approximation of a binomial distribution with a continuity correction.

This method passed the basic litmus test of validity (where np >= 10 and n(1-p) >= 10). We then used standard hypothesis testing to determine the level of statistical significance. In essence, we were comparing means of binary outcomes (did re-engage or did not re-engage) across large data sets.

Therefore, while other methodologies may also have been valid, we are confident that the following results are statistically robust.
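
For the statistically curious, here’s a minimal sketch of that test in Python: a two-proportion z-test using the pooled normal approximation with a continuity correction, plus the rule-of-thumb validity check. The counts below are placeholders, not Future’s real numbers.

```python
# Minimal sketch of the significance test described above. Counts are
# placeholders purely for illustration.
from math import sqrt

from scipy.stats import norm


def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test with a continuity correction; returns (z, one-sided p)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    correction = 0.5 * (1 / n_a + 1 / n_b)  # continuity correction
    z = (abs(p_a - p_b) - correction) / se
    return z, norm.sf(z)                    # one-sided p-value


# Placeholder counts: 150 re-engagements out of 5,000 vs 60 out of 5,000.
z, p = two_proportion_z(successes_a=150, n_a=5_000, successes_b=60, n_b=5_000)
print(f"z = {z:.2f}, one-sided p = {p:.4g}, confidence = {1 - p:.1%}")

# The rule-of-thumb validity check mentioned above.
for n, k in [(5_000, 150), (5_000, 60)]:
    p_hat = k / n
    assert n * p_hat >= 10 and n * (1 - p_hat) >= 10
```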

Ok, enough caveats and waffling, how about some results?

Results: re-activation email

The Re-activation message got a wee bit of traction, about a 5% uplift on the Control Group. Prevailing email marketing logic would give Phil and the MusicRadar team a hearty pat on the back. Well done, dudes!

Hang on though. When we put the result through the rigors of statistical significance testing, a paltry 70% confidence level reared its ugly head. If this were a clinical trial for a new pharmaceutical, you’d run the risk of killing a lot of people.

If this were an email campaign (which it was), then you run the risk of spending a lot of time and money constructing a re-activation campaign when there is no conclusive evidence that it delivers better results than simply doing nothing.

Results: seven days of re-sends to non-openers

The Re-send method (re-sending the newsletter daily for seven days to non-openers) got lots of traction.

Typical right? An email guy writing a blog about how sending out more emails is better than not. I’ve heard this before.

Well, we’re not making it up. Here are the stats:

Resends vs Control group: 255% uplift, >99.9% confidence

Resends vs Reactivation group: 237% uplift, >99.9% confidence
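
In these figures, ‘uplift’ means the relative lift in re-engagement rate over the comparison group, so a 255% uplift is roughly 3.55 times the baseline rate. A quick sanity check (the baseline rate below is made up; only the uplift figure comes from the experiment):

```python
# Relative uplift vs. multiplier: a 255% uplift == 3.55x the baseline rate.
def uplift_pct(treatment_rate, control_rate):
    return (treatment_rate / control_rate - 1) * 100

baseline = 0.02                       # hypothetical control re-engagement rate
resend = baseline * (1 + 255 / 100)   # apply the measured 255% uplift
print(resend / baseline)              # -> 3.55, i.e. more than triple the baseline
print(uplift_pct(resend, baseline))   # -> 255.0
```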

The results don’t lie. The uplift is huge, and the confidence level indicates an extremely high likelihood that the result would hold up if the experiment were repeated.

By simply clicking a few buttons and re-sending the same campaign out multiple times, Phil and the MusicRadar team generated more than triple the re-activations when compared with either doing nothing or sending a traditional re-activation message.

But don’t take it from me

Here’s what Phil from Future Publishing has to say about it:

Initially, we thought that the re-activation message gave a great customer experience… except in the end, no customers experienced it!  By re-sending our campaign multiple times, we were easily able to re-activate a huge number of records on our list… with no people or design cost, just pure transmission.

Less work and more emails drove higher response in this experiment

What we’re not saying is ‘send out the same email 100 times.’ But, if you’re planning on a re-activation campaign and are hoping for the standard response rates, aren’t you selling yourself short?

When you consider the total cost of a re-engagement email (the time spent writing the copy, getting it approved, uploading it into your email platform, and so on), does it really deliver ROI?

Have a go!  Run this test for yourself and see what the results are. The truth may surprise you.

Do you have any success or failure stories about re-activating your lists?  Share them below in the comments section.