
Image by Horia Varlan on Flickr.

A well-known and much-discussed component of effective paid search campaign management is testing, with the ultimate aim of improving keyword performance towards a better ROI or a lower CPA.

There are a number of approaches that can be effective, though in my experience there's one in particular that provides real value for our clients.

Our approach is to action change through data-driven decisions, and we use AdWords Campaign Experiments (ACE) as a key part of our testing process to facilitate this.

With ACE, we run keyword A/B positional tests that are contained, measurable, protected from seasonality and as a result, provide clear actions to drive incremental value. 

First, a quick recap: what is ACE?

ACE is testing functionality within AdWords itself that allows advertisers to test two different Max. CPC bids simultaneously for the same keyword.

The advertiser selects a traffic split – this could be 50/50, 60/40 and so on.  Once a split is in place, a Max. CPC percentage multiplier for the experiment version of the keyword needs to be determined. 

Example: 

I have a keyword with a Max. CPC of £1 and set up an experiment with the following conditions:

  • Traffic split = 50/50.
  • Experiment bid multiplier = +50%.

That means for 50% of auctions the keyword's ad rank will be calculated with a bid of £1, and for the other 50% with a bid of £1.50.
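The split above can be sketched in a few lines of Python. This is an illustration of the arithmetic only, not how AdWords implements it internally; the function name is my own.

```python
import random

def experiment_bid(control_bid, multiplier, experiment_share):
    """Return the Max. CPC used for a single auction under an ACE split.

    control_bid      -- the keyword's normal Max. CPC (e.g. 1.00 for £1)
    multiplier       -- experiment bid change, e.g. 0.50 for +50%
    experiment_share -- fraction of auctions routed to the experiment arm
    """
    if random.random() < experiment_share:
        return control_bid * (1 + multiplier)  # experiment arm
    return control_bid                         # control arm

# The example above: £1 Max. CPC, 50/50 split, +50% multiplier.
# Roughly half of auctions bid £1.00 and the other half £1.50.
bids = [experiment_bid(1.00, 0.50, 0.5) for _ in range(10_000)]
```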

AdWords Campaign Experiment Interface

How do you track it?

Web analytics software, such as Google Analytics, Omniture or Coremetrics, is usually the norm for client reporting, and it's therefore necessary to track ACE within it in order to calculate accurate last-click conversions, conversion rates and ROI. The {aceid} AdWords ValueTrack parameter allows you to do exactly that.

Simply append this to the destination URL of a keyword that you are experimenting on:

Example: www.mywebsite.com/?ace={aceid}

A unique ID number will then be dynamically inserted into the landing page URL. This ID can be linked to either the control or experiment version of the keyword.
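On the analytics side, bucketing a visit then comes down to reading that parameter off the landing-page URL and matching it against the known IDs. A minimal sketch, where the ID values and the `classify_visit` helper are hypothetical (in practice you would take the real IDs from your experiment):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical experiment IDs -- substitute the real values from your
# own ACE setup.
CONTROL_IDS = {"123456"}
EXPERIMENT_IDS = {"123457"}

def classify_visit(landing_url):
    """Bucket a landing-page URL into 'control', 'experiment' or 'unknown'
    based on the ?ace={aceid} parameter appended to the destination URL."""
    params = parse_qs(urlparse(landing_url).query)
    aceid = params.get("ace", [None])[0]
    if aceid in CONTROL_IDS:
        return "control"
    if aceid in EXPERIMENT_IDS:
        return "experiment"
    return "unknown"

print(classify_visit("http://www.mywebsite.com/?ace=123457"))  # experiment
```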

Why use ACE? 

To make data-driven decisions, you need data whose context is known and whose conditions are consistent. When comparing performance across two different time periods, context is a common problem: it's difficult to ensure the two periods are genuinely comparable.

To know the context of a situation, you need to know the conditions and circumstances. Examples of these are: 

  • Paydays
  • Sales
  • Weather shifts
  • Seasonality

In order to avoid different conditions affecting the keywords, we need to test them simultaneously. Because the same conditions apply to both versions, the context is always known; ACE therefore ensures a fair test and provides actionable data.

Case study

A keyword for one of our clients had a high ROI. It was in position two and we wanted to push it into position one, but we required insight into the cost/reward implications before making the change across 100% of auctions.

There are some results you can predict fairly accurately: CTR will likely increase, for example, and CPCs will rise. But what about the conversion rate?

This will be the key metric when considering whether this tested bid can be profitably executed in the long term.

With the above hypothesis, the following conditions were applied:

  • Traffic split = 70/30 
  • Bid multiplier = +15% 
  • Duration = two weeks

Results:

  • CTR rose by 18.5%
  • CPCs rose by 8% 
  • Conversion rate increased by 71%!  
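Lifts like these come from comparing each metric across the two arms of the experiment. A minimal sketch of that comparison, using hypothetical per-arm totals (not the client's actual data):

```python
# Hypothetical per-arm totals over the test window -- illustration only.
control    = {"impressions": 10_000, "clicks": 400, "cost": 320.0, "conversions": 12}
experiment = {"impressions": 10_000, "clicks": 474, "cost": 409.5, "conversions": 24}

def lift(metric_control, metric_experiment):
    """Relative change of the experiment arm versus the control arm."""
    return metric_experiment / metric_control - 1

# CTR = clicks / impressions; CPC = cost / clicks; CVR = conversions / clicks
ctr_lift = lift(control["clicks"] / control["impressions"],
                experiment["clicks"] / experiment["impressions"])
cpc_lift = lift(control["cost"] / control["clicks"],
                experiment["cost"] / experiment["clicks"])
cvr_lift = lift(control["conversions"] / control["clicks"],
                experiment["conversions"] / experiment["clicks"])
```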

Consequent actions

The test ran for two weeks, and once it ended the experimental bid was applied to all traffic.

Two weeks after the experiment concluded, we compared seven days of top-line performance to the equivalent week in the previous month; it was clear that the experiment had been a success.

Top-line clicks for the keyword were up 12% with a 4% increase in CTR. Cost had risen by 24%, as expected.

However, most encouragingly, conversion rate was up 36% and revenue up 39%; this equated to a 12% improvement in ROI.
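Those last figures are internally consistent: if ROI is revenue divided by cost, a 39% revenue rise against a 24% cost rise works out to roughly a 12% ROI improvement. A quick check, with the helper function name my own:

```python
def roi_lift(revenue_lift, cost_lift):
    """Relative change in ROI (revenue / cost), given relative changes
    in revenue and cost expressed as fractions (0.39 for +39%)."""
    return (1 + revenue_lift) / (1 + cost_lift) - 1

# Case-study figures: revenue +39%, cost +24%
lift = roi_lift(0.39, 0.24)
print(f"{lift:.1%}")  # ~12% ROI improvement
```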

Summary

We expected more traffic volume, which we got, but we also expected to have to sacrifice efficiency to gain it. In fact, the opposite happened, and we could only have determined this through ACE. By testing different variations of a keyword under the same conditions, the data gathered is accurate and actionable.

To keep an experiment within the same context is to remove speculation and replace it with certainty. Certainty, in a landscape that changes as rapidly as search does, is an extremely valuable commodity. 


Published 10 May, 2013 by Luke Boudour

Luke Boudour is Senior PPC Analyst at Forward3D and a contributor to Econsultancy. You can connect with Luke on Google Plus and Twitter. 
