E-Marketing Performance Blog

PPC Testing Made Easier with AdWords Campaign Experiments

If you take your PPC campaigns seriously (why wouldn’t you?), you’re always testing. Always. It’s the only way to achieve long-term growth and gain insights that will carry over into all of your other marketing channels. One problem that has been inherent in PPC from the beginning is the inability to run true A/B split tests with variables like keywords, bids, ad text, ad groups, match types, dynamic keyword insertion, etc.

Yes, you could test them, but only by comparing metrics from different time periods (ads being the exception). For example, you’d have to run ads at a certain bid price for a while, change it, and then run them at the new bid price for a while. Then you’d compare the results from the two periods. The problem? You’d be tempted to attribute any differences in those key metrics to the change you made. But fluctuations in demand, shifts in competitor tactics, and uncontrollable circumstances (special events, etc.) can complicate things.

Google’s example of this involves advertising for soccer balls.  “Let’s say you’re advertising soccer balls, and you decide to increase your bids to get more traffic. Two days later, the World Cup starts, and your clicks and impressions increase substantially. If you had simply raised the bids in your campaign without running an experiment, you wouldn’t know how much of the increase in traffic is due to the World Cup, and how much is a result of you increasing bids.”

Let’s say you raised your bids at the beginning of June and noticed this trend when doing analysis in July…

Alright, looks great.  Let’s go ahead and keep that new bid.  What?  What’s that?  That might not be the best thing.  Well now, why would that be?

Ouch. That’s the web search volume trend for that keyword phrase. Not so fast, my friend.

Enter the newest “seedless watermelon” in the AdWords system: AdWords Campaign Experiments (ACE). With ACE, you can run simultaneous split tests on most of the key variables in your campaigns by splitting traffic between your “control” group (the original) and your experiment group…AND…you can analyze the results of your tests before you apply them to all auctions. This lowers the risk of diving into new, unproven strategies by letting you control the amount of traffic you send to your experimental groups, which ultimately helps you make better decisions in your optimization efforts. You can split your traffic in 10% increments, from 90/10 all the way to 10/90.
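Conceptually, the split happens at the auction level: each time your ad is eligible to show, that auction is randomly assigned to the control or the experiment according to the percentages you chose. Here is a minimal Python sketch of that idea — the function name and numbers are illustrative, not AdWords’ actual implementation:

```python
import random

def assign_auction(experiment_share):
    """Randomly assign a single auction to 'control' or 'experiment'.

    experiment_share is the fraction of traffic (0.0 to 1.0) routed
    to the experiment group, e.g. 0.2 for an 80/20 split.
    """
    return "experiment" if random.random() < experiment_share else "control"

# Simulate 10,000 auctions with an 80/20 control/experiment split.
random.seed(42)  # fixed seed so the simulation is repeatable
counts = {"control": 0, "experiment": 0}
for _ in range(10_000):
    counts[assign_auction(0.2)] += 1

# Roughly 2,000 of the 10,000 auctions land in the experiment group.
print(counts)
```

Because assignment is random per auction, both groups see the same demand fluctuations, competitor shifts, and special events at the same time — which is exactly what the before/after comparison could not give you.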

The cool thing about this is that if you want to run a low-risk experiment and send 80% of your traffic to your control group and 20% to your experiment group, you can analyze the results and see whether the changes performed better. If they did, you can run what is called a holdback experiment before fully applying the changes to your campaigns. A holdback experiment runs the exact same experiment again, but this time with the control at 20% and the experiment at 80%. This way, you confirm that the positive effects of your experiment hold up as it is exposed to a larger share of traffic.

When you go to analyze an experiment, you want to make sure that the differences in your numbers are meaningful rather than the result of random chance. Statistical significance is calculated based both on the number of auctions your campaign participated in and on the size of the differences in metrics. Google AdWords displays icons in your campaign when the math indicates that you can be 95%, 99%, or 99.9% confident that the differences are meaningful, and not just due to chance.

The icons are arrows that show whether a particular element you’re experimenting with has achieved statistically significant results, and how confident you can be that those results will carry over to your campaign if you apply the experiment: one arrow means there is a 5% probability your results occurred due to chance, two arrows means a 1% probability, and three arrows means just a 0.1% probability.
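Under the hood, this is a standard significance calculation. Google doesn’t publish its exact formula, so treat the following Python sketch as illustrative only: it runs a two-proportion z-test on click-through rates and maps the result to an AdWords-style arrow count.

```python
import math

def z_test_ctr(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test comparing control vs. experiment CTR."""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled CTR under the null hypothesis that there is no real difference
    p = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(p * (1 - p) * (1 / impressions_a + 1 / impressions_b))
    return (p_b - p_a) / se

def significance_arrows(z):
    """Map a z-score to an arrow count like the AdWords icons."""
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    if p_value < 0.001:
        return 3   # 99.9% confident
    if p_value < 0.01:
        return 2   # 99% confident
    if p_value < 0.05:
        return 1   # 95% confident
    return 0       # not statistically significant

# Example: control gets 100 clicks on 10,000 impressions (1.0% CTR),
# experiment gets 150 clicks on 10,000 impressions (1.5% CTR).
z = z_test_ctr(100, 10_000, 150, 10_000)
print(significance_arrows(z))  # two arrows: 99% confident
```

Note how both inputs matter, just as the post says: the same CTR gap on ten times fewer impressions would produce a much smaller z-score and no arrows at all.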

The introduction of this new feature saves account managers time and makes testing in your AdWords account much more accurate, efficient, and profitable.

Mike Fleming

