What’s Next for Digital Advertising? Creative Testing

Reposted from my employer's blog.

Every planner lit up at the prospect of the perfectly optimized media that programmatic advertising promised. The latest in a long line of industry Cinderella stories, programmatic represents the most significant boost in performance since the IAB's adoption of Rising Stars. But with every rise comes a plateau prediction or two, warning that programmatic will eventually run into diminishing returns.

So marketers need to ask themselves, "What's next?" How can brands and agencies continue to eke out results and improve performance once programmatic hits its ceiling? The answer is clear: creative optimization.

Disclaimer: there is plenty of technology at your disposal (ad tech vendors, systems, metrics, and measurement tools) that can help you optimize creative. PointRoll is, of course, among them, but we are not going to talk about technology here. Instead, I want to focus on the broader question of how to strategically conduct creative testing so that you can experiment, learn, and yield statistically significant findings.

First, figure out exactly what you want to learn. Determine the objective of your test, and pick the testing methodology that corresponds. If you want to test the color of a CTA button, use an A/B test and run one variation head to head against the other. If you want to test multiple aspects of a single creative (e.g., CTA color and image), use a multivariate methodology. The goal should define the testing method.
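To make the difference in scale concrete, here is a minimal Python sketch that enumerates the versions each methodology would need. The variables and values (CTA color, hero image) are purely illustrative assumptions, not anything prescribed by a particular ad platform.

```python
from itertools import product

# Hypothetical creative variables, for illustration only.
cta_colors = ["green", "orange"]
hero_images = ["product_shot", "lifestyle_shot"]

# A/B test: one variable, each variation run head to head.
ab_versions = [{"cta_color": c} for c in cta_colors]

# Multivariate test: every combination of the variables under test.
mv_versions = [
    {"cta_color": c, "hero_image": i} for c, i in product(cta_colors, hero_images)
]

print(len(ab_versions))  # 2 versions to traffic
print(len(mv_versions))  # 4 versions to traffic; combinations grow multiplicatively
```

That multiplicative growth is exactly why the goal should define the method: every extra variable multiplies the number of versions that each need enough traffic to produce a readable result.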

Next, you will need to determine how much of your media plan to dedicate to your test. The best way to approach this is to ask yourself how much of your media you are willing to put at risk. A good starting place is 20% of your total campaign volume, keeping the other 80% unaffected by the testing parameters.
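As a quick back-of-the-envelope check on that split, here is a tiny sketch; the campaign volume is an invented number, so substitute your own plan totals.

```python
# Hypothetical campaign volume; swap in your own plan numbers.
total_impressions = 5_000_000
test_share = 0.20  # the slice of media you are willing to put at risk

test_impressions = int(total_impressions * test_share)        # 1,000,000 go into the test
untouched_impressions = total_impressions - test_impressions  # 4,000,000 stay on the existing plan

print(test_impressions, untouched_impressions)
```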

Following allocation, isolate the environment. The environment a test runs in can skew results significantly if you aren't careful. For example, if you are testing which CTA performs best on mobile devices (i.e., swipe or tap) and your testing group includes desktop placements, you have introduced a confounding variable into your results. That is precisely why you also need a control group.

It is important to maintain some control over your test. Control groups should start out equal in size to your test group of impressions. The test group should be subjected to the variables you plan on testing, while the control group will represent your existing strategy with no alterations or variability.
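Here is a minimal sketch of both ideas, isolating the environment and then carving out equal-sized test and control groups. It assumes a simple list of placement records with invented field names; real placement data would come from your ad server or DSP.

```python
import random

# Hypothetical placement records; field names are invented for illustration.
placements = [
    {"id": n, "device": random.choice(["mobile", "desktop"])} for n in range(10_000)
]

# Isolate the environment: a swipe-vs-tap CTA test should only touch mobile placements.
mobile_only = [p for p in placements if p["device"] == "mobile"]

# Randomly split the isolated pool into equal-sized test and control groups.
random.shuffle(mobile_only)
half = len(mobile_only) // 2
test_group = mobile_only[:half]             # receives the CTA variations under test
control_group = mobile_only[half:2 * half]  # runs the existing creative, untouched

print(len(test_group), len(control_group))
```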

Before you press the launch button, remember that it is imperative you keep the variables simple at first. The more objectives you add to your tests, the less clear the findings will be. You’ll be able to scale your tests out to include more variables as you progress through your findings, but trying to test everything at the beginning will muddy the waters and keep you from learning anything.

Once your ad tags are live, you get to monitor the performance data as it comes in. It's tempting to tweak the variations while the test runs, but don't; doing so would mean starting over. You are comparing users' reactions to the creative versions as they run in real time, and any gap or mid-flight change will contaminate the data.

Let it ride. Results only reveal themselves when a test reaches statistical significance. What's statistical significance? A result that is unlikely to have occurred randomly and is instead very likely attributable to a specific cause. For creative testing, you want to ensure each version has a chance to reach a set number of positives (i.e., clicks, interactions, video plays, or whatever your metric of success is). Calculators are available to help you determine how many positives you will need for your test. For this example, using an A/B test with "click" as the metric of success and working backwards from a benchmark click-through rate of 0.2%, obtaining 100 clicks (positives) would require 50,000 impressions. A multivariate test would require additional impressions, but the same rule applies.
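The arithmetic behind that example is worth spelling out. This sketch simply divides the target number of positives by the benchmark rate, using the figures from the paragraph above (100 clicks at a 0.2% benchmark); it is a rough planning estimate, not a substitute for a proper significance calculator.

```python
def required_impressions(target_positives: int, benchmark_rate: float) -> int:
    """Impressions a version needs in order to reach the target number of
    positives, assuming it performs roughly at the benchmark rate."""
    return round(target_positives / benchmark_rate)

# A/B test, "click" as the success metric, 0.2% benchmark, 100 clicks per version.
print(required_impressions(target_positives=100, benchmark_rate=0.002))  # 50000
```

Since each version needs the chance to reach its own set of positives, multiply that figure by the number of versions in flight when sizing the test.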

With your test complete, your next task is to analyze the results. Resist the urge to look at the results immediately. Then, after you immediately look at the results anyway, start from the top: revisit the goals of your campaign and compare the performance data of each test group directly against the control group.
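One way to make that comparison concrete is a simple two-proportion z-test on the click-through rates of a test group versus the control. This standard-library sketch is just one reasonable significance check, and the counts in it are placeholders.

```python
from math import erf, sqrt

def two_proportion_z_test(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int):
    """Two-sided z-test for the difference between two click-through rates."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return z, p_value

# Placeholder counts: the test creative vs. the untouched control group.
z, p = two_proportion_z_test(clicks_a=130, imps_a=50_000, clicks_b=100, imps_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value under 0.05 suggests the lift is not just noise
```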

Not to be the bearer of bad news, but occasionally test results will be inconclusive. An inconclusive test typically means that the element you were testing doesn't affect user behavior in a significant way. While it may seem like a wasted test, any conclusion is a step forward: you can cross that element off the list and move on.

Objective results are always better than subjective assumptions. Creative testing is necessary because it is the only way to empirically measure the response of your audience to your message. Not only is creative testing vital, it’s relatively simple to implement, and your campaigns will flourish as a result.