Advertising during the Super Bowl is a huge investment. Brands spend upwards of $5 million on a 30-second spot because it’s the biggest TV advertising platform of the year. It’s an opportunity to reach over 100 million viewers with messages about brands, products and services – but what kind of return on investment (ROI) do these ads generate? How can we quantify whether an ad actually causes someone to change their mind about something?
We can answer those questions with data science.
Before Super Bowl 50, we chose four ads across different industries and tested them with Civis Analytics’ proprietary multilevel Bayesian creative testing algorithm to see which ads directly influenced viewers’ attitudes and intentions. See our explanation below to understand how this methodology differs from other studies.
The results of this analysis boil down to two numbers. The first is the ‘Probability ad works’: the probability, given the experimental result, that the ad increases brand favorability or intent to purchase. The second is the ad’s actual observed ‘Impact’ on brand favorability or intent to purchase, overall and among subgroups.
Here’s what we found:
Budweiser’s Helen Mirren no-nonsense public service announcement
Helen Mirren’s no-nonsense public service announcement against drunk driving actually increases the likelihood that viewers say they’ll have a Bud the next time they have a cold one.
Ultrasounds + Doritos
It is attention-grabbing but not necessarily appetite-inducing.
Civis Analytics believes that when we provide numbers to inform a business decision, those numbers should be immediately actionable. That’s why we boil down the results of Civis Analytics’ rapid message testing to a few meaningful metrics.
Probability ad works. The ‘Probability ad works’ metric quantifies whether our observed evidence reflects a real effect or random noise. After observing an ad’s impact on survey respondents’ brand favorability or intent to purchase, we can determine the probability that the ad had a positive impact. This is incredibly important when testing ads because an ad’s actual impact on an attitude or behavior is usually small. Our methods distinguish the signal from the noise and help clients make decisions under uncertainty.
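To build intuition for this metric, here is a minimal sketch of how a posterior ‘probability the ad works’ can be computed for a simple two-arm survey experiment. The counts are invented for illustration, and the toy Beta-Binomial model below is not Civis Analytics’ multilevel Bayesian algorithm; it only shows the general idea of turning a small observed difference into a probability statement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey counts (illustrative, not real study data):
# respondents reporting favorable brand attitudes in each arm.
treated_favorable, treated_n = 540, 1000   # saw the ad
control_favorable, control_n = 510, 1000   # saw a placebo ad

# Flat Beta(1, 1) priors; draw from each arm's posterior favorability rate.
samples = 100_000
p_treated = rng.beta(1 + treated_favorable,
                     1 + treated_n - treated_favorable, samples)
p_control = rng.beta(1 + control_favorable,
                     1 + control_n - control_favorable, samples)

# 'Probability ad works': posterior probability the effect is positive.
prob_ad_works = (p_treated > p_control).mean()
print(f"Probability ad works: {prob_ad_works:.2f}")
```

Note how a raw difference of only three percentage points still yields a fairly high posterior probability of a positive effect, which is exactly the kind of small-but-real signal this metric is designed to surface.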
Impact percentage. The bars for each group show the average treatment effect (positive or negative) of being exposed to the ad. Positive (green) bars show the observed improvement in favorability or intent to purchase. Just like in a clinical trial, we compare those who received the treatment (saw the ad) with those who received a placebo (did not see the ad). And just like in clinical trials, this methodology allows us to make claims about causality rather than just correlation. To read more about this, see the Wikipedia page on randomized controlled trials.
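The average treatment effect itself is a simple quantity: the difference between the mean outcome in the treated group and the mean outcome in the placebo group. A small sketch with made-up responses (not data from this study) on a binary intent-to-purchase question:

```python
import numpy as np

# Hypothetical 0/1 answers to "Do you intend to purchase?"
# (illustrative numbers only).
treated = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])  # saw the ad
control = np.array([0, 0, 1, 0, 1, 0, 1, 0, 0, 1])  # placebo group

# Average treatment effect: difference in group means,
# the same comparison a clinical trial makes.
ate = treated.mean() - control.mean()
print(f"Impact: {ate:+.0%}")
```

Because assignment to the two groups is randomized, this difference can be read causally rather than as a mere correlation.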
This post was authored by Masa Aida, Richard Barney, David Martin, and Junesoo Seong.