Survata Presents at ARF Audience Measurement 2017

Survata and Lotame

After being selected as a Winning Paper by the Advertising Research Foundation, we were invited to speak at their Audience Measurement conference last week in New York. We shared a new technology to improve online advertising research: Segment Validation. We believe that advertisers should validate the audience segments used in programmatic ad buys. Otherwise, how do you know if the segment really has what it claims to have? Here’s Mark Thompson, CEO of the New York Times, referring to this issue: “When we say that a member of the audience is a female fashionista aged 20-30, what’s the probability that that’s actually true?”

Not all audience segments are created equal. A segment’s usefulness is a balance between the size of the segment and how much of the segment actually matches the desired characteristic (the technical terms for this trade-off are precision and recall). Buying unvalidated audience segments is like buying TV ads before Nielsen ratings, or bonds before bond ratings. Programmatic advertising is still the wild west.
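To make the trade-off concrete, here’s a minimal sketch of precision and recall applied to an audience segment. The segment size and match counts below are hypothetical illustrations, not Survata data:

```python
def precision(true_matches_in_segment, segment_size):
    """Share of the segment that actually has the desired trait."""
    return true_matches_in_segment / segment_size

def recall(true_matches_in_segment, total_matches_in_population):
    """Share of all matching consumers that the segment captures."""
    return true_matches_in_segment / total_matches_in_population

# Hypothetical "Moms" segment: 1,000,000 cookies, of which validation
# suggests 600,000 are really moms, out of 10,000,000 reachable moms.
p = precision(600_000, 1_000_000)    # 0.6: 60% of the segment are moms
r = recall(600_000, 10_000_000)      # 0.06: the segment reaches 6% of moms
```

A bigger segment typically raises recall at the cost of precision; validation tells you where a given segment sits on that curve.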

We are not the only ones thinking about this. Here’s an anonymous ad tech exec, whose anonymity most likely stems from the fact that he’s fully aware that his company is delivering shoddy data: “A lot of the data that informs programmatic media buying is unreliable and conflicting. So what brands are spending their money for isn’t necessarily the thing they think they are spending their money for.”

As an example, suppose you want to advertise to moms. There are multiple “Moms” segments from different vendors, and other segments like “New Moms” and “Moms with 2-year-olds.” Which of these segments should you choose in the ad buy? You can’t really know until you’ve validated them. Segments boost signal to varying degrees.

Due to the partnerships and technical integrations between Survata and all of the major DMPs, validating audience segments is straightforward. Even better, the data is automatically pushed back into your DMP, where look-alike algorithms can expand the size of your target audience segment.

Survata is driving the next generation of ad research. In addition to Segment Validation, we also offer Segment Creation based on targeted psychographic profiles. We also conduct ad effectiveness studies in real time, allowing for optimization of ad spend allocation daily instead of just post-campaign reporting.

Our ARF presentation on validating programmatic audiences can be found in the slideshow below.

Online Ad Effectiveness Research Grows Up

“The days of giving digital a pass are over. It’s time to grow up.”
            -Marc Pritchard, Chief Brand Officer, Procter & Gamble, January 2017

When the CBO of P&G tells us to grow up, we listen. And after speaking with clients at last month’s Media Insights Conference, it’s clear that there’s consensus: online advertising research needs to get more sophisticated.

We’re here to help. IAB breaks research down into phases: design, recruitment & deployment, and optimization. We’ll walk through each phase and determine what’s most in need of “growing up.” We’ll also include questions to ask your research partner to help increase the sophistication of your ad effectiveness research.

Design

Let’s start by acknowledging that statistically sound online ad effectiveness research has not been easy to implement at reasonable cost until recently. As IAB notes, “Questions around recruitment, sample bias and deployment are hampering the validity of this research and undermining the industry as a whole.”

Just because perfect research design is challenging to achieve doesn’t mean advertisers should settle for studies with debilitating flaws that lead to biased, unreliable results. In addition to the challenges inherent to good research design, most ad effectiveness research partners have systematic biases due to the way they find respondents, which must be accounted for in the design phase. Within the past year, there has been innovation in this space using technology to reduce or eliminate systematic bias in respondent recruitment.

Assuming you’re able to address the systematic bias of your research partner’s sampling, the major remaining challenge is how you approach the control group. At Survata, we think about this as a hierarchy:

Online Ad Effectiveness Research Control Group Framework

Using a holdout group is best practice, but implementing it requires spending some portion of your ad budget strictly on the control group. In other words, some of your ad budget will be spent on intentionally NOT showing people an ad. A small portion of people in the ad buy will instead be shown public service announcements to establish the control group. We love the purity of this approach, but we also understand the reality of advertising budgets. We don’t view holdout as a requirement for sound online ad effectiveness research. Smart design combined with technology can achieve methodologically sound control groups without “wasting” ad budget.

Along those lines, the Audience Segment approach has become de facto best practice for many of our clients. Basically, you create your control group from the same audience segment that you’re targeting in the ad buy. This isn’t perfect, as there could be an underlying reason that some people in the segment saw the ad but others didn’t (e.g., some people very rarely go online, or to very few websites), but it’s still an excellent approach. It’s the grown-up version of Demographic Matching.

Demographic Matching, in which the control group is created by matching as many demographic variables as possible with the exposed group (e.g., gender, age, income), is still a very common strategy. It’s straightforward to accomplish even with older online research methodologies. But now that online data lets us learn far more useful things about consumers than their demographic traits, this approach is dated.

Simply sampling the general population (GenPop) as a control is undesirable. The results are much more likely to reveal the differences between the exposed and control groups than the effectiveness of the advertising.

Questions for your research partner

  • What are known biases among respondents due to recruitment strategy?
  • What is your total reach? What percentage of the target group is within your reach? Is it necessary to weight respondents from low-incidence (low-IR) populations due to lack of scale?
  • What’s your approach to creating control groups for online ad effectiveness research?
  • For Demographic Matching, how do you determine which demographic characteristics are most important to match?
  • How do you accomplish Audience Segment matching?

Recruitment / Deployment
 
Historically, there were four methods to recruit respondents and deploy surveys: panels, intercepts, in-banner surveys, and email lists. To stomach these methodologies, researchers had to ignore a corresponding flaw: non-response bias, misrepresentation, interruption of the customer experience, or email-list atrophy. In our view, these methodologies are now dated since the advent of the publisher network methodology.

The publisher network works by offering consumers content, ad-free browsing, or other benefits (e.g. free Wi-Fi) in exchange for taking a survey. The survey is completed as an alternative to paying for the content or service after the consumer organically visits the publisher. In addition to avoiding the flaws of the old methodologies, the publisher network model provides dramatically increased accuracy, scale, and speed.

Questions for your research partner

  • What incentives are offered in exchange for respondent participation?
  • What are the attitudinal, behavioral, and demographic differences between someone willing to be in a panel versus someone not interested in being in a panel?
  • What are the attitudinal, behavioral, and demographic differences between someone willing to take a site intercept survey versus someone not interested in taking a site intercept survey?
  • How much does non-response bias affect the data?
  • Are you integrated with the client’s DMP?
  • How long does it take to get the survey into the field, and how long until it’s completed?
  • How does the vendor ensure that exposure bias doesn’t occur?
  • How does the vendor account for straight-liners, speeders, and other typical data quality issues?

Optimization
 
An optimal ad effectiveness campaign returns results quickly, so that immediate and continuous adjustments can be made to replace poorly performing creative, targeting, and placements with higher performing ones. We call this real-time spend allocation. It’s analogous to real-time click-through rate optimization, as it relies on solutions to the same math problem (known as the multi-armed bandit).
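As an illustration of the underlying math (not Survata’s actual optimizer), here’s a minimal epsilon-greedy sketch of the multi-armed bandit applied to spend allocation across two hypothetical creatives:

```python
import random

def choose_creative(avg_lift, epsilon=0.1):
    """Explore a random creative with probability epsilon;
    otherwise exploit the best performer so far."""
    if random.random() < epsilon:
        return random.choice(list(avg_lift))
    return max(avg_lift, key=avg_lift.get)

def update(avg_lift, counts, creative, reward):
    """Incrementally update the running average reward (lift) per creative."""
    counts[creative] += 1
    avg_lift[creative] += (reward - avg_lift[creative]) / counts[creative]

# Simulate 1,000 impressions split between two hypothetical creatives,
# where creative_b's true conversion lift (10%) beats creative_a's (5%).
avg_lift = {"creative_a": 0.0, "creative_b": 0.0}
counts = {"creative_a": 0, "creative_b": 0}
for _ in range(1000):
    c = choose_creative(avg_lift)
    reward = 1 if random.random() < (0.05 if c == "creative_a" else 0.10) else 0
    update(avg_lift, counts, c, reward)
# Over time, spend typically drifts toward creative_b, the higher performer.
```

The same loop applies to targeting and placements: each option is an arm, each day’s survey results are the rewards, and budget continuously shifts toward whatever is measurably working.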

By integrating with DMPs, ad effectiveness research can be cross-tabbed against even more datasets. The results will yield additional insights about a company’s existing customers.

Questions for your research partner

  • Are results reported in real time?
  • How much advertising budget is wasted due to non-optimization?
  • How can DMP data be incorporated to improve ad research?

Conclusion

Flawed research methodologies can’t grow up; they can only continue to lower prices for increasingly suspect data. For online ad effectiveness research to grow up, new methodologies must be adopted.

25 Million Verizon/AT&T Subscribers Up for Grabs in Next Six Months

Verizon and AT&T are the dominant US wireless carriers, with over 70% of subscribers between them. T-Mobile just announced a good quarter driven by new customers, mostly poached from AT&T. We wondered, how many people are going to abandon AT&T and Verizon this year?

We asked AT&T and Verizon wireless customers – specifically those who pay the bill or whose spouse pays the bill – if they intend to switch wireless carriers in the next six months. As you can see below, the majority of people aren’t interested in switching carriers – but there’s a sizable portion of contracts up for grabs.

survata tracks AT&T and Verizon customer loyalty

Combining subscribers “somewhat likely” and “very likely” to switch equates to 14.5 million AT&T customers and 10.5 million Verizon customers up for grabs in the next six months.1

Where are subscribers likely to leave AT&T and Verizon going? 78% of switchers will stay in the top 4 (Verizon, AT&T, T-Mobile and Sprint), while 22% will move to smaller carriers.

Click here for full results.

1. Taking margin of error into account, at 95% confidence interval, the range of customers up for grabs is 9 to 20 million for AT&T, and 6 to 15 million for Verizon.

One Third of Midwestern Smokers Smoke at Least a Pack Per Day

According to the CDC, the percentage of American adults who smoke cigarettes is at its lowest point since tracking began (currently about 17% of American adults smoke). That seems good. But we wanted to dive a little deeper: are American smokers also smoking fewer cigarettes per day? It turns out, no. We surveyed smokers in November 2015 and February 2016, and the distribution of cigarettes smoked per day was nearly identical between the two periods. So while the percentage of American smokers may be going down, the amount of cigarettes smokers consume is steady.

We did notice differences in cigarette consumption among regions, as you can see in the chart below (which includes both the November 2015 and February 2016 periods of the survey).

survata tracks smoking

30% of smokers don’t agree that smoking is bad for your health. This group is roughly ten times as likely to agree with the following statements about smoking: “It’s not as bad for your health as most people think” (18% vs 2% of those who agree that smoking is bad for your health) and “As long as you have the right genes, it’s not bad for your health” (12% vs 1% of those who agree that smoking is bad for your health).

Check out full results here, including the percentage of smokers who do it to reduce stress.

Millennial Intent to Cut Cable Doubles

The young continue to cut the cord on cable. By our measurements, 27% of Americans age 18 to 34 don’t pay for cable or satellite TV service, and another 8% intend to join them in the next six months. That’s an accelerating rate, as you can see in the chart below.

survata tracks cord cutting

Of those intending to cancel cable/satellite TV, 55% cite the high cost as the reason. What are cord cutters watching? Here are their subscription numbers: Netflix (54%), Amazon Prime (24%), Hulu (19%), and HBO Now (5%).

You can see full results here.

How Many Travelers Consider Using Airbnb?

According to investors, Airbnb is worth $25.5 billion. What? Marriott just agreed to purchase Starwood for half that amount. This seems crazy. Here at Survata, we conduct consumer research. We don’t have a stake in Airbnb’s success; we were just curious: How many people even consider Airbnb when they book travel accommodation? Turns out it’s a small number, but it’s growing fast.1

survata tracks Airbnb usage

Most people who consider Airbnb for personal travel are young: two thirds are age 18 to 34. Hardly any business travelers consider Airbnb when booking business travel. Of the 2,015 respondents across both periods of the survey, only 30 reported considering Airbnb for business travel (1.5%).

55% of Airbnb’s US revenue comes from just 5 markets that hold 30% of active units (New York, Los Angeles, San Francisco, Miami, & Boston). Our data shows a similar over-representation of those five cities for personal travelers: they make up 15% of our respondents but 29% of Airbnb customers. There’s likely a local network effect going on – people hear about their friends hosting on Airbnb, then decide to consider it when they leave town.

You can see full results of our Airbnb tracking survey here.

Footnote
1) One statistical qualifier: the margin of error on personal travelers is 3.2%. Perhaps December was a little low, and March was a little high, and in June this trend won’t seem so alarming!

When Will Americans Give Up Car Ownership?

Based on their most recent investment round, Uber is worth more than Ford. GM just invested $500 million in Lyft. Google’s self-driving car recently crashed into a bus, and in a few years there will be millions of self-driving cars crashing into buses globally.

It got us thinking here at Survata… does anyone even drive anymore? So we’ve been asking consumers about cars. It turns out the vast majority of Americans still own a car, but we’ve found an interesting group: the 45% of American car owners who would be willing to give up car ownership. The implications of this are huge. What could cause it? It differs a bit by age, as you can see in the chart below.

survata tracks ridesharing and self-driving car consumer perceptions

How weird is it that self-driving cars are ahead of ride sharing as a reason to give up car ownership? When people want to be driven around… most prefer that a robot do the driving. Frankly it’s a sensible risk assessment, and anything sensible on this topic is a relief after how many people have lost their minds over Uber.

We expect consumer perception to be an important influence on the politicians and bureaucrats who will decide who self-driving cars should be programmed to kill. We’ll keep our finger on the pulse.

Forbes and Wired Change Consumer Perceptions of Ad Blocking

Forbes and Wired recently tried an experiment. They asked website visitors using ad blocking software to turn it off – and denied them access if they refused. This was a risky move for a few reasons.

1) The media coverage of this experiment could increase awareness and usage of ad blocking software (our data does show an uptick in consumer awareness)
2) They could accidentally serve malware ads to consumers after demanding ad blocking software be turned off (Forbes has been accused of this)
3) They could lose readers (while ad blocking readers don’t generate ad revenue, they contribute to total page views which still matters to some advertisers)

We asked consumers about ad blocking in November last year, and again last month. Ad blocking software usage has stayed steady at about 9% of total American internet users.1 But our data2 shows that Forbes and Wired have likely helped move the needle on an important metric: intent to start using ad blocking software.

survata tracks ad blocking

To sum up, though awareness may have ticked up, intent to use dropped significantly. That’s a win for online publishers. Was this a blip or a turning point? Check back with us in May to find out (or add your email address in the box on the right and we’ll send you the next batch of results).

Footnotes:
1. A recent estimate put the US percentage of ad blockers at 15%, though critics have pointed out that the publisher of the data has a vested interest in that number being as high as possible.
2. Chart displays rounded values. The change in “I haven’t used it, but I think I’m going to try it out” is outside the margin of error, i.e., is statistically significant.

We’ve Launched a Netflix Tracker

Which Netflix shows are the most watched? Per their latest letter to investors, Netflix doesn’t plan to tell us.

“We don’t release title‐level ratings as our business model is not dependent on advertising or affiliate fees.”

Which is a shame for us, but logical for Netflix. If the TV networks had a clear picture of viewership, they would be more effective at negotiating licensing deals. If investors could track which big budget Netflix Originals are flops, it could negatively impact the company’s share price. It makes sense for Netflix to hoard this data, and only release the good news.

Well, that’s boring. So we’re tracking monthly viewership of Netflix TV shows. Every month we’ll ask two thousand Netflix subscribers1 which shows they’ve watched in the past 30 days and which Netflix Originals they’re most excited about watching in the future. We’ll see which new shows are hits or flops, and which returning shows build momentum or start to decline.

The chart below displays the top 20 most-watched shows2. See full survey results in a live dashboard.

survata tracks Netflix's most watched originals

Notes on methodology
How do you ask consumers which TV shows they’ve watched on Netflix? It’s deceptively challenging. If we ask the question as free response we rely on unaided recall and introduce typing fatigue bias. Offering a pick list of hundreds of Netflix shows is equally impractical. The list won’t fit on one page (especially on mobile), and we’ve noticed that consumer attention tends to wane beyond eight multiple choice answer options.

We ultimately used both approaches. We started with an open-ended survey asking 668 Netflix subscribers which shows they’ve watched in the past 30 days. After cleaning the data, we had a list of 51 shows licensed by Netflix mentioned by at least four respondents. We then checked this against the “Popular Shows” section on Netflix to make sure we weren’t missing any of those (we weren’t). For the Netflix Originals, we started with Wikipedia, checked Netflix.com again (which totally counts as work) and supplemented with Google searches. We ended up with 24 Netflix Originals in January.

We created our survey using a question format that displays a random list of eight shows to each respondent. Respondents simply check a box next to each show they have seen in the past 30 days. This approach eliminates the biases mentioned above, but reduces the effective sample size for each show. While we survey 2,255 total respondents, each Netflix Original was shown to approximately 751 respondents (2,255 total respondents * (8 randomly selected shows / 24 total shows in list) = 751, which has a 3.6% margin of error). Each licensed show was shown to approximately 353 respondents (2,255 total respondents * (8 randomly selected shows / 51 total shows) = 353, which has a 5.2% margin of error). Due to the smaller effective sample size for each show, the data will be a little noisy from month to month, especially for licensed shows. But that’s fine. We’re interested in the general trends over time.
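The effective-sample-size arithmetic above can be reproduced in a few lines, using the standard worst-case margin-of-error formula at 95% confidence (p = 0.5, z = 1.96):

```python
import math

def effective_sample_size(total_respondents, shows_displayed, total_shows):
    """Expected number of respondents who saw any one show."""
    return total_respondents * shows_displayed / total_shows

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

originals_n = effective_sample_size(2255, 8, 24)  # ~751.7 respondents per Original
licensed_n = effective_sample_size(2255, 8, 51)   # ~353.7 respondents per licensed show
print(round(margin_of_error(originals_n), 3))     # 0.036, i.e., 3.6%
print(round(margin_of_error(licensed_n), 3))      # 0.052, i.e., 5.2%
```

Halving the effective sample size (as with the licensed-show list) inflates the margin of error by roughly sqrt(2), which is why licensed-show numbers will bounce around more month to month.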

If you’re in the media business and want more precise data or different questions, create your own survey, or join Survata Pro and we’ll do the heavy lifting for you.

Footnotes
1: We count a subscriber as anyone who has access to Netflix’s streaming service, even if they don’t pay. For January, 69% of our respondents said they pay for the subscription, and 31% use a friend or family member’s subscription.
2: Survey conducted on January 12th, 2016.