How Implicit Association Measurements Lead to Explicit Business Results
- Define implicit research technology.
- Contrast “fast explicit” techniques with true implicit techniques.
- Make a strong case for combining subconscious data with the best conscious measures the industry has developed (not stated likelihood to purchase or explicit appeal questions).
- Validate the power of incorporating true implicit research by forecasting product sales with up to 94% accuracy before the sales occur (not the common backward-looking models)!
- Demonstrate that not all emotion measures are equal, with research-on-research data from the Sentient Consumer Subconscious Research Lab.
>>Dr. Reid: Thank you all for staying late in the afternoon on Wednesday. I want to start by recognizing a few people in the audience: Clint Taylor, Sarah Hecht, Faith James, Christina Luppi, all from Sentient Decision Science. I want to recognize all of your hard work and contributions to what we’re presenting today.
Without your work, bringing our vision to the market research world wouldn’t be a reality. So thank you for that. And thank you to everybody back at our offices who are currently cranking on subconscious modeling; and trying to predict consumer behavior. They’re not here today, but their work is why we’re here today.
And I hope, when I talk about the subconscious here today, it’s not perceived as some crazy, creative, voodoo type of research like what we just saw in the last session.
But that’s the fear, right? When we talk about the subconscious and what’s out there, and this idea that we’re trying to tap into something that people can’t actually tell us, audiences might think that it’s actually kind of voodoo!
So today’s talk is going to focus on validation. I’m going to first define what implicit research really is and contrast it to good, but non-implicit techniques. I’m then going to talk about how you integrate, or how you can integrate, conscious research methods with subconscious research methods.
And then we’re going to show you five case studies that illustrate how combining conscious and subconscious data together is more accurate in predicting actual in-market sales. We’ll try to do that in 13 minutes.
So, the “can’t say/won’t say” problem in market research. A lot of us know the “won’t say” problem: the sense that people sometimes won’t tell us the true motivators behind their behavior. They don’t want to, for self-presentation reasons; they’re worried about what you might think of them.
So we’ve been worrying about that in market research, and we’ve figured out tricky ways of asking probing questions to get around that “won’t say” problem. But perhaps the larger problem in market research is the “can’t say” problem: the fact that most people don’t have conscious access to the true drivers of their behavior.
And so if we simply ask them explicit survey questions on why they do what they do, or what’s most important to them, can we really expect them to give us answers that are reliable and predictive, if they can’t actually access the information?
We call this “can’t say/won’t say,” and it’s really about System 1 vs. System 2 processing in the human mind.
Just for a level set, how many people are familiar with System 1 vs. System 2 processing? A lot of interest in the subconscious here. Okay. I’ll touch on that a little bit, and how to integrate the two methods. So “can’t say” and “won’t say” are really important.
In order to get at the “can’t say/won’t say,” we need true implicit research techniques. We call this protecting business by protecting the science. You could also say something like, “show me the science, and we’ll show you the money.”
And we’re going to do that today. We’re going to show you the science, and then we’re going to show you the money.
So we want to define what implicit research technology is, and what an implicit research technique is, because there are competing definitions floating around out there.
I would like to offer this one. Implicit research technology is a specialized set of indirect research tools that can reveal System 1 processing by measuring unintentional and uncontrollable responses to stimuli.
And we didn’t just make this up; it’s based on the literature (Nosek et al., 2011). They define implicit research techniques as those that must not be direct, deliberate, controllable self-assessments.
So when we look at and evaluate different research techniques, we’re going to evaluate them according to those three criteria: Is it indirect? Is it non-deliberate? Is it uncontrollable?
And what you’ll find with a lot of explicit techniques that are parading as implicit techniques is that they are indirect, but that they are deliberate and also controllable. So “indirect” does not equal implicit. “Derived” does not equal implicit. Reaction time, by itself, does not equal implicit.
There are other conditions that you have to meet in order for a research technique to truly be implicit. Here are some examples of some implicit research techniques [references slide].
Now, we define it more broadly. A lot of people group biometrics and neurometrics under the heading of neuromarketing. And as I said yesterday, I’d like to get rid of that term, because there is no other kind of marketing than neuromarketing: it is all neurologically processed.
I’d like to think of these as implicit research techniques. They meet the criteria: they’re indirect, they’re not deliberate, they are uncontrollable.
And today, we’re going to focus on “implicit association techniques.” Implicit research technology is the broader category, and implicit association is a sub-category of implicit methods.
Each of these, we have in our consumer subconscious lab. In that lab, we do research on research, to validate it. And anything we bring to market is validated according to its enhancement of predictive validity.
If we put something into our software, it has to meet the criteria of enhancing predictive validity. It’s got to add something, of a predictive nature, to the tools that are in the market place.
And the five case studies that you’re going to see today are all research on research, from our lab, improving the predictive validity of these implicit methods.
What just happened? No, really! Don’t think of a white bear. You can’t help it. Right? When someone says “don’t think of a white bear,” what comes to mind? A white bear. “Oh, I wasn’t supposed to think about that! Stop!”
When somebody shows you a picture of a white bear, what happens? Your associations with white bears come to mind. Social psychologists use that trick to illustrate something called Automatic Irrepressible Cognition. We call it a “prime.”
So when we show you a white bear, or a brand, or a package, or a product, or expose you to an advertisement, we’re priming you. The associations that you have with those stimuli automatically come to mind. You can’t stop them from coming to mind. They come to mind, and then maybe you try to repress them. But they’ve already come to mind. That is System 1 processing. It’s associative in nature. Your associations with stimulus are automatically activated. It happens automatically, without your control.
The same is true when we show you a brand. Let’s do Coca-Cola, we are in Atlanta. Love Pepsi, but let’s do Coca-Cola [laughs]. If we show you the Coca-Cola brand, just like a white bear, your associations with that brand are activated.
So when you see Coca-Cola, what do you think of? Maybe global? Maybe happiness; that’s certainly a common perception of that brand. Refreshing might be an attribute you associate with it. Calories, maybe on the negative side. These associations aren’t always positive. But the associations that you have either move you toward, or away from, whatever that stimulus is. If they’re positive, I’m moving toward it, on average. If they’re negative, I’m moving away, on average.
We’ve known this in marketing for years: that we want our attributes to be top-of-mind. But now, we actually have a vocabulary for talking about it. There is a neural basis for “top-of-mind,” and it is the accessibility of those attributes when you’re exposed to a representation, in this case, of that brand.
So how can we measure those automatic associations? Here’s an example of a non-implicit technique. We call this a “fast explicit technique.” So let’s say I’m going to measure your response time. I’m going to show you logos on a screen, and I’m going to ask you to swipe towards yourself if you like the brand, and to swipe away from yourself if you dislike the brand. And then I’m going to time you to see how long it takes to make that judgement. I’m just trying to measure an affinity.
So as you can see there, Coca-Cola appears on the screen, you swipe it, I measure your response time. What do I have there?
I have an indirect measure, right? Because it’s a response latency. But, does it meet the other criteria? I’m asking you to make an explicit judgement. It’s intentional and it’s controllable. You can modify your answer. And you have to access System 2 thinking. You have to say to yourself, “do I like Coca-Cola?” and then you have to swipe. That is System 2 thinking. It’s if/then. It’s propositional. Even if I limit your response to less than one second, you still have to access System 2 thinking to make that answer. It’s intentional and it’s controllable.
But the point here is that, while that’s very valuable (it’s actually a good technique, and we’ll show how predictive it is), it’s not implicit. In implicit research techniques, we need the judgement to be separate from the brand prime. So we might show you a logo on the screen, like Coca-Cola, and then ask you to engage in a separate judgement task.
Emotions are going to appear on the screen. If they are negative, swipe them away. If they are positive, swipe them toward yourself. The brand appears on the screen for half a second. It’s a prime. Just like the white bear, you can’t stop your associations from becoming active. They influence your ability to make the subsequent judgement. If they’re consistent with that judgement, you can make it faster. If they’re inconsistent, it creates cognitive dissonance, and you’re slower and you make more errors.
So Coca-Cola. Is this emotion negative or positive? It’s negative. If I love Coke, that’s harder for me to do after seeing the Coke logo. That is indirect, uncontrollable, and an unintentional evaluation. I’m not asking you to tell me how you feel about Coke. I’m measuring how you feel about Coke by priming you, and putting you in a separate judgement task. And that’s implicit.
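To make the measurement concrete, here is a minimal sketch of how response latencies from such a task could be scored. This is a generic illustration in the spirit of published implicit association scoring (a standardized latency difference, as in the IAT D-score literature), not Sentient's actual algorithm; the `implicit_score` helper and the latencies are hypothetical.

```python
from statistics import mean, pstdev

def implicit_score(congruent_ms, incongruent_ms):
    """Standardized latency difference: a positive score means the
    respondent is faster when the brand prime is congruent with a
    positive judgement, i.e. a net positive implicit association."""
    pooled_sd = pstdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Hypothetical latencies (ms) for one respondent and one brand:
congruent = [520, 480, 555, 610, 495]    # prime and judgement align
incongruent = [690, 720, 640, 705, 660]  # prime conflicts with judgement
score = implicit_score(congruent, incongruent)  # positive association
```

The key point the sketch captures: the respondent is never asked how they feel; the association is inferred from how the prime speeds up or slows down a separate judgement.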
So, if we’re tapping System 1 in that way, how do we tap System 2? We need to make sure we’re not throwing out our best methods, the ones we’ve developed over the last 30 years.
System 1, as we’ve talked about, is associative. System 2 is deliberative. So when we do surveys, we’re asking people to deliberate. Deliberate on this and give me an answer.
But, that’s not the only way for us to get at deliberative processing. In fact, we have much more advanced ways to do it. And let’s put this in a product case study perspective. So let’s say, you’re trying to understand how well this product is going to do on the market. This is an actual case study, and I’m going to show you the sales results.
This product is coming on the market. It’s a shirt. It’s made by Calvin Klein and it’s offered for $49.99. One way you might evaluate this is to ask a likelihood-to-purchase question.
Now, probably at this conference I’m safe in saying, “gosh, we knew that these questions were not predictive a long time ago.” (You can’t always say that at every market research conference.)
But when we think about combining conscious and subconscious methods, let’s use the best conscious methods that we have. When you’re bringing a new product to market, and you’re trying to understand the influence of brand versus product versus price, you want some derived trade-offs.
It’s a deliberate choice, so you’re accessing System 2, but it’s derived data of that deliberate choice. It’s much better data.
So, here’s a choice-based conjoint study. We’ve got different products. We’ve got different brands. We have different prices. We can mix and match. We can isolate the influence of the product versus the brand versus the price on choice. It’s great data! We use this all the time to try to forecast sales.
That’s what we want to use. We get expected utility formulas, which are wonderful, to some degree. But we know that expected utility is lacking something in its predictive utility.
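For readers who want the mechanics of "derived data of a deliberate choice," here is a toy sketch of estimating part-worth utilities from paired-choice data with a simple logit model. The attribute coding, the choices, and the `fit_logit` helper are all hypothetical illustrations, not the production conjoint engine.

```python
import numpy as np

def fit_logit(X, y, lr=0.1, steps=5000, l2=0.01):
    """Plain gradient-descent logistic regression (no intercept)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # choice probability
        w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
    return w

# Each row is (left profile - right profile), dummy-coded over two
# attributes: [brand_CK, brand_other, price_49, price_69].
# y = 1 means the left profile was chosen.
X = np.array([
    [ 1, -1,  1, -1],   # CK at $49 vs. other brand at $69 -> chose CK
    [ 1, -1, -1,  1],
    [-1,  1,  1, -1],
    [ 1, -1,  1, -1],
    [-1,  1, -1,  1],
    [ 1, -1,  1, -1],
], dtype=float)
y = np.array([1, 1, 0, 1, 0, 1], dtype=float)

w = fit_logit(X, y)
partworths = dict(zip(["brand_CK", "brand_other", "price_49", "price_69"], w))
```

The fitted weights isolate the influence of brand versus price on choice, which is exactly what the talk means by "derived trade-offs": the respondent only makes deliberate choices, and the utilities fall out of the model.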
So let’s show you the sales results. We ran this study. A great client of ours, Macy’s, shared this case study with us and allowed us to share the data, so thank you for that.
They were coming to market with a new line of products for a brand, and they wanted to understand which products were going to be most successful in the market place. And we said “OK, let’s do this. Let’s measure a conjoint, so we have the rational, the conscious, and let’s do our implicit associations, the subconscious, so we have the System 1 processing as well.”
And by the way, we’re going to do this before you go to market with a product. We made the predictions in April; they went to market with the products in May. So the products had already been bought and already been stocked. They knew which products were going to be stocked, and we said, “can you give us the buy on each product?” (that is, how much they spent on each product that was going to market), which they did.
And so we compared the “buy” to the actual sales. As you can see there, the buy predictions are on the X axis, and the sales of each product are along the Y. The r is .53. You might think that’s great! A .53 correlation. But you might think it’s not that great when you consider the r-squared value, which is the amount of variance accounted for: 28%! So gosh, let’s at least do some research.
Well here are the results from the conjoint. The conscious model accounts for 69% of sales. That’s fantastic. But we know that that’s only System 2 processing.
What happens when we take the implicit and we combine it with the explicit? The r-square goes up to 94%! We’re accounting for 94% of actual in-market sales when we combine these two methods.
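Mechanically, "combining the two methods" is a multiple regression: regress sales on the conscious prediction alone, then on conscious plus implicit together, and compare the R-squared values. The sketch below uses simulated numbers purely to illustrate the comparison; it is not the Macy's data.

```python
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by an OLS fit on X."""
    X = np.column_stack([np.ones(len(y)), X])     # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 12                                  # products in the line
conscious = rng.normal(size=n)          # conjoint-derived preference
implicit = rng.normal(size=n)           # implicit association score
# Simulated ground truth: sales depend on BOTH signals, plus noise.
sales = 2.0 * conscious + 1.5 * implicit + rng.normal(scale=0.5, size=n)

r2_conscious = r_squared(conscious[:, None], sales)
r2_combined = r_squared(np.column_stack([conscious, implicit]), sales)
```

When sales genuinely depend on both signals, the conscious-only model leaves the implicit-driven variance in the residual, and the combined model recovers it; that is the pattern behind the 69% versus 94% result above.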
So, here’s fashion study number two. Was that just a fluke? It was only a few products.
We did it on a different line of products. Here the r is .93. That’s combined conscious and subconscious data.
Here’s fashion study number three. Combined subconscious and conscious data. The r is .92. And just to show you that it’s not always .9, here’s conscious and subconscious study number four, with an r of .89.
So when we present that, people ask us, “well, fashion is very emotional. Does this work for other categories?”
So we said, “what about oatmeal? How about hot cereal?” Is emotion relevant in that decision? That might evoke some feelings of disgust, actually. So, yes.
But when you involve the brand, it’s probably much more emotional. All of those brands are imbued with emotional value.
So we did a deliberative, choice-based conjoint study. It was a price/pack-size study. And we also measured emotional associations. And in this case, we got sales data. We did the study in quarter four, and we got sales data from Q1, so it’s all forward looking.
Sales data is on the Y, predictions are on the X. This is the conscious model: an r of .64, accounting for 41% of sales. That’s the conjoint.
Here’s the emotional model. An r of .71, accounting for 51% of sales. That’s just the implicit data. So, if you had to choose just one, you’d say “OK, give me the implicit.”
But the point is, you don’t have to choose one. And we shouldn’t choose one. We should combine our best conscious and subconscious methods together.
And when you combine them together, we have an r of .9, predicting 80% of actual market sales for in-market products, before they even occurred.
May I go one more? The crystal ball. The last point I want to make here, comparing different measures, is that if this represents all sales [references slide], and we know that this is conscious preference and this is subconscious implicit, then when we combine them, they’re much more accurate together.
But we wanted to know: are all emotional measures equal? So we compared emotional slider scales to conscious preference, because we were interested in how much they would differentiate from conscious preference. We want something that is not the same. When you use the slider scale, you’re saying, “tell me how you feel about this.”
We did the correlation. The correlation between explicit emotional slider scales and derived preference (that’s not sales, that’s preference) is 89%. What that tells you is that they’re largely measuring the same thing. They’re only 11% unique.
When we did it with fast explicit techniques, there was 85% overlap. Meaning fast explicit techniques (just making a conscious judgement quickly) is only 15% unique from conscious measures.
But, when we compare the true implicit research data to the conscious measures, there’s 34% overlap. What that means is that they are 66% unique. And that’s what you want: something that captures separate, meaningful variance in the behavior of interest.
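The uniqueness arithmetic in the last three paragraphs, as stated in the talk, is simply the complement of each technique's overlap with the conscious measure:

```python
# Overlap with conscious preference, as reported in the talk.
overlap = {"slider_scales": 0.89, "fast_explicit": 0.85, "true_implicit": 0.34}

# Unique variance = whatever isn't shared with the conscious measure.
unique = {name: round(1.0 - o, 2) for name, o in overlap.items()}
# slider scales: 11% unique, fast explicit: 15%, true implicit: 66%
```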
And is it meaningful? We know from the sales data that it is meaningful variance, not error variance. True implicit has a predictive advantage over fast explicit and other explicit emotional techniques.