I recently came across a technique that was described as modeling “neural networks” by measuring the strength of associations between attributes and brands through response latency techniques. It sounds fancy and has a whiff of scientific validity arising from the references to neural networks and response latency, so I took a closer look.
The technique presented images or words one by one on the screen and asked participants to indicate whether the image represented the brand in question. This was repeated for several brands using images and words, and the time it took respondents to make the judgment (i.e., the “response latency”) was taken as a measure of the strength of the association.
Are Timed Judgments Really Implicit?
At face value, this seems like a good measure; and that’s because it is a good measure. It is a good conscious measure of brand attribute associations. Essentially, the researchers have done a good job enhancing a stated attribute association question by adding response times to refine the degree of association. Although there is some noise in the response-time measure they are using, the response times do give additional information about how “easy” it is for participants to make the judgments. However, this measure does not qualify as an implicit research technique.
Why Response Latency Isn’t an Implicit Research Technique
The method succeeds at being indirect; however, it very clearly does not meet the criterion of being uncontrollable. This second criterion is critical for true implicit measures, because these measures are designed to capture System 1 processing. Processing in System 1 is automatic, associative in nature, and occurs without our conscious control.
Making a conscious judgment about whether an attribute is associated with a brand is distinctly System 2 in nature, even if the response time is measured. Participants are asked to access their thoughts and make an explicit judgment on the fit between an attribute and a brand. Therefore, this type of technique cannot get around the ‘can’t say’/‘won’t say’ issue, and thus does not qualify as an implicit measure.
If we are serious, as an applied science, about protecting the accuracy of our conclusions and recommendations, we need to identify more clearly the constructs that new research methods are truly measuring. Enhanced explicit techniques are very valuable as refinements of other explicit measures of attitudes, but they do not rise to the level of research assessing the automatic nature of implicit attitudes.