Beware Amara’s Law: Musings on the 2019 ESOMAR Congress

By Joe Sauer
November 8, 2019

For as long as anyone can remember, ESOMAR has been the bellwether of the research industry, successfully charting and, in many cases, catalyzing fundamental change in the direction of marketing research worldwide. There is perhaps no better illustration of this role than the theme its programme committee selected for the 2019 Global Congress: Transformation: Provoke. Convince. Create. Impact.

The Global Congress is one of the few industry events where research agencies cannot pay-to-play; although there is always a small exhibition floor, the focus each year is on the typically stellar lineup of presentations and papers that have been painstakingly curated for their originality, scientific integrity and empirical impact. The curatorial license exercised by the committee is a telling indication of where the most experienced and knowledgeable research practitioners believe the industry is, or should be, headed. Even a cursory review of this year’s official programme suggests the extent to which this direction is being shaped by technology; a more thorough review reveals that well over half of this year’s presentations focused on technology-centric topics such as Artificial Intelligence & Big Data, Automation, and Social Media.

“Perhaps never before in the industry has the word ‘traditional’ evoked such pity, derision and, most troublingly, flight of investment capital.”

To be fair, this was neither accidental nor unexpected but rather the reflection of a larger trend. According to ESOMAR’s own Global Market Research Report[1], this year – 2019 – will be the first in which global spending on research analytics (including passive data collection, IT, social media, and web analytics) exceeds spending on traditional survey-based research. Perhaps never before in the industry has the word “traditional” evoked such pity, derision and, most troublingly, flight of investment capital. Attracting both prospect and investor attention now seemingly depends on making machine learning or social listening tools the centerpiece of your research portfolio. Rarely does a day pass without a blog post, conference invitation, or industry award show trumpeting one of the myriad ways in which technology is ostensibly revolutionizing the research industry.

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

In the face of technology’s inexorable onslaught, researchers would do well to bear in mind the words of the noted American researcher, scientist and futurist Roy Amara: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”[2] In a desperate chase for differentiation and relevance, researchers are understandably susceptible to the lure of the shiny new thing, rushing to embrace a methodology or strategy that helps address marketing challenges in a novel way. Amara’s Law reminds us that, while technology has significantly changed aspects of the way we conduct market research, the fundamental principles of scientifically valid research have not changed and are, in fact, more important now than ever.

The 2019 Congress provided a useful illustration of the perils of ignoring Amara’s Law. For all of the careful review and curation by the talented and capable programme committee, there were three areas in which it was painfully clear that researchers on the whole need to be more disciplined and rigorous in the application of technology and in the interpretation of the results of technology-driven insights and solutions.

1. Technology has not changed probability theory.

Whether data are collected actively with boring old surveys or passively through sexy new social listening platforms (or one of the dozens of other latent data sources), all research analysis is inherently based on sampling – that is, selecting a particular group to represent an entire population. While technology has helped us reach people that we cannot reach with traditional surveys and provided unprecedented access to large, affordable, and self-updating data sets, it has not changed the consequences and constraints of probability sampling.

Social media sentiment analysis offers a compelling example. The consumers who generate the most social media traffic are those who lie at the extreme ends of the brand, product, or experience satisfaction spectrum – they tend to be either evangelists or haters, either promoters or detractors. Although there is little question that social media data are a valuable source of potentially profound insight, the data are clearly derived from a non-probability sample with considerably more inherent bias than convenience or river samples. As such, insights derived from these data rest on a self-selected, unrepresentative source and are biased accordingly.
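The distortion is easy to demonstrate with a toy simulation (the figures below are invented purely for illustration and do not come from any real study): if the propensity to post rises with the strength of opinion, a "social listening" sample will systematically misstate average sentiment even when the underlying population is mostly moderate.

```python
import random

random.seed(42)

# Hypothetical population: satisfaction scored from -2 (hater) to +2 (evangelist),
# with most people clustered in the moderate middle.
population = random.choices([-2, -1, 0, 1, 2], weights=[5, 15, 45, 25, 10], k=100_000)

# Probability sample: every member of the population is equally likely to be drawn.
prob_sample = random.sample(population, 1_000)

# "Social media" sample: the likelihood of posting rises sharply with the
# strength of opinion, so the extremes are heavily over-represented.
post_propensity = {-2: 0.50, -1: 0.10, 0: 0.02, 1: 0.10, 2: 0.50}
social_sample = [s for s in population if random.random() < post_propensity[s]]

true_mean = sum(population) / len(population)
prob_mean = sum(prob_sample) / len(prob_sample)
social_mean = sum(social_sample) / len(social_sample)

print(f"true mean sentiment:         {true_mean:+.2f}")
print(f"probability-sample estimate: {prob_mean:+.2f}")
print(f"social-listening estimate:   {social_mean:+.2f}")
```

The probability sample lands close to the population mean; the self-selected "social" sample does not, and no amount of volume fixes it – the bias is baked into who chooses to speak.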

KEY TAKEAWAY: Inferences from social media sentiment analysis that are projected to either the general population or to an organization’s customer/user base are qualitative and exploratory rather than generalizable.

Time and again, however, consumer insights teams hold up the “truths” revealed through social media analysis as reliably conclusive and objective, and these findings are being heavily leveraged in brand positioning, communications effectiveness, and new offer development processes. The sampling method simply doesn’t justify these interpretations of the results.

Remesh CEO Andrew Konya framed the issue succinctly in an observation of the impact of survey chatbots – another nascent research technology – from the Congress stage, saying “I think bots are a huge concern for representativeness in online research”. Amidst a crowd of technology idolizers, his was a seemingly lonely voice in the wilderness.

2. Technology does not make bad methods better.

The emergence of automated platform solutions was supposed to break the shackles of the “iron triangle”. Coined in project management circles in the 1950s before spreading to engineering, software development, and, inevitably, marketing research, the iron triangle suggests that it is impossible to produce something new that is simultaneously faster, better, AND cheaper – at best, innovators can only fix two outcomes and must let the third be what it will. However, the acceleration of technology capabilities led one industry CTO to exclaim, “the power of advanced technology (is) melting the triangle away…This is playing out with dramatic results (in) consumer insights, and the results are proving to be literally transformative.”[3]

Although the observation rests on a legitimate (albeit hyperbolic) foundation, the problem is that the relentless pursuit of automation has relegated any and all scrutiny of method validity squarely to the background. The prevailing mindset seems to be that any research method or process that CAN be automated SHOULD be automated.

For example, automating the process of programming, fielding, cleaning, and reporting response scale questions (including a rich library of visual formats for question presentation and data visualization) may increase respondent engagement and shorten project length, but it does nothing to mitigate the well-known but too-often ignored scale interpretation biases[4] that render cross-cultural comparisons invalid. Automated DIY and self-service platforms are one of the darlings of the industry at the moment, dominating both product launch announcements and M&A activity.
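A small sketch makes the cross-cultural hazard concrete (the numbers and the response-style model here are invented for illustration, not drawn from the cited literature): when two markets share identical underlying satisfaction but differ in extreme-response style, their reported scale means diverge, and a naive automated dashboard would declare one market "more satisfied".

```python
import random

random.seed(0)

def respond(latent, ers):
    """Map a latent 1-5 satisfaction score to a reported rating.

    With probability `ers` (extreme-response style), the respondent snaps
    to a scale endpoint regardless of the nuance of their actual attitude.
    """
    if random.random() < ers:
        return 5 if latent >= 3 else 1
    return latent

# Both markets share exactly the same latent satisfaction distribution.
latent_scores = [random.choice([2, 3, 3, 4]) for _ in range(10_000)]

market_a = [respond(s, ers=0.05) for s in latent_scores]  # mild response style
market_b = [respond(s, ers=0.40) for s in latent_scores]  # strong response style

mean_a = sum(market_a) / len(market_a)
mean_b = sum(market_b) / len(market_b)
print(f"market A reported mean: {mean_a:.2f}")
print(f"market B reported mean: {mean_b:.2f}")
```

Automating the fielding and charting of these two means makes the comparison faster and cheaper; it does nothing to flag that the gap is an artifact of response style rather than of satisfaction.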

KEY TAKEAWAY: The industry has missed out on a golden opportunity to increase both the credibility and influence of the consumer insights function as a whole by building critical information about scientific validation, range of applicability, best practices, and use cases into these platforms.

Instead, non-research users are increasingly employing easy-to-use, automated tools without the ability to provide the crucial context around the interpretation of results that an experienced research manager would. Rather than improving the quality of research, in many cases automation has actually helped entrench retrograde methods. Faster and cheaper, definitely. Better? Not by a long shot.

3. Technology does not replace scientific integrity.

Rapid advances in research technology have driven a significant proliferation in the statistical and analytical methodologies being used by researchers today. Fixed-line and mobile broadband, video & image processing algorithms, speech & facial recognition, massive storage capabilities and sheer computing power (to name but a few) have triggered experimentation on an unprecedented scale. As in most industries, this type of experimentation is a sign of healthy evolution and self-perpetuation. What tends to get lost in the exuberance of discovery, however, is a disciplined fidelity to the crucial scientific underpinnings of any methodological advancement.

This is particularly problematic in research into consumer emotion and its influence over decision-making. We know intuitively as well as scientifically that consumer behavior is heavily influenced by emotional responses that often occur below the perception threshold. Historically, observing sub-conscious responses to sensory stimuli required highly specialized experts using expensive and obtrusive techniques such as electroencephalography (EEG) or functional magnetic resonance imaging (fMRI). More recently, however, the development of more scalable and cost-efficient neuro-metric and biometric research techniques has allowed a much wider group of researchers to study the impact of emotion on behavior.

The rapid growth and mainstream acceptance of these “neuro-based” research methods have attracted practitioners with pseudo-scientific or even decidedly unscientific approaches to the measurement of emotion. An all-too-common technique involves the use of emoticons to elicit respondents’ self-reported emotional reactions to, say, a piece of advertising or a new product idea. With no scientific validation, users of these methods claim to be capturing emotional response. The reality is that consumers are unable to fully articulate their sub-conscious responses precisely because the response is SUB-conscious – that is, beneath the level of active consciousness. It would be more correct to say that these techniques capture what respondents THINK (a rational and deliberative System 2 process, subject to filtering through an array of cognitive biases) they FEEL (an automatic and irrepressible System 1 process) – a rather obvious contradiction.

KEY TAKEAWAY: The data generated from pseudo-scientific or unscientific methods, even those approaches hidden in ostensibly sophisticated experimental designs and whiz-bang survey platforms, are simply not isolating and quantifying a respondent’s emotional response to stimulus.

Only techniques that have withstood the brutal examination of scientific peer review & publication AND that have demonstrated empirical validation through the consistent delivery of beneficial business outcomes can claim to be accurately capturing the effect of emotion on consumer decision-making.

It has never really made sense to argue whether technology has a positive or negative influence on the research industry. It is equally senseless to try to analyze technology as a separate and distinct entity that has some exogenous influence over industry norms and practices. Even in an era of survey chatbots, neural networks, and automated facial emotion recognition, research technology itself is completely agnostic – any beneficial or harmful effects lie in its application by research practitioners. As sophisticated as some of these technologies may seem, none of them as they are currently plied in research has true agency properties or the capability for judgement. Until they do, researchers should be mindful not to fall prey to Amara’s Law by overestimating the effect of a technology in the short run. Doing so will ultimately undermine our credibility and constrain our impact in the boardroom. And as for the long run? Well, in the long run, as Keynes so pithily put it, we’re all dead[5].




[3] The Marketing Insider, July 9, 2018.

[4] A short sampling of reference material in this area:

  • Morren, M., Gelissen, J., & Vermunt, J. (2012). Response Strategies and Response Styles in Cross-Cultural Surveys. Cross-Cultural Research, XX(X), 1–25. Sage Publications.
  • van Herk, H., Poortinga, Y.H., & Verhallen, T.M.M. (2004). Response Styles in Rating Scales. Journal of Cross-Cultural Psychology, 35(3), 346–360.
  • Baumgartner, H., & Steenkamp, J.B.E.M. (2001). Response Styles in Marketing Research: A Cross-National Investigation. Journal of Marketing Research, 38(2), 143–156.
  • Van de Vijver, F.J.R., & Leung, K. (1997). Methods and Data Analysis for Cross-Cultural Research. In Berry, J.W., Poortinga, Y.H., & Pandey, J. (Eds.), Handbook of Cross-Cultural Psychology, 2nd Ed., Vol. I: Theory and Method, 257–300. Boston: Allyn and Bacon.

[5] John Maynard Keynes, A Tract on Monetary Reform, Macmillan & Company, 1923.


Joe Sauer


Senior Vice President
