International Journal of Market Research Vol. 55 Issue 4

Let their fingers do the talking? Using the Implicit Association Test in market research

Aiden P. Gregg

University of Southampton

James Klymowsky, Dominic Owens and Alex Perryman
Seven Stones

Received (in revised form): 21 October 2012
© 2013 The Market Research Society DOI: 10.2501/IJMR-2013-013

Self-report methodologies – such as surveys and interviews – elicit responses that are vulnerable to a number of standard biases. These biases include social desirability, self-deception and a lack of self-insight. However, indirect measures, such as the Implicit Association Test (IAT), offer a potential means of bypassing such biases. Here, we evaluate the scope for using the IAT in market research, drawing on recent empirical findings. We conclude that the IAT meets several desirable criteria: it yields consistent results, possesses predictive power, offers unique advantages, is relevant to commercial issues and poses no insuperable challenges to adoption.

Introduction

The most common way to tap the thoughts and feelings of participants in market research is to ask them questions (Bradburn et al. 2004). The ubiquity of self-report techniques – including surveys and interviews – confirms the general consensus: that what people say about themselves is revealing. Yet not always: self-reports can sometimes yield biased or false data. For one thing, the manner in which researchers ask a question has long been known to shape the nature of the answer respondents provide (Schuman & Presser 1996). In addition, three major biases can compromise the validity of self-reports. First, respondents may give socially desirable answers to sensitive questions (Steenkamp et al. 2010; Tourangeau & Ting 2007).


For example, enquiries about ‘green’ or other cause-related topics can elicit answers in keeping with what respondents believe researchers want to hear (Nancarrow et al. 2001). Second, respondents may deceive themselves into believing they hold appropriate attitudes even when they do not actually hold them (Greenwald 1980). For example, a respondent may wish to flatter themselves into believing (Sedikides & Gregg 2008) that they are ‘greener’ than their consumer behaviour would indicate. Finally, respondents may simply lack insight into what their real attitudes are (Wilson 2002). For example, respondents may lack any considered opinion on ‘green’ issues, and instead make up something on the spot (Converse 1970). In all three cases, respondents’ explicit attitudes – those they overtly state or consciously believe – do not fully correspond with their implicit attitudes – their concealed or unconscious counterparts.

Knowing that self-report need not tell the whole story, researchers have developed alternative assessment techniques. Some, such as randomised response procedures (Lensvelt-Mulders et al. 2005), discourage socially desirable answers. Others, such as projective techniques (Mariampolski 2001), encourage spontaneous truthful admissions. Here, we focus on another class of techniques: indirect measures (Wittenbrink & Schwarz 2007). In particular, we address the Implicit Association Test (IAT: Greenwald et al. 1998; Gregg 2008), and its application to market research (Dimofte 2010).

Indirect measures, in general, can offer several benefits to researchers. First, many indirect measures are unobtrusive. This means that respondents are unlikely to suspect they are being assessed. Hence, they are unlikely to manipulate their responses. Second, many indirect measures are robust. This means that, even when respondents are aware that they are being assessed, the technique resists attempts at manipulation (as is the case with the IAT). Finally, all indirect measures, given their special modus operandi, tend to tap into the more primitive background of the mind, as opposed to a more sophisticated foreground (Searle 1992; Greenwald & Banaji 1995; Kahneman 2011). This means that such techniques reflect mental processes that are, among other things, relatively unconscious, fast, habitual, associative, impulsive or implicit, as opposed to relatively conscious, slow, novel, propositional, reflective or explicit. True, the distinction is not absolute. For example, explicit and implicit attitudes on many topics correlate reasonably well (Nosek 2007). Still, there is enough of a distinction between the background and foreground of the mind to ensure that this is not always so, with interesting implications for market research.


The upshot, as we shall see, is this: relative to direct measures such as self-report, indirect measures such as the IAT can be more valid (because they are more unobtrusive or robust) and more informative (because they reflect background processes). This combination of virtues makes them a potentially useful addition to the market researcher’s toolkit. We do not go so far as to claim that the results of the IAT are generally more ‘real’ than the results of self-report. However, we do go so far as to claim that they can, under many circumstances, provide extra insight.

The Implicit Association Test (IAT)

Like many other indirect measures (Fazio 2007; Wittenbrink & Schwarz 2007), the IAT relies on respondents’ reaction time (RT) while providing responses rather than on the content of the responses provided. It is precisely this feature that makes indirect measures indirect. The IAT, run on a computer, takes the form of a rapid-fire sorting task. Respondents are instructed to tap different keys to sort items – either words or pictures – into four corresponding categories, as quickly and accurately as possible. The categories are presented in two different configurations, or IAT blocks. The average response time for each block is measured and compared. Quicker sorting in one block versus the other indicates a particular pattern of associations, and such associations, in turn, reflect implicit attitudes. Thus, any difference in average RT is interpreted as indicating that respondents mentally associated category items in one way rather than another. Any respondent who makes too many errors in either block can be excluded or required to redo the IAT.

Here is a concrete illustration. Suppose a market researcher wished to assess implicit attitudes towards the rival cola brands Coke and Pepsi. In particular, suppose she were interested in two dimensions of those attitudes, one specific – value for money – and the other general – overall favourability. The researcher could capture the value for money dimension using the attribute categories Cheap and Expensive, and the overall favourability dimension using the attribute categories Better and Worse. The different sorting configurations, or IAT blocks, would enable the products Coke and Pepsi to be evaluated on these dimensions relative to one another. If respondents sorted items more quickly when Coke and Cheap and Pepsi and Expensive were paired than when Coke and Expensive and Pepsi and Cheap were paired, this would suggest that they implicitly regarded Coke as offering superior value for money over Pepsi.


Similarly, if respondents sorted items more quickly when Coke and Better and Pepsi and Worse were paired than when Coke and Worse and Pepsi and Better were paired, this would suggest that they implicitly regarded Coke more favourably overall than Pepsi (i.e. ‘more’ better and ‘less’ worse).

This brings us to the question of what it means when explicit attitudes (e.g. as measured by self-report) and implicit attitudes (e.g. as measured by the IAT) converge or diverge. In the case of convergence, a researcher has a ‘two thumbs up’ situation. Two sources of data reinforce each other. The background and foreground of the mind are probably aligned. The biases of social desirability, self-deception and lack of self-insight are likely to be absent. Hence, a researcher can have relatively greater trust in the validity of their attitude assessment. In the case of divergence, the two sources of data contradict one another, suggesting that the self-report may be compromised by one or another response bias. Hence, the validity of the explicit attitude assessment becomes suspect, and further investigation may be warranted.
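To make the scoring logic concrete, here is a minimal sketch – not the authors’ actual software – of how the Coke-versus-Pepsi value-for-money IAT described above might be scored from block latencies, and of how the implicit result might then be checked against an explicit rating for convergence or divergence. All data, variable names and cut-offs are hypothetical.

```python
import statistics

# Hypothetical latencies (ms) for correct trials in each sorting configuration.
coke_cheap_block = [612, 583, 640, 570, 598, 625, 601, 577]       # Coke+Cheap / Pepsi+Expensive
coke_expensive_block = [718, 702, 689, 734, 695, 710, 688, 721]   # Coke+Expensive / Pepsi+Cheap

# A positive effect means faster sorting when Coke is paired with Cheap,
# read here as an implicit 'Coke offers better value than Pepsi' association.
iat_effect_ms = statistics.mean(coke_expensive_block) - statistics.mean(coke_cheap_block)

# Hypothetical explicit rating: +3 = Coke much better value, -3 = Pepsi much better value.
explicit_rating = -1

implicit_favours_coke = iat_effect_ms > 0
explicit_favours_coke = explicit_rating > 0

if implicit_favours_coke == explicit_favours_coke:
    verdict = "convergent ('two thumbs up'): the self-report looks trustworthy"
else:
    verdict = "divergent: the self-report may be biased and warrants further investigation"

print(f"IAT effect = {iat_effect_ms:.0f} ms; explicit rating = {explicit_rating}; {verdict}")
```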

What value does the IAT have to offer market researchers?

The value of any research tool – including the IAT in market research – depends upon several factors. Chief among them are yielding consistent results, having predictive power, offering unique advantages, showing applied promise and not presenting any insuperable challenges. Below we examine how the IAT fares in these critical regards.

Consistent results

First, IAT results show good levels of reliability (internal consistency). Scores from parallel halves of the same IAT correspond well (average α = 0.79; Hofmann et al. 2005). Second, IAT results also show satisfactory levels of reproducibility. Scores obtained on different occasions correspond moderately well (average r = 0.56; Nosek et al. 2007). True, analogous values for direct measures are often even higher (Anastasi 1996). However, as an indirect measure relying on RTs, which are intrinsically variable, the IAT performs more than adequately, and outperforms other indirect measures (De Houwer & De Bruycker 2007).
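By way of illustration of what the internal-consistency figure above refers to, the sketch below estimates split-half reliability for a set of hypothetical respondent-level IAT scores, using the standard Spearman–Brown correction; it is not the specific procedure of Hofmann et al. (2005), and the data are invented.

```python
import numpy as np

# Hypothetical per-respondent IAT effects (ms) computed separately from two
# parallel halves of the same IAT (e.g. odd-numbered vs even-numbered trials).
half_one = np.array([120, 45, 80, 150, 30, 95, 60, 110, 75, 20])
half_two = np.array([105, 60, 70, 140, 25, 100, 55, 120, 90, 35])

# Correlate the halves across respondents, then apply the Spearman-Brown
# correction to estimate the reliability of the full-length measure.
r_half = np.corrcoef(half_one, half_two)[0, 1]
split_half_reliability = 2 * r_half / (1 + r_half)

print(f"half-test correlation = {r_half:.2f}; "
      f"Spearman-Brown reliability = {split_half_reliability:.2f}")
```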

Algorithmic robustness

One might suspect that, because IAT effects involve RT, individual differences in the swiftness or sluggishness of respondents might diminish or augment the magnitude of the IAT effects they exhibit.


Indeed, some evidence supports this suspicion. For example, IATs that explore unrelated topics yield effects that are nonetheless correlated (McFarland & Crouch 2002). This could only be the case if such IATs were capturing respondent-specific variance. Note, however, that the IAT index is not a simple RT, but a within-respondent difference in RT (i.e. average RT in one block minus average RT in the other). As such, it already partly controls for individual differences. However, it is possible to further control for them by additionally computing an estimate of variability in RT across the IAT as a whole, and then dividing the IAT index by that estimate. This is the basis of the so-called D-algorithm (Greenwald et al. 2003). Use of this algorithm has been empirically shown to minimise the impact of individual differences (Cai et al. 2004). Thus, IAT effects can be computed using a robust algorithm that controls for simple confounds.

Nonetheless, it is worth noting that, however it is computed, no IAT score is by itself self-explanatory. Each stands in need of contextual interpretation. This is because the IAT – unlike, say, an IQ test or a pregnancy test – does not come in one definitive version that has previously been administered to a large relevant population. Hence, the IAT is not formally a ‘test’ (Fiedler et al. 2006). For example, if someone scored 140 on a standardised IQ test, that would put them in the top 1% of test-takers; or if a newlywed bride tested positive on a proven pregnancy test, a new baby would be on the cards. In contrast, if respondents showed an average score of X ms on a given IAT, then the diagnostic implications would be less clear. Variations across different IATs ensure that no score carries one universal meaning (Blanton & Jaccard 2006). That said, plausible interpretations are still possible. For example, a raw IAT score near 500 ms would nearly always indicate a large and important effect, and a raw IAT score near 0 ms a trivial or null effect. In addition, one could use scores from control IATs that show large implicit preferences (e.g. Coke vs. Water) or small implicit preferences (e.g. Coke vs. Pepsi) to contextualise scores from IATs investigating unknown implicit preferences (e.g. Coke vs. New Cola) (cf. Gregg & Lepore 2012).
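The arithmetic just described can be sketched as follows. This is a simplified, illustrative reading of the D-algorithm – a block-mean difference divided by the variability of the respondent’s own latencies – and it omits the error-penalty and trial-exclusion rules of the published scoring procedure (Greenwald et al. 2003); the latencies are hypothetical.

```python
import statistics

def d_score(block_a_rts, block_b_rts):
    """Simplified D-score: difference between block means, divided by the
    standard deviation of all of the respondent's latencies across both blocks."""
    mean_difference = statistics.mean(block_b_rts) - statistics.mean(block_a_rts)
    pooled_sd = statistics.stdev(block_a_rts + block_b_rts)
    return mean_difference / pooled_sd

# Two hypothetical respondents showing the same raw 100 ms block difference:
# one generally slow and variable, one generally fast and consistent.
slow_a, slow_b = [900, 950, 1000, 1050], [1000, 1050, 1100, 1150]
fast_a, fast_b = [450, 475, 500, 525], [550, 575, 600, 625]

# Scaling by each respondent's own variability yields different D values,
# which is how the algorithm discounts overall swiftness or sluggishness.
print(f"slow respondent: D = {d_score(slow_a, slow_b):.2f}")
print(f"fast respondent: D = {d_score(fast_a, fast_b):.2f}")
```

Even so, as noted above, any such score would still need contextual benchmarks – for example, control IATs with known large or small effects – before it could be interpreted.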

Predictive power

If the IAT can predict outcomes of consequence at above-chance levels, then its validity is fundamentally vindicated (Greenwald et al. 2006). An interim meta-analysis of 122 relevant IAT studies shows that this is so on the whole (Greenwald et al. 2009).


Moreover, in 32 of these studies exploring racial prejudice – a topic where one might anticipate response biases – the validity of the IAT significantly exceeded that of self-report. It is not implausible to suppose that, here, white respondents, completing sensitive self-report measures of anti-black sentiment, might have provided answers somewhat tainted by social desirability and self-deception, thereby explaining the predictive underperformance of those measures.

The claim that the IAT resists the effects of social desirability is already supported by research showing that its results are relatively hard to fake (Steffens 2004). Furthermore, spontaneous attempts to produce false results on the part of naive respondents typically fail (Kim 2003). If respondents are told how the IAT works, and given the opportunity to practise, then it is possible to engineer false results (Fiedler & Bluemke 2005), but such determined fakery can usually be detected from tell-tale patterns in respondents’ RT data (Cvencek et al. 2010).

If the IAT is robust to social desirability concerns, it has a promising role to play as an assessment tool where sensitive commercial topics – such as personal hygiene, weight loss, contraceptive use and alcohol consumption – are being investigated. Some evidence already illustrates this value. For example, Richetin and colleagues (2007) used the IAT to assess participants’ implicit preferences for fruit – a ‘healthy’ food people should eat – or candy – an ‘unhealthy’ food they should avoid. Their results showed that the IAT predicted food choice over and above explicit measures.

Unique advantages

Even though the IAT is robust against social desirability – and hence can be more valid than self-report – its unique advantages are perhaps best traced to its capacity to tap into the background of the mind, and thereby furnish information that would not otherwise be available. Below, we provide several examples where this capacity has been empirically demonstrated.

Detecting unconscious attitudes

If the IAT can detect attitudes of which people are themselves unconscious, then it should also have the capacity to predict behaviours stemming from those attitudes that respondents themselves cannot predict. And it does. For example, IAT scores predicted which of two candidates – the right-wing Silvio Berlusconi or the left-wing Francesco Rutelli – initially undecided voters later voted for in a Milanese election (Arcuri et al. 2008).


Moreover, this finding was not restricted to voting: it also applied to Italian respondents undecided about the opening of a US military base in Vicenza (Galdi et al. 2008). Hence, there is reason to believe that, where consumers are explicitly undecided about which of two rival products to purchase, the IAT might predict what decision they will ultimately make, by detecting underlying propensities that those consumers cannot mentally access.

Uncovering habitual preferences

Whereas deliberate behaviour – rooted in careful reflection – involves foreground mental processes, habitual behaviour – requiring little thought – involves background mental processes. The IAT should therefore ‘specialise’ in predicting habitual behaviour. Conner and colleagues (2007) identified one way in which it does so. They assessed the degree to which participants were in the habit of eating fruits or snacks. They then found that, the stronger the habit, the better the IAT predicted the consumption of either one or the other, both in everyday life and under controlled laboratory conditions. Thus, the IAT may be of interest to market researchers interested in diagnosing habits, including habits respondents might be embarrassed to admit to overtly.

Unmasking hidden impulses

Impulsive behaviour involves background mental processes, too. One way to make such background processes predominate is to make laboratory participants mentally busy. This can be done by having them complete a taxing task. The ‘cognitive load’ imposed by such a task depletes mental resources. This selectively impairs the foreground processes, as more primitive background processes need fewer mental resources to operate. Under cognitive load, consumer decisions indeed become more impulsive (Vohs & Faber 2007). In this connection, Hofmann et al. (2007) showed that, after participants had performed a task that mentally ‘wore them out’ (see Baumeister et al. 2008) – thereby making them more impulsive – the IAT became a relatively better predictor of candy consumption. Note that, when reporting explicit attitudes, respondents are not usually under cognitive load. Hence, at that time, they may lack introspective insight into their impulsive behaviour, and fail to predict it well, as some independent research confirms (Ariely & Loewenstein 2006).


Applied promise in market research

The above studies bear out the promise of the IAT generally. We now focus on studies where the IAT has been directly employed to address issues arising in marketing or consumer research.

Differentiating competing brands

Using consecutive IATs, Gattol et al. (2011) assessed respondents’ implicit perceptions of one car brand versus two others. They examined six different attributes, each represented in three different ways, and some intriguing findings emerged: for example, Ford was perceived as more ‘aggressive’ than Audi, which in turn was perceived as less ‘aggressive’ than BMW. These findings generalised across different stimuli (e.g. words or pictures), and individual IATs proved to be internally consistent. In addition, Priluck and Till (2010) examined the capacity of the IAT and self-report to distinguish between (a) a high-equity and a low-equity brand, and (b) two high-equity brands. Whereas self-report did the former, only the IAT did the latter. Respondents in market research may struggle to articulate distinctions between brands that differ only subtly; the IAT shows promise as a robust technique that can draw such distinctions.

Intangible brand values

The IAT has been employed to uncover intangible brand values that respondents might not be willing or able to express. For example, Friese et al. (2006) found that, although participants were roughly evenly split in their explicit preferences for branded versus generic products, the great majority of them implicitly preferred branded products. In the same vein, Perkins et al. (2008) found that, although Polish respondents reported preferring foreign consumer products (e.g. Marlboro cigarettes), they nonetheless showed an implicit preference for domestic products (e.g. Sobieski cigarettes). Again, IAT findings suggest that the picture provided by conventional self-report instruments may be incomplete.

Segmenting consumer groups

Brunel et al. (2004) showed that IAT scores could be used to segment consumer groups. They found, for example, that Apple Mac users showed stronger implicit preferences for their machines than Microsoft PC users did for theirs.


This supports the anecdotal contention that Apple Mac users identify more strongly with their brand. Thus, the IAT may serve as a promising index of brand loyalty.

These researchers also found that white and black respondents showed different implicit and explicit reactions to advertisements for sports footwear. On the one hand, blacks but not whites reported an explicit preference for advertisements with black spokespersons. On the other hand, whites but not blacks showed an implicit preference for advertisements with white spokespersons. One interpretation of this result is that white respondents were reluctant to report a racial preference that they possessed at a background level, whereas black respondents reported a racial preference they did not really have at a background level. Had an indirect measure such as an IAT not been employed, such subtle possibilities – which can critically inform market segmentation on the basis of demographics – would have been less apparent.
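As a hedged illustration of how implicit and explicit measures might jointly feed into segmentation of this kind, the sketch below crosses hypothetical IAT D-scores with hypothetical explicit ratings to form four respondent segments. The cut-offs, labels and data are illustrative assumptions, not taken from Brunel et al. (2004).

```python
import pandas as pd

# Hypothetical respondents: explicit preference for the focal brand
# (-3 = strongly prefer rival, +3 = strongly prefer focal brand) and
# implicit preference (IAT D-score, positive = focal brand).
respondents = pd.DataFrame({
    "respondent": ["r1", "r2", "r3", "r4", "r5", "r6"],
    "explicit":   [ 2.5,  1.0, -1.5,  0.5, -2.0,  2.0],
    "implicit_d": [ 0.8, -0.4,  0.6, -0.7, -0.9,  0.5],
})

def segment(row):
    # Simple quadrant segmentation on the sign of each measure.
    if row.explicit >= 0 and row.implicit_d >= 0:
        return "committed (explicit and implicit preference)"
    if row.explicit >= 0:
        return "professed only (explicit but not implicit)"
    if row.implicit_d >= 0:
        return "latent (implicit but not explicit)"
    return "unattached (neither)"

respondents["segment"] = respondents.apply(segment, axis=1)
print(respondents[["respondent", "segment"]])
```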

Tapping into experiential attitudes

Research shows that attitudes rooted in concrete experience tend to be stronger and more predictive of behaviour (Fazio et al. 1982). Larger IAT effects seem to reflect the presence of such attitudes. For example, in one study, Maison et al. (2004) separated participants who could distinguish Coke and Pepsi in a blind taste test from those who could not – a sign of differences in experiential familiarity. They found that, whereas the ‘distinguishers’ had stronger implicit preferences than ‘non-distinguishers’, their explicit preferences hardly differed. Hence, the IAT may pick up potentially predictive information about consumer attitudes that self-reports do not.

Tapping into ambivalent attitudes

We mentioned earlier that, if explicit and implicit attitudes converge, one has a reassuring ‘two thumbs up’ situation. In contrast, if they diverge, complications are likely. In particular, divergence between explicit and implicit attitudes may reflect psychological ambivalence, which in turn may reduce the predictability of consumer choices from explicit attitudes alone. For example, Friese et al. (2006) found that, when participants’ explicit and implicit preferences matched, their explicit preferences reliably predicted whether they chose a branded or generic product, regardless of whether they chose slowly (i.e. reflectively) or quickly (i.e. impulsively).


However, when participants’ explicit and implicit preferences clashed, their explicit preferences predicted whether they chose a branded or generic product only if they chose slowly; if they chose quickly, then explicit preferences were no longer predictive. The implication is that, when one has a ‘one thumb up, one thumb down’ situation, explicit preferences may no longer predict consumer choices under conditions where background processes prevail – such as when people make impulsive decisions. If this finding generalises, then explicit attitudes towards products bought on impulse – such as candy at a checkout, or items whose availability is limited – will be a poorer guide to purchasing behaviour when people’s implicit attitudes towards those products point in the opposite direction.
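One way to probe a pattern of the kind Friese et al. (2006) report would be a moderation analysis along the following lines. This is an illustrative sketch using simulated data, not a reconstruction of their actual study or analysis; all variable names and parameters are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400

# Simulated respondents: explicit preference for the branded product, whether
# the implicit (IAT) preference agrees with it ('congruent'), and whether the
# choice was made quickly, i.e. impulsively ('fast').
explicit = rng.normal(0.0, 1.0, n)
congruent = rng.integers(0, 2, n)
fast = rng.integers(0, 2, n)

# Assumption built into the simulation: explicit preference drives choice,
# except when it clashes with the implicit preference AND the choice is fast.
slope = np.where((congruent == 0) & (fast == 1), 0.0, 1.5)
p_branded = 1.0 / (1.0 + np.exp(-slope * explicit))
chose_branded = rng.binomial(1, p_branded)

data = pd.DataFrame({"chose_branded": chose_branded, "explicit": explicit,
                     "congruent": congruent, "fast": fast})

# The three-way interaction tests whether the explicit-attitude/choice link
# weakens for fast choices among respondents with conflicting implicit attitudes.
model = smf.logit("chose_branded ~ explicit * congruent * fast", data=data).fit(disp=False)
print(model.summary())
```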

Large effects

Finally, the IAT delivers a lot of ‘bang for one’s buck’. Reliable effects emerge after only a handful of trials, which means that, unlike with some other indirect measures, the performance of individual respondents can be meaningfully differentiated (De Houwer & De Bruycker 2007). Moreover, the IAT is ‘well tolerated’: respondents typically find it easy to complete and do so eagerly. For example, hundreds of thousands have completed IATs online, out of sheer curiosity (Nosek et al. 2002; for user-friendly demonstrations, see also http://www.implicitresearch.co.uk/test/).

Challenges to adoption

Clearly, the IAT possesses some unique advantages, and shows promise in market research specifically. However, like any research instrument, it comes with a number of challenges of its own, which must be overcome if it is to be widely adopted.

The IAT’s first challenge is logistical. Indirect measures like the IAT, because they collect RT information over many trials, take more time to run. They also require some technical apparatus to run on. Hence, market researchers must be selective about which implicit attitudes they choose to study. This said, many procedures that add diagnostic value (e.g. eye-tracking) are also used sparingly, and this is hardly an argument against using them. Moreover, research studies show the feasibility of running several IATs back-to-back (Gattol et al. 2011).


A second challenge is structural. The IAT, in its original form, lends itself most naturally to research where the relative merits of two key targets (or a small number of key targets, if several IATs are used) are being assessed. This is because each IAT has an intrinsically this-versus-that format. Before a researcher can examine associations towards this – some target of interest – they must first identify some that – an alternative to contrast it against. Although this structural constraint can usually be satisfied, it is undeniably a drawback. A researcher must artfully devise contrasting category names (and items belonging to those categories) to capture a dimension of interest (e.g. Better and Worse for favourability). Constructing high-quality IATs demands a degree of semantic intuition.

However, IAT researchers have long been aware of this constraint, and have sought to transcend it by developing novel IAT variants. Perhaps the leading variant is the Single-Category IAT (SC-IAT; Karpinski & Steinman 2006; Bluemke & Friese 2008). This functions exactly like a standard IAT, but features three categories instead of four. The SC-IAT successfully predicts past and future purchasing behaviour (Steinman & Karpinski 2008; see also Hofmann et al. 2007; Galdi et al. 2008). Another variant is the Simple IAT (Blanton et al. 2006, Study 2). The idea here is to replace two of the four IAT categories with unrelated neutral categories, so that the association between the remaining two target categories can be selectively assessed. A third variant is the Go/No-Go Association Task (GNAT: Nosek & Banaji 2001). This features only two categories per block. Respondents’ goal is to classify items accurately within a tight time limit, rather than quickly without making errors. Like the original IAT, the GNAT possesses incremental predictive validity (Eastwick et al. 2011). These and other IAT variants (e.g. Sriram & Greenwald 2009) all provide ways to get around the structural limitation of the original IAT. Even more desirable would be an IAT variant that permits the associations between any number of targets and attributes to be flexibly assessed. With this in mind, two of the present authors have developed a truly ‘Multiple’ Implicit Association Test (MIAT; Klymowsky & Gregg 2012), in which respondents classify two items per trial into multiple possible categories.

A third challenge is conceptual. The IAT is not a pure measure of association. It has confounds of its own. Studies show, for example, that IAT effects can be partly driven by differences in how much categories on one side of the screen stand out relative to categories on the other side of the screen – a confound technically known as salience asymmetry (Rothermund & Wentura 2004; see also Han et al. 2010, for another confound).


Although researchers differ over how much of a problem this presents (Greenwald et al. 2005), it can be reduced by choosing categories appropriately (e.g. using the attribute categories Safe vs. Risky rather than Safe vs. Unsafe, given that the ‘Un-’ prefix makes a category stand out), or by also including an extra control IAT in the battery (e.g. featuring the attribute categories Word vs. Non-Word, which would reflect salience asymmetry only). Furthermore, the fact that the IAT proves meaningfully predictive (Greenwald et al. 2009) indicates that it is not fatally undermined by any confound.

Finally, Gregg and Lepore (2012) recently showed that IATs reflect more than just salience asymmetries. Half their participants completed two target IATs designed to capture meaningful associations, one towards health and disease (categories: Health, Disease, Good, Bad), the other towards even numbers and odd numbers (categories: Even, Odd, Good, Bad). The remaining participants completed two control IATs, designed to capture salience asymmetries only (categories: Health, Disease, Word, Non-Word; and Even, Odd, Word, Non-Word). Whereas on the target IATs the Health-Disease preference was stronger than the Even-Odd preference, on the control IATs the opposite pattern emerged. This proves that implicit preferences cannot be reduced to salience asymmetry artefacts, as IATs designed to capture each produce effects that go in opposite directions.

Our experience of the IAT

The authors have successfully employed the IAT in commercial research at Seven Stones.1 Some of their studies have taken the form of wide-scale surveys, where the IAT has been delivered remotely online. Others have taken the form of qualitative investigations, where the IAT has been delivered face to face. In the latter case, the IAT results can be communicated to respondents immediately, and form the basis of further discussion. In this way, the IAT facilitates the collection of further qualitative data that would otherwise be missed. Of course, when administered as part of a survey, the IAT results can be analysed at an aggregate level, without providing individual feedback.

So far, the authors have used the IAT primarily to explore brand perceptions, and other marketing concepts, among physicians and nurses who administer prescription drugs. The goal was to check for attitudes they might be either reluctant to express or unable to mentally access. The impetus for undertaking such research was the growing recognition that non-rational factors – driven by the background processes we referred to above – can influence administration decisions (Kelly & Rupert 2009; Rod & Saunders 2009).

1 The first author served as an external consultant for some of this research.


In one case, we used the IAT to explore physicians’ attitudes to different drugs, in an effort to explain why their typical prescribing practice was not in accord with the guidelines of the UK’s National Institute for Health and Clinical Excellence (NICE). In the therapy area in question, the guidelines stated that physicians should prescribe a drug from one initial class, I, and then, if no improvements were seen, prescribe drugs from a subsequent class, S. However, if a drug from class I failed, physicians typically switched to another drug from that class, rather than any drug from the ‘next’ class, S.

When asked about this, physicians reported an explicit preference for all drugs in class I over class S, on each of three dimensions investigated – namely, efficacy, manageability and trustworthiness. However, the IAT uncovered one implicit preference in the reverse direction: class S drugs were construed as implicitly more efficacious than class I drugs. Further analysis and qualitative discussion led to the conclusion that physicians were following a non-rational prescribing heuristic: they were unwilling to prescribe class S drugs too early because, considering them more efficacious, they wished to hold them in reserve. Their previous self-reports of efficacy may have been contaminated by social desirability concerns or by defensive rationalisation.

Conclusion

Self-report methodologies, though useful, are vulnerable to several biases. These biases include social desirability, self-deception and a lack of self-insight. However, indirect measures, such as the Implicit Association Test (IAT), offer a potential means of bypassing such biases. Here, we have evaluated the scope for using the IAT in market research, drawing on recent empirical findings. We conclude that the IAT meets several desirable criteria: it yields consistent results, possesses predictive power, offers unique advantages, shows applied promise in market research, and poses no challenges to adoption that cannot be overcome.

References

Anastasi, A. (1996) Psychological Testing (7th edn). New York: Macmillan. Arcuri, L., Castelli, L., Galdi, S., Zogmaister, C. & Amadori, A. (2008) Predicting the vote: implicit attitudes as predictors of the future behavior of the decided and undecided voters. Political Psychology, 29, pp. 369–387. Ariely, D. & Loewenstein, G. (2006) The heat of the moment: the effect of sexual arousal on sexual decision making. Journal of Behavioral Decision Making, 19, pp. 87–98. Baumeister, R.F., Sparks, E.A., Stillman, T.F. & Vohs, K.D. (2008) Free will in consumer behavior: self-control, ego depletion, and choice. Journal of Consumer Psychology, 18, pp. 4–13.


Blanton, H. & Jaccard, J. (2006) Arbitrary metrics in psychology. American Psychologist, 61, pp. 27–41. Blanton, H., Jaccard, J., Gonzales, P. & Christie, C. (2006) Decoding the Implicit Association Test: implications for criterion prediction. Journal of Experimental Social Psychology, 4, pp. 192–212. Bluemke, M. & Friese, M. (2008) Reliability and validity of the Single-Target IAT (ST-IAT): assessing automatic affect towards multiple attitude objects. European Journal of Social Psychology, 38, 977–997. Bradburn, N.M., Sudman, S. & Wansink, B. (2004) Asking Questions: The Definitive Guide to Questionnaire Design – for Market Research, Political Polls, and Social and Health Questionnaires (rev. edn). San Francisco, CA: Jossey-Bass. Brunel, F.F., Tietje, B.C. & Greenwald, A.G. (2004) Is the Implicit Association Test a valid and valuable measure of implicit consumer social cognition? Journal of Consumer Psychology, 14, pp. 385–404. Cai, H., Sriram, N., Greenwald, A.G. & McFarland, S.G. (2004) The Implicit Association Test’s D measure can minimize a cognitive skill confound: comment on McFarland and Crouch (2002). Social Cognition, 22, pp. 673–684. Conner, M., Perugini, M., O’Gorman, R., Ayres, K. & Prestwich, A. (2007) Relations between implicit and explicit measures of attitudes and measures of behavior: evidence of moderation by individual difference variables. Personality and Social Psychology Bulletin, 33, pp. 1727–1740. Converse, P.E. (1970) Attitudes and nonattitudes: continuation of a dialogue. In: E.R. Tufte (ed.) The Quantitative Analysis of Social Problems. Reading: Addison-Wesley. Cvencek, D., Greenwald, A.G., Brown, A.S., Gray, N.S. & Snowden, R.J. (2010) Faking of the Implicit Association Test is statistically detectable and partly correctable. Basic and Applied Social Psychology, 32, pp. 302–314. De Houwer, J. & De Bruycker, E. (2007) The Implicit Association Test outperforms the Extrinsic Affective Simon Task as an implicit measure of inter-individual differences in attitudes. British Journal of Social Psychology, 46, pp. 401–421. Dimofte, C.V. (2010) Implicit measures of consumer cognition: a review. Psychology and Marketing, 27, pp. 921–937. Eastwick, P.W., Eagly, A.H., Finkel, E.J. & Johnson, S.E. (2011) Implicit and explicit preferences for physical attractiveness in a romantic partner: a double dissociation in predictive validity. Journal of Personality and Social Psychology, 101, pp. 993–1011. Fazio, R.H. (2007) Attitudes as object-evaluation associations of varying strength. Social Cognition, 25, pp. 603–637. Fazio, R.H., Chen, J., McDonel, E.C. & Sherman, S.J. (1982) Attitude accessibility, attitudebehavior consistency and the strength of the object-evaluation association. Journal of Experimental Social Psychology, 18, pp. 339–357. Fiedler, K. & Bluemke, M. (2005) Faking the IAT: aided and unaided response control on the implicit association tests. Basic and Applied Social Psychology, 27, pp. 307–316. Fiedler, K., Messner, C. & Bluemke, M. (2006) Unresolved problems with the ‘I’, the ‘A’, and the ‘T’: a logical and psychometric critique of the Implicit Association Test (IAT). European Review of Social Psychology, 17, pp. 74–147. Friese, M., Wanke, M. & Plessner, H. (2006) Implicit consumer preferences and their influence on product choice. Psychology and Marketing, 23, pp. 727–740. Galdi, S., Arcuri, L. & Gawronski, B. (2008) Automatic mental associations predict future choices of undecided decision-makers. Science, 321, pp. 1100–1102.


Gattol, V., Sääksjärvi, M.C. & Carbon, C.C. (2011) Extending the Implicit Association Test (IAT): assessing consumer attitudes based on multi-dimensional implicit associations. Plos ONE, 6, e15849. Greenwald, A.G. (1980) The totalitarian ego: fabrication and revision of personal history. American Psychologist, 35, pp. 603–618. Greenwald, A.G. & Banaji, M.R. (1995) Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychological Review, 102, pp. 4–27. Greenwald, A.G., Nosek, B.A. & Banaji, M.R. (2003) Understanding and using the Implicit Association Test: I. An improved scoring algorithm. Journal of Personality and Social Psychology, 85, pp. 197–216. Greenwald, A.G., Nosek, B.A. & Sriram, N. (2006) Consequential validity of the Implicit Association Test: comment on the article by Blanton and Jaccard. American Psychologist, 61, pp. 56–61. Greenwald, A.G., Nosek, B.A., Banaji, M.R. & Klauer, K.C. (2005) Validity of the salience asymmetry interpretation of the IAT: Comment on Rothermund and Wentura (2004). Journal of Experimental Psychology: General, 134, pp. 420–425. Greenwald, A.G., Poehlman, T.A., Uhlmann, E. & Banaji, M.R. (2009) Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97, pp. 17–41. Gregg, A.P. (2008) The Implicit Association Test: oracle of the unconscious or deceiver of the unwitting? The Psychologist, 21, pp. 762–767. Gregg, A.P. & Lepore, L. (2012) ‘It don’t matter if you’re black or white’? Putting the race IAT into context. Unpublished manuscript, University of Southampton. Han, H.A., Czellar, S., Olson, M.A. & Fazio, R.H. (2010) Malleability of attitudes or malleability of the IAT? Journal of Experimental Social Psychology, 46, pp. 286–298. Hofmann, W., Rauch, W. & Gawronski, B. (2007) And deplete us not into temptation: automatic attitudes, dietary restraint, and self-regulatory resources as determinants of eating behavior. Journal of Experimental Social Psychology, 43, pp. 497–504. Hofmann, W., Gawronski, B., Gschwendner, T., Le, H. & Schmitt, M. (2005) A meta-analysis on the correlation between the Implicit Association Test and explicit self-report measures. Personality and Social Psychology Bulletin, 31, pp. 1369–1385. Kahneman, D. (2011) Thinking, Fast and Slow. New York: Farrar, Strauss, Giroux. Karpinski, A. & Steinman, R.B. (2006) The Single Category Implicit Association Test as a measure of implicit social cognition. Journal of Personality and Social Psychology, 91, pp. 16–32. Kelly, D. & Rupert, E. (2009) Professional emotions and persuasion: tapping non-rational drivers in health-care market research. Journal of Medical Marketing: Device, Diagnostic and Pharmaceutical Marketing, 9, pp. 3–9. Kim, D. (2003) Voluntary controllability of the Implicit Association Test (IAT). Social Psychology Quarterly, 66, pp. 83–96. Klymowsky, J. & Gregg, A.P. (2012) The Flexible Implicit Association Test. White paper. London: Seven Stones. Lensvelt-Mulders, G.J.L.M., Hox, J., van der Heijden, P.G.M. & Maas, C.J.M. (2005) Metaanalysis of randomized response research. Sociological Methods and Research, 33, pp. 319–348. Maison, D., Greenwald, A.G. & Bruin, R.H. (2004) Predictive validity of the Implicit Association Test in studies of brands, consumer attitudes, and behavior. Journal of Consumer Psychology, 14, pp. 405–415. Mariampolski, H. (2001) Qualitative Market Research: A Comprehensive Guide. Thousand Oaks, CA: Sage. McFarland, S.G. & Crouch, Z. 
(2002) A cognitive skill confound on the Implicit Association Test. Social Cognition, 20, pp. 483–510.


Nancarrow, C., Brace, I. & Wright, L.T. (2001) ‘Tell me lies, tell me sweet little lies’: dealing with socially desirable responses in market research. Marketing Review, 2, pp. 55–60. Nosek, B.A. (2007) Implicit–explicit relations. Current Directions in Psychological Science, 16, pp. 65–69. Nosek, B.A. & Banaji, M.R. (2001) The go/no-go association task. Social Cognition, 19, pp. 625–666. Nosek, B.A., Banaji, M.R. & Greenwald, A.G. (2002) Harvesting implicit group attitudes and beliefs from a demonstration website. Group Dynamics, 6, pp. 101–115. Nosek, B.A., Greenwald, A.G. & Banaji, M.R. (2007) The Implicit Association Test at age 7: a methodological and conceptual review. In: J.A. Bargh (ed.) Automatic Processes in Social Thinking and Behavior. New York: Psychology Press, pp. 265–292. Perkins, A., Forehand, M., Greenwald, A.G. & Maison, D. (2008) The influence of implicit social cognition on consumer behavior: measuring the non-conscious. In: C. Haugtvedt, P. Herr & F. Kardes (eds) Handbook of Consumer Psychology. Hillsdale, NJ: Lawrence Erlbaum Associates, pp. 461–475. Priluck, R. & Till, B.D. (2010) Comparing a customer-based brand equity scale with the Implicit Association Test in examining consumer responses to brands. Journal of Product Brand Management, 17, pp. 413–428. Richetin, J., Perugini, M., Prestwich, A. & O’Gorman, R. (2007) The IAT as a predictor of spontaneous food choice: the case of fruits versus snacks. International Journal of Psychology, 42, pp. 166–173. Rod, M. & Saunders, S. (2009) The informative and persuasive components of pharmaceutical promotion: an argument for why the two can coexist. International Journal of Advertising, 28, pp. 313–349. Rothermund, K. & Wentura, D. (2004) Underlying processes in the Implicit Association Test (IAT): dissociating salience from associations. Journal of Experimental Psychology: General, 133, pp. 139–165. Schuman, H. & Presser, S. (1996) Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. New York: Academic Press. Searle, J. (1992) The Rediscovery of the Mind. Cambridge, MA: MIT Press. Sedikides, C. & Gregg, A.P. (2008) Self-enhancement: food for thought. Perspectives on Psychological Science, 3, pp. 102–116. Sriram, N. & Greenwald, A.G. (2009) The Brief Implicit Association Test. Experimental Psychology, 56, pp. 283–294. Steenkamp, J.E.M., de Jong, M.G. & Baumgartner, H. (2010) Socially desirable response tendencies in survey research. Journal of Marketing Research, 47, pp. 199–214. Steffens, M.C. (2004) Is the Implicit Association Test immune to faking? Experimental Psychology, 51, pp. 165–179. Steinman, R.B. & Karpinski, A. (2008) The single category Implicit Association Test (SC-IAT) as a measure of implicit consumer attitudes. European Journal of Social Sciences, 7, pp. 32–42. Tourangeau, R. & Ting, Y. (2007) Sensitive questions in surveys. Psychological Bulletin, 133, pp. 859–883. Vohs, K.D. & Faber, R.J. (2007) Spent resources: self-regulatory resource availability affects impulse buying. Journal of Consumer Research, 33, pp. 537–547. Wilson, T.D. (2002) Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge, MA: Harvard University Press. Wittenbrink, B. & Schwarz N. (eds) (2007) Implicit Measures of Attitudes. New York: Guilford Press.


About the authors

Dr Aiden P. Gregg is a lecturer in psychology at the University of Southampton, where he is a member of the Centre for Research on Self and Identity. His research interests include indirect measures of attitude and self-related motivations. James Klymowsky is Head of Digital at Seven Stones (UK). He has engineered platforms and provided solutions for some of the largest pharmaceutical companies in the world. Dominic Owens is Head of Strategy at Seven Stones (UK). He has 20 years of experience in branding and communication, whether in-house, as a senior client, or as a freelance marketing consultant. Alex Perryman is Chairman of Seven Stones (UK), a pharmaceutical advertising agency based in London.

Address correspondence to: Aiden P. Gregg, University of Southampton, Department of Psychology, Highfield Campus, Southampton SO17 1BJ, United Kingdom. Email: [email protected]
