
The Face in the Crowd Effect: Anger Superiority When Using Real Faces and Multiple Identities

Amy E. Pinkham, Mark Griffin, Robert Baron, and Noah J. Sasson
The University of Pennsylvania

Ruben C. Gur
The University of Pennsylvania and The Philadelphia Veterans Administration Medical Center

Article in Emotion, February 2010. DOI: 10.1037/a0017387

The "face in the crowd effect" refers to the finding that threatening or angry faces are detected more efficiently among a crowd of distractor faces than happy or nonthreatening faces. Work establishing this effect has primarily utilized schematic stimuli, and efforts to extend the effect to real faces have yielded inconsistent results. The failure to consistently translate the effect from schematic to human faces raises questions about its ecological validity. The present study assessed the face in the crowd effect using a visual search paradigm that placed veridical faces, verified to exemplify prototypical emotional expressions, within heterogeneous crowds. Results confirmed that angry faces were found more quickly and accurately than happy expressions in crowds of both neutral and emotional distractors.
These results are the first to extend the face in the crowd effect beyond homogenous crowds to more ecologically valid conditions and thus provide compelling evidence for its legitimacy as a naturalistic phenomenon.

Keywords: face in the crowd effect, emotion recognition, visual search, threat detection

Supplemental materials: http://dx.doi.org/10.1037/a0017387.supp

When presented individually, happy facial expressions are recognized, or categorized, more accurately (Elfenbein & Ambady, 2002) and more quickly than angry expressions (Leppanen, Tenhunen, & Heitanen, 2003). This may occur because greater familiarity with happy expressions in everyday life provides cognitive priming (Öhman, Lundqvist, & Esteves, 2001), because greater physical heterogeneity among negative expressions slows recognition of these emotions (Leppanen & Hietanen, 2004), or because happiness can be characterized by a single salient feature (i.e., a smile) (Leppanen et al., 2003). When presented in the context of other faces, however, the opposite appears true; angry faces are processed more efficiently than happy faces. This finding, known as the "anger superiority effect" or "face in the crowd effect" (FICE), is rooted in evolutionary arguments proposing a fitness advantage for quickly locating, recognizing, and responding to potential environmental threats (Horstmann & Bauland, 2006; Öhman et al., 2001). This process has been linked to specific neural modules, such as the amygdala, that are specialized for processing faces and threat (Öhman & Mineka, 2001).

The threat advantage is most commonly tested with visual search paradigms in which multiple stimuli are presented concurrently. Participants are asked to determine if all presented stimuli are from the same category or if one is different.
Response times (RTs) to detect discrepant stimuli are compared to determine whether one stimulus type is recognized more quickly than another. For facial emotion, happy, angry, and neutral faces are utilized, and comparisons are made between RTs for finding an angry face in a crowd of distractor faces relative to finding a happy face in a crowd of distractors. The FICE is supported if RTs are shorter for angry faces than for happy faces.

Although some paradigms have incorporated real facial stimuli, most have employed schematic stimuli to manipulate and control perceptual differences between emotional expressions (e.g., Fox et al., 2000; Öhman et al., 2001; Tipples, Atkinson, & Young, 2002). Though studies using schematic faces have consistently supported the FICE, they have been criticized for lacking ecological validity. Two common concerns raised about schematic stimuli include (a) schematic stimuli exaggerate facial features and do not always closely represent intended expressions (e.g., the use of a frown or downward curved line representing the mouth in angry expressions [Horstmann & Bauland, 2006]) and (b) using schematic faces results in homogenous crowds of distractor faces, an effect not replicated in nature (Juth, Lundqvist, Karlsson, & Öhman, 2005). These criticisms question whether the reported FICE reflects only perceptual features of the stimuli rather than the emotional expression.

A more fundamental criticism of schematic faces, of course, is that they are not real faces. The evolutionary foundations of a FICE necessitate that it apply to naturally occurring environmental stimuli and not just to controlled artificial representations (Horstmann & Bauland, 2006). Unfortunately, studies using real faces in visual search paradigms have yielded inconclusive results. The earliest study of real faces, represented as black and white sketch-like images, was summarily discredited because of confounds in the stimuli (Hansen & Hansen, 1988; Purcell, Stewart, & Skov, 1996), and subsequent studies that have used photos of faces have produced equivocal results. Different studies have reported advantages for happy expressions (Juth et al., 2005), for angry expressions (Fox & Damjanovic, 2006; Gilboa-Schechtman, Foa, & Amir, 1999; Horstmann & Bauland, 2006), and for angry faces inconsistently across experimental conditions (Williams, Moss, Bradshaw, & Mattingley, 2005).

Author Note: Amy E. Pinkham, Mark Griffin, Robert Baron, and Noah J. Sasson, Department of Psychiatry, The University of Pennsylvania; Ruben C. Gur, Department of Psychiatry, The University of Pennsylvania, and The Philadelphia Veterans Administration Medical Center, Philadelphia, PA. Amy E. Pinkham is now at the Department of Psychology, Southern Methodist University. Correspondence concerning this article should be addressed to Amy E. Pinkham, Department of Psychology, Southern Methodist University, P.O. Box 750442, Dallas, TX 75275-0442. E-mail: apinkham@mail.smu.edu. Emotion, 2010, Vol. 10, No. 1, 141–146. © 2010 American Psychological Association. 1528-3542/10/$12.00. DOI: 10.1037/a0017387

Although several of these studies supported and advanced our understanding of the FICE, they continue to suffer from shortcomings. First, every study reporting a FICE with real faces used only a single identity in each display. This strategy affords control over potential perceptual confounds but compromises the ecological validity of a heterogeneous crowd and may introduce confounds given that the degree of similarity between distractors influences visual search performance (Duncan & Humphreys, 1989). Second, other studies (Horstmann & Bauland, 2006; Williams et al., 2005) informed participants which emotional expression constituted the discrepant stimulus before search. This approach concedes ecological validity by introducing artificial priming that does not often occur in real-world settings.

We report a more ecologically valid test of the FICE.
We implemented a visual search paradigm that maximized ecological validity by using photos of validated veridical emotional expressions and by incorporating multiple identities to create a more realistic "crowd." Both male and female faces were included to maximize heterogeneity, and no individuals were represented more than once in a display. Finally, we implemented a mixed design that included every combination of target and distractor. This strategy enhanced ecological validity by allowing any individual face with any expression to potentially serve as the target.

We predicted that angry faces would be found more quickly and accurately than happy faces among crowds of distractor faces. We expected this to be true within the contexts of both the classical search-asymmetry design (i.e., one angry face in a crowd of happy faces vs. one happy face in a crowd of angry faces) and the constant distractor paradigm (i.e., angry in a crowd of neutral vs. happy in a crowd of neutral) (Horstmann & Bauland, 2006).

Methods

Pilot Study: Stimuli Selection and Validation

An initial investigation was conducted to select and validate facial stimuli included in the subsequent visual search paradigm. Our goal was to obtain happy, angry, and neutral expressions of nine different individuals that were both accurately recognized and characteristic representations of each target emotion.

To assess recognition accuracy, 150 undergraduates (81 female) from the University of Pennsylvania provided forced-choice emotion ratings (happy, angry, or neutral) on 80 photos of 23 unique individuals (12 female). The students received extra credit toward course requirements for participating. Photos were chosen from a larger database of images acquired during facial displays of emotion (Gur et al., 2002) and were processed with Adobe Photoshop to limit individual images to the head and neck and to replace background features with a black background.
Each expression (happy, angry, and neutral) was represented for each individual. Participants categorized all photos, which were randomly presented on a computer screen via Adobe Flash 8.0, as happy, angry, or neutral.

Ratings were then assessed to determine which of the 23 individuals provided the most accurately recognized photos across the three expressions, and the top 9 individuals were selected for inclusion in the visual search task. Across the task, recognition accuracy for happy, angry, and neutral expressions was 98%, 97%, and 93.4%, respectively. For the subset of images selected for the visual search paradigm, recognition accuracy for happy, angry, and neutral expressions was 98.7%, 97.5%, and 95.7%, respectively. The final stimulus set thus contained 27 photos of 9 individuals (5 female), each displaying happy, angry, and neutral expressions.

Next, we sought to validate each selected emotional expression with the Facial Action Coding System (FACS) developed by Ekman and Friesen (1978), which identifies the presence of specific actions of facial muscles called Action Units (AUs). Kohler and colleagues (2004) reported that happy expressions are uniquely characterized by upward turned lip corners (AU 12) and raised cheeks (AU 6) and that the presence of these AUs is positively associated with accurate recognition. Similarly, turned lower lips (AU 16) exposing teeth and a wrinkled nose (AU 9) are unique to expressions of anger, and lowered eyebrows (AU 4) and turned lower lips (AU 16) are most closely associated with accurate identification. Therefore, we required that selected happy and angry expressions each exhibit at least one uniquely characteristic AU in combination with at least one AU associated with accurate identification. Thus, we expected all happy expressions to have both AUs 6 and 12, and we expected angry expressions to have two of the three following AUs: 4, 9, and 16.

Two certified FACS raters assessed the happy and angry expressions for each of the 9 individuals.
The neutral image of each individual served as a baseline comparison. FACS scoring was performed independently by each rater and followed with a consensus conference in which any disagreements were discussed and rerated to achieve agreement. Consistent with Kohler et al. (2004), emphasis was placed on the presence of each AU, and intensity was not scored.

All included happy and angry expressions met our inclusion requirements, providing verification that the chosen stimuli are representative of the target emotions (see online supplemental materials for AU ratings).

Participants

Twenty-six undergraduates (13 female) from the University of Pennsylvania participated. All were right-handed, ranging in age from 18 to 22 years with a mean of 19.46 (SD = 0.95) years. Each provided informed consent and received course credit in exchange for participating.

Stimuli and Apparatus

As detailed above, stimuli were 27 well-validated photos of nine individuals showing angry, happy, and neutral facial expressions. Each photo was 5.3 cm (width) × 5.3 cm (height). To limit perceptual differences across faces and expressions, all faces were presented in grayscale on black backgrounds, and mean luminance and contrast were matched between the three different photos of each individual. For each trial, nine images were presented simultaneously in a 3 × 3 matrix measuring 15.9 cm (width) × 15.9 cm (height), and all matrices were viewed at a distance of 60 cm. Examples of stimulus matrices are presented in Figure 1.

Stimulus presentation and data collection were conducted with an IBM Thinkpad Lenovo T60 laptop computer with a 2.16 GHz processor and 15.4-inch monitor. A refresh rate of 60 Hz and a resolution of 1440 × 900 pixels were used. Presentation version 12.1 software (http://www.neurobs.com) delivered stimuli and recorded responses and RTs.
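The luminance- and contrast-matching step described above can be sketched in a few lines. The snippet below is a minimal illustration of one common approach (linearly rescaling each grayscale image toward shared target statistics); the function name and the target values are illustrative assumptions, not the authors' actual procedure:

```python
import random
import statistics

def match_luminance_contrast(image, target_mean=128.0, target_std=40.0):
    """Linearly rescale a grayscale image (a list of rows of 0-255 values)
    so that its mean luminance equals target_mean and its contrast
    (population standard deviation) equals target_std, clipping the
    result back into the valid 0-255 range.  Targets are illustrative."""
    pixels = [p for row in image for p in row]
    mean = statistics.fmean(pixels)
    std = statistics.pstdev(pixels)
    scale = 0.0 if std == 0 else target_std / std  # flat image: shift only
    return [[min(255.0, max(0.0, (p - mean) * scale + target_mean))
             for p in row] for row in image]

# Toy demonstration: three synthetic "photos" of one identity end up
# with identical mean luminance and contrast after matching.
random.seed(0)
photos = [[[random.randint(0, 255) for _ in range(64)] for _ in range(64)]
          for _ in range(3)]
for matched in map(match_luminance_contrast, photos):
    flat = [p for row in matched for p in row]
    print(round(statistics.fmean(flat)), round(statistics.pstdev(flat)))
```

Any such linear rescaling equates first- and second-order image statistics only; in practice tools like Photoshop or dedicated scripts may match histograms more fully.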
Responses were made on the keyboard, on which three keys were activated: one to indicate that all displayed stimuli were showing the same expression, one to indicate that a discrepant expression was present, and one to advance to the next trial.

Design

The task consisted of 162 matrices presented in a random order. One-third (54) of the matrices were target-absent trials composed of faces showing the same expression (i.e., all happy, all angry, or all neutral), and the remaining two-thirds (108) were target trials composed of 8 faces showing one expression (e.g., happy) and 1 target face showing a different expression (e.g., angry). In target trials, all combinations of distractors and targets were utilized, resulting in the following six different target-distractor combinations: 1 happy, 8 neutral; 1 happy, 8 angry; 1 angry, 8 neutral; 1 angry, 8 happy; 1 neutral, 8 happy; and 1 neutral, 8 angry. Two images of the same individual were never included in a matrix. Photo positions within each matrix were randomly assigned under the constraint that target photos had to appear in each position of the matrix twice, and the order of trials was random across participants. Dependent variables were RTs and accuracy.

Procedure

Participants were tested individually in a darkened room, and a trained experimenter reviewed onscreen instructions. Participants were informed that they would see a series of matrices consisting of several faces expressing happy, angry, or neutral expressions and that their job was to press the 'S' key on the keyboard if all faces showed the same expression or the 'L' key if one face showed an expression differing from the others.

Figure 1. Example stimulus matrices from the visual search task. Clockwise from the top left: an angry face among neutral faces, a happy face among neutral faces, a happy face among angry faces, and an angry face among happy faces. For demonstration purposes, individuals are shown within the same position of each matrix.

Each trial began with the presentation of a white fixation cross displayed in the middle of the screen for 500 ms before being replaced by the stimulus matrix. Matrices then remained onscreen until the participant responded or until 2,000 ms had elapsed. Participants were instructed that they could respond after the matrix disappeared but that they should try to respond while it was still showing. Following participant responses, the word "Next" appeared on the screen, indicating that they could proceed to the next trial. When ready, participants pressed the space bar to begin the next trial. This resulted in a variable intertrial interval that allowed participants to move through the task at their own pace.

Before the experimental task, all participants completed 18 practice trials using schematic faces as stimuli. These practice trials were implemented to familiarize participants with the task design and stimulus display timing. In both the practice and experimental tasks, no feedback was provided.

Statistical Analysis

The hypothesis that angry targets would be more quickly and accurately identified than happy targets was tested in separate 2 (target type: angry vs. happy) × 2 (distractor type: neutral vs. emotional) repeated measures ANOVAs, one for RT and one for accuracy. Only correct responses were included in the RT analysis, and RT outliers, defined as more than 2 SDs from the individual's mean (3.1% of all trials), were excluded. Follow-up paired t tests were conducted to assess the FICE both within the constant distractor paradigm and within the classical search asymmetry design. Our hypotheses stipulated specific directional effects (anger < happy for RT and anger > happy for accuracy), permitting one-tailed tests for these follow-up comparisons.
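The trimming rule and the follow-up paired comparisons just described are simple enough to sketch. The snippet below is an illustrative sketch, not the authors' analysis code: it drops RTs more than 2 SDs from a participant's own mean and computes the paired t statistic and Cohen's d from matched lists of per-participant condition means (the one-tailed p would then be read from a t distribution with n − 1 degrees of freedom); all data values are hypothetical:

```python
import statistics

def trim_rts(rts, n_sd=2.0):
    """Exclude RTs falling more than n_sd standard deviations from this
    participant's own mean (the criterion the paper used, which removed
    3.1% of all trials)."""
    m = statistics.fmean(rts)
    s = statistics.stdev(rts)
    return [rt for rt in rts if abs(rt - m) <= n_sd * s]

def paired_t(xs, ys):
    """Paired t statistic and Cohen's d (mean difference / SD of the
    differences) for two matched lists of per-participant means."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_diff = statistics.fmean(diffs)
    sd_diff = statistics.stdev(diffs)
    d = mean_diff / sd_diff            # Cohen's d for paired designs
    t = mean_diff / (sd_diff / n ** 0.5)
    return t, d

# Toy example: one participant's RTs with a single extreme trial,
# then a paired comparison across four hypothetical participants.
print(trim_rts([100, 100, 100, 100, 100, 100, 100, 100, 100, 1000]))
t, d = paired_t([1841, 1900, 1750, 1820], [1698, 1720, 1650, 1701])
print(round(t, 2), round(d, 2))
```

Note that trimming relative to each participant's own mean, rather than a grand mean, keeps slow-but-consistent responders from losing a disproportionate share of trials.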
Because all combinations of targets and distractors were utilized, we conducted paired t tests to examine the influence of emotional distractors on RT and accuracy for trials in which neutral faces were the targets. Finally, RT and accuracy for target-absent trials were analyzed in a one-way (trial type: angry vs. happy vs. neutral) ANOVA. Where Mauchly's test indicated that the assumption of sphericity had been violated, Greenhouse-Geisser corrections were utilized.

Results

Consistent with prediction, the analysis of RT revealed a significant main effect of target type, indicating that angry targets were detected more quickly (1,698 ms) than happy targets (1,841 ms; F(1, 25) = 27.65, p < .01, ηp² = .53). A significant main effect of distractor type indicated that targets were found more quickly among neutral distractors (1,613 ms) than among emotional ones (1,926 ms; F(1, 25) = 64.26, p < .01, ηp² = .72). The interaction between target and distractor type was also significant, demonstrating that the effect of distractor type was less pronounced when angry faces were targets as compared to when happy expressions were targets (Figure 2; F(1, 25) = 15.67, p < .01, ηp² = .39).

Similar results were evident for accuracy. Angry targets were found with greater accuracy (84.9%) than happy targets (74.4%; F(1, 25) = 21.65, p < .01, ηp² = .46), and participants responded more accurately when neutral stimuli were used as distractors (90%) as compared to emotional stimuli (69.4%; F(1, 25) = 101.52, p < .01, ηp² = .80).
The interaction between target and distractor type was again significant, revealing that the effect of distractor on accuracy was smaller when angry stimuli were targets (Figure 2; F(1, 25) = 15.76, p < .01, ηp² = .39).

Follow-up analyses directly comparing RT and accuracy for angry faces in neutral crowds to happy faces in neutral crowds demonstrated that when distractor type remained constant across conditions, angry targets were found more quickly (1,584 ms) and accurately (92.3%) than happy targets (1,642 ms, 87.8%; t(25) = 1.83, p = .04, d = .36 for RT and t(25) = 1.96, p = .02, d = .38 for accuracy). Likewise, in accord with the search asymmetry design, comparing RT and accuracy for finding angry faces in happy crowds to finding happy faces in angry crowds revealed that angry targets were again identified more quickly (1,813 ms) and accurately (77.6%) than happy targets (2,040 ms, 61.1%; t(25) = 6.14, p < .01, d = 1.20 for RT and t(25) = 5.35, p < .01, d = 1.05 for accuracy).

Additional paired t tests conducted to assess potential differences in accuracy and RT between trials comprised of neutral targets in either happy or angry crowds revealed no difference in accuracy between these two conditions (80.8% and 81.6%, respectively; t(25) = 0.32, p = .75, d = .06) but a significant difference in RT, t(25) = 2.18, p = .04, d = .43. Neutral targets in happy crowds were found significantly faster than neutral targets in angry crowds (1,759 and 1,863 ms, respectively).

Figure 2. Mean response time (RT) for correct responses (left) and mean accuracy (right) for visual search performance on target-present trials when targets were emotional expressions. Vertical bars indicate SE.

For target-absent trials, significant main effects were found for trial type on RT (F(1.59, 39.69) = 56.43, p < .01, ηp² = .69) and accuracy (F(2, 24) = 11.75, p < .01, ηp² = .49).
Follow-up tests demonstrated that RTs were fastest for all-neutral trials (1,899 ms), followed by all-happy (2,128 ms) and then all-angry trials (2,375 ms), and that RTs for each trial type all significantly differed from each other (p < .01 for all comparisons; d = 1.51 for happy vs. angry, 0.94 for happy vs. neutral, and 1.78 for angry vs. neutral). Regarding accuracy, participants were less accurate on all-angry trials (82.5%) as compared to all-happy (94.2%; p < .01, d = .91) or all-neutral trials (92.7%; p < .01, d = .84), which did not differ from each other (p = .48, d = .14).

Discussion

The primary purpose of the present study was to assess the FICE in a visual search paradigm that maximized ecological validity by utilizing veridical faces and heterogeneous crowds. Our results suggest that angry faces were consistently found more quickly and accurately in a crowd of distractor faces than were happy faces. These results were evident both in the constant distractor paradigm, in which only neutral faces are used in the crowd, and in the traditional search asymmetry design, in which search performance is compared between angry faces in happy crowds and happy faces in angry crowds.

The results reported here are largely consistent with those observed using well-controlled schematic stimuli. The current investigation, however, avoids the three main criticisms of schematic stimuli. First, the impoverished nature of schematic faces often results in stimuli that can only approximate the intended expression. We addressed this limitation by employing photos of real expressions and by validating each expression with FACS before inclusion in the visual search task. This procedure assured that the utilized facial stimuli accurately represented the target expression and included the features most highly linked to correct identification.

Second, the use of schematic stimuli also results in crowds of identical clones.
In addition to sacrificing a realistic "crowd" effect, the use of identical distractors may artificially inflate the anger advantage because search difficulty decreases with increased similarity between distractors (Duncan & Humphreys, 1989). The present paradigm ameliorates both problems by incorporating distinct individuals as stimuli.

Finally, schematic stimuli have received criticism for lacking ecological validity. For the FICE to reflect a real-world phenomenon, it must also be evident using naturalistic stimuli. To our knowledge, the present study is the first to find a FICE under conditions that maximize ecological validity by incorporating real, validated facial expressions within a crowd of nonrepeating identities. These results are critical to the proposed evolutionary mechanism of the FICE, as they demonstrate the effect in a paradigm that more closely represents naturalistic interactions.

Additionally, the present study may inform our understanding of factors contributing to the FICE. The finding of a FICE in the constant distractor paradigm provides compelling support for the more efficient identification of angry expressions relative to happy expressions within a crowd of faces. This comparison between angry and happy expressions in identical distractor crowds is critical for demonstrating that the effect is not driven simply by faster search through happy distractors, as could be concluded from the search asymmetry design. Similarly, because participants responded more slowly and less accurately on nontarget trials composed of all angry faces and on target trials in which angry faces were used as distractors, angry expressions may automatically attract attention and hamper attentional shifts to other stimuli.
This interpretation is consistent with Fox and colleagues (2000), who suggest that the FICE is driven both by enhanced detection of angry faces and by attentional capture by angry expressions, which results in slowed search through angry crowds relative to happy or neutral crowds.

Finally, the FICE presents an interesting contrast to recognition time and accuracy when emotional faces are presented alone. Although slower responses to angry faces in isolation may reflect attentional capture, this contrast may also reflect disparities in task demands, suggesting that a different type of sensory and cognitive processing is engaged when simultaneously assessing multiple salient stimuli. Specifically, providing semantic labels for emotional expressions requires engagement of frontal cortical areas that modulate and suppress the amygdala activity thought to drive the FICE (Hariri, Bookheimer, & Mazziotta, 2000; Öhman & Mineka, 2001). Thus, the FICE likely reflects automatic, or implicit, appraisal, whereas emotion labeling requires explicit processing.

Although this study extends the existing literature on the FICE, some limitations must be considered. First, we did not vary matrix size and therefore cannot address whether angry faces are detected by parallel or serial search. Future investigations that manipulate matrix size or incorporate eye tracking may help address this question. Second, only whole faces were utilized, which leaves open the possibility that specific facial features (i.e., the mouth or eyes) may have disproportionately contributed to the effect shown here. Future studies that show features in isolation will likely be informative. Similarly, emotional intensity was not controlled, and it is possible that the angry expressions used here were more intense than the happy expressions.
Intense angry expressions (i.e., anger with exposed teeth) may be less common in everyday interactions, and given that familiarity, particularly of distractors, has been shown to influence visual search (Malinowski & Hubner, 2001; Shen & Reingold, 2001; Wang, Cavanagh, & Green, 1994), both familiarity and emotional intensity should be considered in future work. Finally, the presentation of stimuli in grayscale may have somewhat limited ecological validity, but this approach was necessary to match visual properties of the stimuli. These limitations notwithstanding, the present study offers an ecologically valid demonstration of the FICE, indicating a strong advantage for processing threatening, relative to nonthreatening, environmental stimuli.

Acknowledgments

The authors thank Christian Kohler, MD, and Kristin Healey for their assistance with performing FACS ratings and Paul Grant, PhD, for his assistance with subject recruitment.

References

Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458.
Ekman, P., & Friesen, W. V. (1978). Manual of the Facial Action Coding System (FACS). Palo Alto, CA: Consulting Psychologists Press.
Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128, 203–235.
Fox, E., & Damjanovic, L. (2006). The eyes are sufficient to produce a threat superiority effect. Emotion, 6, 534–539.
Fox, E., Lester, V., Russo, R., Bowles, R. J., Pichler, A., & Dutton, K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition & Emotion, 14, 61–92.
Gilboa-Schechtman, E., Foa, E. B., & Amir, N. (1999). Attentional biases for facial expression in social phobia: The Face-in-the-Crowd paradigm. Cognition & Emotion, 13, 305–318.
Gur, R. C., Sara, R., Hagendoorn, M., Marom, O., Hughett, P., Macy, L., & Gur, R. E. (2002).
A method for obtaining 3-dimensional facial expressions and its standardization for use in neurocognitive studies. Journal of Neuroscience Methods, 115, 137–143.
Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54, 917–924.
Hariri, A. R., Bookheimer, S. Y., & Mazziotta, J. C. (2000). Modulating emotional responses: Effects of a neocortical network on the limbic system. Neuroreport, 11, 43–48.
Horstmann, G., & Bauland, A. (2006). Search asymmetries with real faces: Testing the anger-superiority effect. Emotion, 6, 193–207.
Juth, P., Lundqvist, D., Karlsson, A., & Öhman, A. (2005). Looking for foes and friends: Perceptual and emotional factors when finding a face in the crowd. Emotion, 5, 379–395.
Kohler, C. G., Turner, T., Stolar, N. M., Bilker, W. B., Brensinger, C. M., Gur, R. E., et al. (2004). Differences in facial expressions of four universal emotions. Psychiatry Research, 128, 235–244.
Leppanen, J. M., & Hietanen, J. K. (2004). Positive facial expressions are recognized faster than negative facial expressions, but why? Psychological Research, 69, 22–29.
Leppanen, J. M., Tenhunen, M., & Heitanen, J. K. (2003). Faster choice reaction times to positive than to negative facial expressions: The role of cognitive and motor processes. Journal of Psychophysiology, 17, 113–123.
Malinowski, P., & Hubner, R. (2001). The effect of familiarity on visual search performance: Evidence for learned basic features. Perception and Psychophysics, 63, 458–463.
Öhman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80, 381–396.
Öhman, A., & Mineka, S. (2001). Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychological Review, 108, 483–522.
Purcell, D. G., Stewart, A. L., & Skov, R. B. (1996). It takes a confounded face to pop out of a crowd.
Perception, 25, 1091–1108.
Shen, J., & Reingold, E. M. (2001). Visual search asymmetry: The influence of stimulus familiarity and low-level features. Perception and Psychophysics, 63, 464–475.
Tipples, J., Atkinson, A. P., & Young, A. W. (2002). The eyebrow frown: A salient social signal. Emotion, 2, 288–296.
Wang, Q., Cavanagh, P., & Green, M. (1994). Familiarity and pop-out in visual search. Perception and Psychophysics, 56, 495–500.
Williams, M. A., Moss, S. A., Bradshaw, J. L., & Mattingley, J. B. (2005). Look at me, I'm smiling: Visual search for threatening and nonthreatening facial expressions. Visual Cognition, 12, 29–50.

Received December 14, 2008
Revision received August 18, 2009
Accepted August 18, 2009
