Neuropsychologia 48 (2010) 1725–1734

Emotion modulates language production during covert picture naming

José A. Hinojosa a,*, Constantino Méndez-Bértolo a, Luis Carretié b, Miguel A. Pozo a

a Instituto Pluridisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain
b Departamento de Psicología Biológica y de la Salud, Universidad Autónoma de Madrid, Madrid, Spain

* Corresponding author. E-mail address: [email protected] (J.A. Hinojosa).

Article history: Received 27 August 2009; received in revised form 19 January 2010; accepted 17 February 2010; available online 24 February 2010.

Keywords: Emotion; Language production; Phonological encoding; Grapheme monitoring; ERPs

Abstract

Previous studies have shown that emotional content modulates the activity of several components of the event-related potentials during word comprehension. However, little is known about the impact of affective information on the different processing stages involved in word production. In the present study we aimed to investigate the influence of positive and negative emotions on phonological encoding, a process that has been shown to take place between 300 and 450 ms in previous studies. Participants performed letter searching in a picture naming task. It was found that grapheme monitoring in positive and negative picture names was associated with slower reaction times and enhanced amplitudes of a positive component around 400 ms as compared to monitoring letters in neutral picture names. We propose that this modulation reflects a disruption of phonological encoding processes as a consequence of the capture of attention by affective content. Grapheme monitoring in positive picture names also elicited higher amplitudes than letter searching in neutral image names in a positive component around 100 ms. This amplitude enhancement might be interpreted as a manifestation of the 'positive offset' during conceptual preparation processes. The results of a control experiment with a passive viewing task showed that both effects cannot be simply attributed to the processing of the emotional images per se. Overall, it seems that emotion modulates word production at several processing stages.

© 2010 Elsevier Ltd. All rights reserved. doi:10.1016/j.neuropsychologia.2010.02.020

1. Introduction

Most of the research on emotion has focused on how the brain processes the affective content of pictorial stimuli (e.g., Carretié et al., 2009; Codispoti, Ferrari, & Bradley, 2007; Delplanque, Silvert, Hot, Rigoulot, & Sequeira, 2006; Schupp, Junghöfer, Weike, & Hamm, 2004; Smith, Cacioppo, Larsen, & Chartrand, 2003) or faces (e.g., Pourtois, Dan, Grandjean, Sander, & Vuilleumier, 2005; Schacht & Sommer, 2009a; Vuilleumier & Pourtois, 2007). More recently, researchers' attention has been directed to the impact of emotional content in a different set of stimuli that are perceptually simple and highly symbolic, that is, linguistic stimuli. A number of studies have tried to elucidate the temporal course and the brain areas implicated in the processing of affective information during single word comprehension (Hinojosa, Carretié, Valcárcel, Méndez-Bértolo, & Pozo, 2009; Kissler, Herbert, Winkler, & Junghofer, 2009; Naumann, Maier, Diedrich, Becker, & Bartussek, 1997; Scott, O'Donnell, Leuthold, & Sereno, 2009). The results of these studies have shown that emotional content modulates the amplitude of an early posterior negativity generated in extrastriate cortices. This is thought to reflect rudimentary semantic stimulus classification that is sensitive to attentional modulations (Herbert, Junghofer, & Kissler, 2008; Kissler, Herbert, Peyk, & Junghofer, 2007; Schacht & Sommer, 2009b). Also, the amplitude of a late positivity is enhanced for emotionally arousing words, which has been related to the allocation of additional attentional resources for efficient memory encoding of affective features (Dillon, Cooper, Grent-'t-Jong, Woldorff, & LaBar, 2006; Herbert, Kissler, Junghofer, Peyk, & Rockstroh, 2006; Hinojosa, Carretié, Méndez-Bértolo, Míguez, & Pozo, 2009). Even though the influence of emotional content in word comprehension has been well established, little is known about the effects of affective information in language production. The current investigation addresses this question employing the high temporal resolution of event-related brain potentials (ERPs).

Based on speech error evidence, the most prevalent theoretical view of language production assumes that to produce a word, a speaker will first activate an appropriate lexical concept. Lexical concepts are conceived as nodes in a semantic network, so there is always some activation spreading from the target concept to semantically related concepts. As a consequence, the corresponding lexical item (lemma) in the mental lexicon, which is an abstract description of the syntactic properties of the item, is activated during 'lexical selection'. Lexical selection is achieved through a process of competitive, spreading activation of both the target and the related nontarget nodes, with the node with the highest activation being selected. The next stage is called 'phonological encoding', and involves the retrieval of word form properties. Two kinds of phonological information become available. The first is the word's segmental composition, roughly the individual phonemes of a word and their ordering (during segmental spell out). The second is the word's metrical structure, that is, the number of syllables and the word's stress pattern over these syllables (during metrical spell out). In a process called segment-to-frame association, segmental and metrical information is combined into a phonological word. During phonological word formation the previously retrieved segments are syllabified according to universal and language-specific rules. The resulting phonological syllables activate phonetic syllables in a so-called mental syllabary that are used by the speaker to prepare the articulatory gestures for words in a final processing stage (Levelt, 1993; Levelt, 2001; Levelt, Roelofs, & Meyer, 1999; but see, for instance, Dell, Schwartz, Martin, Saffran, & Gagnon, 1997 for alternative proposals). The model assumes a serial architecture, so that lexical selection precedes phonological form encoding. This assumption has been tested in several electrophysiological studies using a variety of go/no-go tasks that have confirmed the serial nature of word production. Basically, these studies found effects in the lateralized readiness potential (related to response preparation) and the N200 (related to response inhibition) suggesting that grammatical and semantic information processing during lexical selection precedes phonological processing by between 40 and 170 ms (Rodriguez-Fornells, Schmitt, Kutas, & Münte, 2002; Smith, Münte, & Kutas, 2000; Van Turennout, Hagoort, & Brown, 1997, 1998; Zhang & Damian, 2009; but see Abdel Rahman & Sommer, 2003; Abdel Rahman, van Turennout, & Levelt, 2003).

Evidence about the mechanisms involved in language production mainly comes from the use of the picture naming task. In this paradigm, participants are required to retrieve the name of an object displayed in a picture and/or to monitor the presence of a linguistic unit (Howard, Nickels, Coltheart, & Cole-Virtue, 2006; Indefrey & Levelt, 2004; Roelofs, 2008; Schuhmann, Schiller, Goebel, & Sack, 2009). Picture naming has proved useful for characterizing the stages involved in word production, even when participants were instructed to name the picture silently instead of overtly (Eulitz, Hauk, & Cohen, 2000; Levelt, Praamstra, Meyer, Helenius, & Salmelin, 1998; Rodriguez-Fornells et al., 2002; Salmelin, Hari, Lounasmaa, & Sams, 1994; Smith, Schiltz, Zaake, Kutas, & Münte, 2001). On the basis of data obtained in picture naming studies, Indefrey and Levelt (2004) delineated the time course of word production in a meta-analysis that included 82 studies. These authors estimated that conceptual representations are accessed within the first 175 ms, followed by lexical access (175–250 ms) and phonological encoding (250–450 ms). Finally, articulatory preparation would occur between 450 and 600 ms. To date, research on picture naming has shown that several variables influence language production at different processing stages, including the age of acquisition of the names (early acquired picture names are produced faster than later acquired picture names; Bonin, Chalard, Méot, & Barry, 2006; Catling & Johnston, 2006; Morrison & Ellis, 2000; Morrison, Ellis, & Quinlan, 1992), familiarity (better performance in naming familiar than unfamiliar picture names; Lambon Ralph, Graham, Ellis, & Hodges, 1998; Meltzer, Postman-Caucheteux, McArdle, & Braun, 2009), word frequency (shorter naming latencies for high frequency as compared to low frequency words; Dent, Johnston, & Humphreys, 2008; Graves, Grabowski, Mehta, & Gordon, 2007; Kavé, Samuel-Enoch, & Adiv, 2009), or word length (naming reaction times increase as picture names get longer; Okada, Smith, Humphries, & Hickok, 2003; Wilson, Isenberg, & Hickok, 2009). However, to our knowledge no previous study has attempted to determine whether affective content modulates word production.

With this purpose, the current study evaluates grapheme monitoring in a picture naming task that presented pictures of emotional objects. This paradigm is a variation of the phoneme-monitoring task (Wheeldon & Levelt, 1995). A target grapheme is presented before the picture and participants have to indicate whether this target is present or absent in the picture name. The grapheme monitoring task is suitable for studying phonological encoding processes (Hauk, Rockstroh, & Eulitz, 2001) because of the close relationship that exists between graphemic and phonemic codes for words (see Wheeldon & Levelt, 1995, and Indefrey & Levelt, 2004 for a discussion of this issue). Also, this task has been used in previous ERP research on language production. In this regard, Hauk et al. (2001) found that grapheme monitoring in picture naming is associated with a positive component in the time interval between 300 and 450 ms. Consistent with the proposal made by Indefrey and Levelt (2004), this component was thought to reflect the final stages of phonological encoding and the transition to silent articulation (Hauk et al., 2001). This assertion has received additional support from the results of a MEG study that used a similar task in which participants had to monitor phonological information by deciding whether the name of the object started with a vowel (Vihla, Laine, & Salmelin, 2006). It was found that phonological encoding takes place after 300 ms, as reflected in the sustained activation of the posterior temporal and inferior frontal cortices. Also, using electrodes implanted in language-related areas of epileptic patients, Sahin, Pinker, Cash, Schomer, and Halgren (2009) reported phonological processing to occur around 450 ms when subjects produced grammatically inflected words. Finally, ERP abnormalities in a similar time window (300–450 ms) have been found for aphasic individuals with phonological impairments but not for those with semantic or lexical deficits (Laganaro, Morand, & Schnider, 2009; Laganaro, Morand, Schwitter, et al., 2009).

Due to the lack of studies on affective processing in word production, the hypotheses that might be outlined are tentative. In language comprehension research, the emotional content of words has been shown to be able to capture attention and disrupt an ongoing task, especially during the processing of intense negative words. This modulation was reflected in delayed reaction times (RTs) and enhanced amplitudes of several brain waves (Carretié et al., 2008; Kuchinke et al., 2005; Larsen, Mercer, Balota, & Strube, 2008; McKay et al., 2004; Pratto & John, 1991). If this finding generalizes to language production, in particular to phonological encoding, the grapheme monitoring of picture names with emotional content should enhance the amplitude of a positive component, as compared to neutral words, in the time range between 300 and 450 ms. Grapheme monitoring of emotional picture names should also elicit longer RTs than in neutral words. Alternatively, the absence of amplitude and RT differences in this time window would indicate that affective information exerts no influence on phonological encoding during word production.

2. Methods

2.1. Participants

Thirty-four native Spanish speakers took part in the experiment (30 females; 20–33 years, mean 22 years; lateralization quotient 50–100%, mean 78%, as measured by the Edinburgh Handedness Scale; Oldfield, 1971). All participants reported normal or corrected-to-normal vision. They gave their informed consent to participate in the study.

2.2. Stimuli

The stimuli were selected from a set of 107 pictures taken from the IAPS database (Lang, Bradley, & Cuthbert, 2001). These pictures included negative, positive and neutral stimuli. In a pre-test, 19 subjects (different from those who participated in the ERP recording) named all the images and rated the names they gave on a 9-point Likert scale in the dimensions of arousal and valence (9 being very activating and very positive, respectively). Only pictures for which at least 90% of the subjects used the same name were further considered. An equal number of pictures for each of the three emotional categories was selected according to the following criteria, which were contrasted via one-way analysis of variance (ANOVA; see Table 1) and post-hoc analyses with the Bonferroni correction (alpha < .05): (a) positive, negative and neutral pictures differed in valence; (b) positive and negative pictures had similar arousal and differed from neutral pictures in this dimension; (c) all names were one-word familiar names; (d) the names used by the participants had similar frequency of use in Spanish (Alameda & Cuetos, 1995); and (e) the names were equated in word length. Only 18 positive, 18 negative and 18 neutral pictures met these restricted criteria. They were also matched in physical attributes and complexity. Table 1 summarizes mean values in arousal and valence for pictures, as well as mean word frequency, word length, arousal and valence for picture names. All post-hoc analyses were in the expected direction with one exception: even though the comparison between negative and positive pictures in the arousal dimension was not significant (5.34 for negative pictures vs 5.24 for positive pictures), there were differences in the arousal ratings that subjects gave to the corresponding picture names (6.48 for negative picture names vs 5.73 for positive picture names).

Table 1
Means of valence (1, negative to 9, positive) and arousal (1, calming to 9, arousing) IAPS ratings for every picture type. The assessments given by the independent sample of subjects to picture names are also shown (more details are in the main text). Finally, word length (number of syllables) and word frequency are provided. The last rows of each block show the results of the statistical analyses concerning each of these variables.

                Valence                          Arousal                            Frequency (per 2 million)   Length
Pictures
  Negative      3.3                              5.34                               -                            -
  Positive      7.26                             5.24                               -                            -
  Neutral       5.09                             2.83                               -                            -
  ANOVA         F = 256.01***                    F = 77.83***                       -                            -
  Post-hoc      Pos > Neu, Pos > Neg, Neu > Neg  Neg > Neu, Pos > Neu               -                            -
Picture names
  Negative      3.07                             6.48                               29                           3
  Positive      6.02                             5.73                               40                           2.78
  Neutral       4.64                             4.34                               38                           2.56
  ANOVA         F = 98.87***                     F = 54.12***                       F = .53 n.s.                 F = 1.5 n.s.
  Post-hoc      Pos > Neu, Pos > Neg, Neu > Neg  Neg > Neu, Neg > Pos, Pos > Neu    -                            -

* p < .05, ** p < .01, *** p < .001, + statistical trend p < .1, n.s. non-significant, df: 2,34.
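The matching procedure just described amounts to a series of one-way ANOVAs with Bonferroni-corrected pairwise follow-ups. The sketch below illustrates this kind of check in Python; the file name and column names are hypothetical and not part of the original materials.

    # Illustrative check of the stimulus matching described above: a one-way ANOVA per
    # variable across the three affective categories, with Bonferroni-corrected pairwise
    # t-tests when the omnibus test is significant. File and column names are assumptions.
    from itertools import combinations
    import pandas as pd
    from scipy import stats

    ratings = pd.read_csv("picture_name_ratings.csv")   # columns: category, valence, arousal, frequency, length

    for measure in ["valence", "arousal", "frequency", "length"]:
        groups = [g[measure].to_numpy() for _, g in ratings.groupby("category")]
        f_val, p_val = stats.f_oneway(*groups)
        print(f"{measure}: F = {f_val:.2f}, p = {p_val:.4f}")
        if p_val < .05:
            pairs = list(combinations(sorted(ratings["category"].unique()), 2))
            for a, b in pairs:
                _, p_pair = stats.ttest_ind(ratings.loc[ratings["category"] == a, measure],
                                            ratings.loc[ratings["category"] == b, measure])
                print(f"  {a} vs {b}: Bonferroni-corrected p = {min(p_pair * len(pairs), 1.0):.4f}")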

All pictures were presented twice during the experimental session, once with the target grapheme present and once with the target grapheme absent. Repetition effects have been found to influence brain waves by increasing the amplitude of several components after the first presentation of a stimulus (Doyle, Rugg, & Wells, 1996; Rugg et al., 1998). However, it seems that the amplitude is not affected after the second presentation of the stimuli. Moreover, in the case of emotional stimuli this effect seems to be homogeneously distributed and there is no evidence for differential repetition modulation among affective stimulus categories (Olofsson & Polich, 2007; Rozenkrants, Olofsson, & Polich, 2008). Even though these findings preclude attributing any possible modulation of the ERPs that differentiates between stimulus types to repetition effects, participants saw the pictures on the computer and named them before the practice sequence. Therefore, once the experiment started, none of the images were presented for the first time to subjects. This procedure also allowed us to ensure that all participants knew the names of the objects (they made .85 naming errors on average). Correct picture names were reported to participants in the few cases in which they made an error.

2.3. Procedure

Participants had to perform a grapheme monitoring task (Hauk et al., 2001; Özdemir, Roelofs, & Levelt, 2007). Phonological codes are considered to be involved even with visual presentation of stimuli (Jescheniak & Schriefers, 2001; Zubicaray, McMahon, Eastburn, & Wilson, 2002). Also, Spanish is a transparent language that has a fairly shallow orthography with regular spelling-to-sound correspondences, which makes it difficult to distinguish between graphemic and phonemic effects (Ardila, 1991). Given this close correspondence, the grapheme monitoring task was preferred to the classical phonological monitoring task since it allows the presentation of all the stimuli in the same sensory modality. Stimuli were presented to every participant in two sequences with the same proportion of negative, positive and neutral pictures, as well as yes/no responses. Every stimulus was presented once in each of the sequences. The order of sequences was counterbalanced across participants. For each category, the target letter was present in half of the pictures whereas it was absent in the other half. In those images in which the target grapheme was present, it could appear equiprobably at a random position within the first third, the second third or the last third of the syllables of the picture name. Target letters were always consonants, since vowels are known to be associated with longer production times and consonants and vowels seem to be represented differently in the brain (Caramazza, Chialant, Capasso, & Miceli, 2000; Carreiras, Vergara, & Perea, 2009).

Fig. 1 exemplifies the experimental procedure. Each trial began with a question about the target grapheme that was presented for 500 ms (for instance, "Is there a B?"). A blank screen replaced the question for 500 ms, immediately followed by a picture presented for 1000 ms. The intertrial interval was 3000 ms. In the first sequence, participants were instructed to respond with the left index finger if the target letter was present in the picture name and with the right index finger if the picture name did not contain the target grapheme. The assignment was reversed for the other sequence. They were told to minimize blinking. A 5 min break was allowed between the two test sequences. A practice sequence was presented before the first experimental sequence. As already explained, prior to the practice sequence, and with the purpose of minimizing confounding repetition effects and ensuring that they used the intended names, participants saw the pictures on a computer monitor.

Fig. 1. Schematic illustration of the stimulation paradigm (¿Tiene una "T"? = Is there a "T"?).
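As a rough illustration of the trial structure just described, the sketch below builds one counterbalanced sequence (target grapheme present for half of the pictures in each category, consonant targets, fixed timing). It is a simplified reconstruction under stated assumptions, not the authors' stimulation code, and it ignores the constraint on the target's position within the name; the example picture names are only placeholders.

    # Simplified construction of one experimental sequence: each picture occurs once,
    # the target consonant is present in the name for half of the items per category
    # and absent for the other half; timing constants follow the text above.
    import random

    QUESTION_MS, BLANK_MS, PICTURE_MS, ITI_MS = 500, 500, 1000, 3000
    CONSONANTS = set("bcdfghjklmnpqrstvwxyz")        # target letters were always consonants

    def build_sequence(names_by_category, seed=0):
        """names_by_category: dict mapping 'positive'/'negative'/'neutral' to picture names."""
        rng = random.Random(seed)
        trials = []
        for category, names in names_by_category.items():
            names = list(names)
            rng.shuffle(names)
            half = len(names) // 2
            for i, name in enumerate(names):
                present = i < half
                in_name = [c for c in name.lower() if c in CONSONANTS]
                target = rng.choice(in_name) if present else rng.choice(sorted(CONSONANTS - set(name.lower())))
                trials.append({"name": name, "category": category, "target": target,
                               "target_present": present,
                               "timing_ms": (QUESTION_MS, BLANK_MS, PICTURE_MS, ITI_MS)})
        rng.shuffle(trials)
        return trials

    example = build_sequence({"neutral": ["silla", "libro"], "positive": ["flor", "beso"],
                              "negative": ["pistola", "tumba"]})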

2.4. Data acquisition

Electroencephalographic data were recorded using an electrode cap (ElectroCap International) with tin electrodes. A total of 58 scalp locations homogeneously distributed over the scalp were used (see Fig. 2). All scalp electrodes, as well as one electrode at the left mastoid (M1), were referenced to one electrode placed at the right mastoid (M2). Bipolar horizontal and vertical electrooculogram was recorded for artifact rejection purposes. Electrode impedances were kept below 5 kΩ. The signal was recorded continuously with a bandpass from .1 to 50 Hz (3 dB points for −6 dB/octave roll-off) and the digitization sampling rate was set to 250 Hz.
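For readers who want to reproduce this kind of setup with current tools, the following is a minimal MNE-Python sketch of the recording parameters reported above (band-pass, sampling rate, mastoid reference). The file name, file format and mastoid channel labels are assumptions; the original data were not necessarily stored or processed this way.

    # Minimal MNE-Python sketch of the acquisition parameters described above.
    # The file name and the mastoid channel labels ("M1", "M2") are assumptions.
    import mne

    raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # continuous EEG + EOG recording
    raw.filter(l_freq=0.1, h_freq=50.0)                            # band-pass comparable to .1-50 Hz
    raw.resample(250)                                              # 250 Hz digitization rate
    raw.set_eeg_reference(ref_channels=["M1", "M2"])               # re-reference to averaged mastoids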

2.5. Data analysis

Trials with RTs longer than 2000 ms or shorter than 200 ms were excluded from the analyses. In addition, trials with incorrect responses were eliminated. RTs and errors were analyzed by means of repeated-measures ANOVAs with the factor Affect type (three levels: positive, negative and neutral) and post-hoc analyses with the Bonferroni correction (alpha < .05) where appropriate.

Average ERPs from −200 to 800 ms after stimulus onset were computed separately for all the experimental conditions. Data were baseline corrected using the entire 200 ms before picture onset. Muscle artifacts, drifts, and amplifier blockings were removed by visual inspection. Offline correction of eye movement artifacts was made using the method described by Semlitsch, Anderer, Schuster, and Presslich (1986). After the averaging of every stimulus category, the originally M2-referenced data were re-referenced to the average of the mastoids.
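A sketch of the epoching and averaging steps just described is given below, again using MNE-Python under stated assumptions: the event codes are hypothetical, `raw` is a continuous recording like the one in the previous sketch, and `rts`/`correct` are per-trial arrays aligned with the picture-onset events.

    # Epoching and averaging as described above: -200 to 800 ms epochs, 200 ms
    # pre-stimulus baseline, rejection of incorrect trials and of RTs outside
    # 200-2000 ms, and one average per affect type. Event codes are assumptions.
    import numpy as np
    import mne

    EVENT_ID = {"negative": 1, "positive": 2, "neutral": 3}

    def condition_averages(raw, rts, correct):
        """rts (in seconds) and correct are per-trial arrays aligned with the events."""
        events = mne.find_events(raw)
        epochs = mne.Epochs(raw, events, EVENT_ID, tmin=-0.2, tmax=0.8,
                            baseline=(-0.2, 0.0), preload=True)
        keep = (np.asarray(rts) >= 0.2) & (np.asarray(rts) <= 2.0) & np.asarray(correct, dtype=bool)
        epochs = epochs[np.flatnonzero(keep).tolist()]         # drop rejected trials
        return {cond: epochs[cond].average() for cond in EVENT_ID}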

Components explaining most of the ERP variance were detected and quantified through covariance-matrix-based temporal principal component analysis (tPCA). This method has been repeatedly recommended, since the exclusive use of traditional visual inspection of grand averages and voltage computation may lead to several types of misinterpretation (Chapman & McCrary, 1995; Coles, Gratton, Kramer, & Millar, 1986; Dien, 2010; Foti, Hajcak, & Dien, 2009). The main advantage of tPCA over traditional procedures based on visual inspection of recordings and on temporal windows of interest is that it presents each ERP component separately and with its clean shape, extracting and quantifying it free of the influences of adjacent or subjacent components. Indeed, the waveform recorded at a site on the head over a period of several hundreds of milliseconds represents a complex superposition of different overlapping electrical potentials. Such recordings can stymie visual inspection. In brief, tPCA computes the covariance between all ERP time points, which tends to be high between time points involved in the same component and low between those belonging to different components. The solution is therefore a set of independent factors made up of highly covarying time points, which ideally correspond to ERP components. The temporal factor score, the tPCA-derived parameter with which extracted temporal factors may be quantified, is linearly related to amplitude. In the present study, the number of components to select was based on the scree test (Cliff, 1987). Extracted components were submitted to promax rotation, since this rotation has been found to give the best overall results for temporal PCA (Dien, 2010; Dien, Beal, & Berg, 2005). Repeated-measures ANOVAs on temporal factor scores were carried out. Two within-subjects factors were included in the ANOVA: Affect type (three levels: positive, negative and neutral), and electrode (58 levels). The Greenhouse–Geisser epsilon correction was applied to adjust the degrees of freedom of the F-ratios where necessary.
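The core of this tPCA step can be sketched with plain NumPy: the covariance between time points is computed over all available waveforms, the leading eigenvectors play the role of temporal factor loadings, and factor scores are obtained by projection. Promax rotation and the scree-based choice of the number of factors are deliberately left out of this minimal sketch, and the input array layout is an assumption rather than the authors' implementation.

    # Covariance-matrix-based temporal PCA, in outline. `erps` is assumed to be an
    # array of shape (observations, time_points), one row per subject x condition x
    # electrode waveform. Rotation (promax) and the scree test are not reproduced here.
    import numpy as np

    def temporal_pca(erps, n_factors):
        data = erps - erps.mean(axis=0, keepdims=True)        # center every time point
        cov = np.cov(data, rowvar=False)                       # covariance between time points
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]                      # strongest components first
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])   # temporal factor loadings
        scores = data @ eigvecs[:, :n_factors]                 # temporal factor scores (amplitude-related)
        return eigvals, loadings, scores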

Signal overlapping may also occur in the space domain. At any given time point, several neural processes (and hence, several electrical signals) may occur, so the recording at any scalp location at that moment is the electrical balance of these different neural processes. While temporal PCA "separates" ERP components along time, spatial PCA (sPCA) separates ERP components along space, each spatial factor ideally reflecting one of the concurrent neural processes underlying each temporal factor. Additionally, sPCA provides a reliable division of the scalp into different recording regions, an advisable strategy prior to statistical contrasts, since ERP components frequently show a different behavior in some scalp areas than in others (e.g., they present different polarity or react differently to experimental manipulations). Basically, each region or spatial factor is composed of the scalp points where recordings tend to covary. As a result, the shape of the sPCA-configured regions is functionally based and scarcely resembles the shape of the geometrically configured regions defined by traditional procedures. Moreover, each spatial factor can be quantified through the spatial factor score, a single parameter that reflects the amplitude of the whole spatial factor. Therefore, sPCAs were carried out for those temporal factors that were sensitive to our experimental manipulations. Again, the number of factors to select was based on the scree tests, and extracted factors were submitted to promax rotation. Repeated-measures ANOVAs on the spatial factor scores with respect to Affect type (three levels: positive, negative and neutral) were carried out. Again, the Greenhouse–Geisser epsilon correction was applied to adjust the degrees of freedom of the F-ratios, and follow-up planned comparisons with the Bonferroni correction (alpha < .05) were made to determine the significance of pairwise contrasts where appropriate.
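The spatial PCA is, in essence, the same decomposition applied across electrodes rather than time points; a compact sketch follows, with the input layout again an assumption (the temporal factor scores of one temporal factor arranged as observations by channels).

    # Spatial PCA in outline: channels whose recordings covary are grouped into spatial
    # factors, and the spatial factor score summarizes the amplitude of each region.
    # `scores_by_channel` is assumed to have shape (observations, n_channels), e.g. the
    # temporal factor scores of one temporal factor for every subject x condition.
    import numpy as np

    def spatial_pca(scores_by_channel, n_factors):
        data = scores_by_channel - scores_by_channel.mean(axis=0, keepdims=True)
        cov = np.cov(data, rowvar=False)                       # covariance between electrodes
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1][:n_factors]
        loadings = eigvecs[:, order] * np.sqrt(eigvals[order]) # spatial factor loadings (scalp regions)
        spatial_scores = data @ eigvecs[:, order]              # one score per spatial factor
        return loadings, spatial_scores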

3. Results

3.1. Behavioral data

Participants were faster when identifying the graphemes in names corresponding to neutral pictures (mean RT = 1181 ms) than in those corresponding to negative (1241 ms) or positive (1208 ms) pictures. The overall ANOVA showed that this difference was significant (F(2,66) = 8.6; p < .005). The post-hoc analyses revealed that whereas there were no differences between RTs to positive and negative names, they both differed from RTs to neutral names.

The mean number of errors was 2.6 for the identification of graphemes in neutral names, 2.7 in positive names and 3.4 in negative names. These differences were significant in the overall ANOVA (F(2,66) = 3.3; p < .05). However, only the difference between negative and neutral names was marginally significant (p = .09) according to post-hoc analyses. Table 2 shows the mean and standard deviation of RTs and errors for every stimulus type.

Table 2
Mean and standard deviation values (in parentheses) corresponding to behavioral and electrophysiological data.

              Negative       Positive       Neutral
Behavior
  RT (ms)     1241 (314)     1208 (320)     1181 (319)
  Errors      3.4 (1.8)      2.7 (1.6)      2.6 (1.5)

RT: reaction times.

3.2. Electrophysiological data

A selection of the grand averages for all stimulus types is shown in Fig. 2a. These grand averages correspond to those scalp areas where experimental effects (described later) were most evident. As a consequence of the application of the tPCA, six components were extracted from the ERPs. The factor loadings are represented in Fig. 3. Repeated-measures ANOVAs were carried out on temporal factor scores for Affect type and Electrode. Only TF2 and TF4 were found to be sensitive to emotion. The effect of Affect type alone was significant in both TF2 (F(2,66) = 7.34; p < .005) and TF4 (F(2,66) = 5.02; p < .05). The interaction between Affect type and Electrode was also significant in TF2 (F(114,3762) = 4; p < .05). Hereafter, and to make the results easier to understand, these components will be labeled P400 and P100, respectively, due to their latency and polarity.

Fig. 2. Grand averaged ERPs elicited by (a) grapheme monitoring in negative, positive and neutral picture names, and (b) passive viewing of negative, positive and neutral images, at a selected sample of representative electrodes (F3, F4, P3, P4). Scales are represented at the F3 electrode.

Fig. 3. Grapheme monitoring tPCA: factor loadings after promax rotation. Temporal factors 4 (P100) and 2 (P400) are drawn in black.

The sPCA subsequently applied to temporal factor scores extracted four spatial factors for P400 and two spatial factors for P100. Repeated-measures ANOVAs on P400 and P100 spatial factor scores (directly related to amplitudes, as previously indicated) were carried out for the factor Affect type. For P400, results reached significance in three out of the four spatial factors. In the parieto-occipital spatial factor (F(2,66) = 5.12; p < .05), monitoring graphemes in the names of negative pictures elicited higher amplitudes than the letter search in the names of neutral images. Also, in the left central parietal (F(2,66) = 13.43; p < .0001) and the right central (F(2,66) = 5.46; p < .05) spatial factors, monitoring graphemes in the names of both positive and negative images elicited enhanced amplitudes as compared to the grapheme monitoring in the names of neutral pictures. For P100, analyses showed that in the fronto-central spatial factor (F(2,66) = 4.15; p < .005) searching for letters in positive picture names elicited higher amplitudes than monitoring graphemes in neutral picture names. Significant effects were also found in a posterior spatial factor (F(2,66) = 3.97; p < .05). However, post-hoc analyses revealed that even though grapheme monitoring in the names of positive pictures elicited larger amplitudes than searching letters in the names of negative and neutral pictures, the comparison only reached a statistical trend (p = .07 and p = .05, respectively). Table 3 summarizes the results of these analyses, and topographical maps corresponding to these effects are shown in Fig. 4a.

The time course and the polarity of the P400 component resemble to some extent those of the late parietal positivities (LPC) reported in affective research with pictorial stimuli in a variety of tasks including affective evaluation (Schupp, Junghöfer, Weike, & Hamm, 2003; Schupp et al., 2004), categorization (De Cesari & Codispoti, 2006), passive viewing (Hajcak & Nieuwenhuis, 2006; Pastor et al., 2008) or indirect tasks (Carretié, Hinojosa, Albert, & Mercado, 2006). Although the modulation of the behavioral measures by the emotional content suggests that the participants were performing the grapheme monitoring task, so that the P100 and the P400 effects could be related to word production processes, an additional control experiment was conducted in order to rule out that these effects were due to the processing of the emotional pictures per se. In this experiment participants passively viewed the same set of images presented during the main study, so no word production operations were required. By finding a different pattern of ERP modulations, the results of the main experiment could be unequivocally related to word production stages involved in grapheme monitoring.

4. Control experiment: passive viewing task

4.1. Participants

Eighteen subjects (13 females), ranging in age from 18 to 26 (M = 20), participated in this study as volunteers. All had normal or corrected-to-normal vision, and all were right-handed according to the Edinburgh Handedness Inventory (Oldfield, 1971; lateralization quotient 86–100%, mean 98%). They gave their informed consent to participate in the study.

4.2. Stimuli

The stimuli were the same set of 18 positive, 18 negative and 18 neutral pictures used in the grapheme monitoring experiment. Again, every picture was presented twice during the recording session. Participants saw the images on the computer before the practice sequence, as in the letter searching experiment.

4.3. Procedure

The stimuli were presented following the criteria described in the Procedure section of the grapheme monitoring experiment. The only difference was that the participants were asked to focus on the screen and simply watch all of the pictures as they were displayed.

4.4. Data acquisition

Electroencephalographic recording procedures were exactly the same as in the letter searching experiment, including the use of the same electrodes and parameters.

Table 3
Results of the statistical contrasts on the P400 and P100 spatial factors (Affect type).

Temporal factor   Spatial factor          ANOVA (df = 2,33)
TF2 (P400)        Parieto-occipital       F = 5.120, p < .05
                  Anterior                F = .150, n.s.
                  Left central parietal   F = 13.430, p < .001
                  Right central           F = 5.462, p < .01
TF4 (P100)        Posterior               F = 4.153, p < .05
                  Right temporal          F = 3.966, p < .05

TF: temporal factor; df: degrees of freedom; n.s.: non-significant.


4.5. Data analysis

Data were analyzed in the same way as described in Section 2.5 of the grapheme monitoring experiment.

4.6. Results

Fig. 2b shows a selection of grand averages corresponding to those scalp areas where experimental effects were most prominent. The application of the tPCA showed four components extracted from the ERPs. Fig. 5 represents the factor loadings after the promax rotation. Repeated-measures ANOVAs on temporal factor scores (Affect type and Electrode) showed that only TF4 was sensitive to emotion. Significant results were found for Affect type (F(2,34) = 4.34; p < .05). This effect will hereafter be labeled P500 because of its latency and polarity.

Fig. 5. Passive viewing tPCA: factor loadings after promax rotation. Temporal factor 4 (P500) is drawn in black.

The sPCA subsequently applied to temporal factor scores extracted four spatial factors for the P500. Repeated-measures ANOVAs on these spatial factor scores were carried out for the factor Affect type. The results indicated that differences reached significance for a posterior factor (F(2,34) = 5.3; p < .05). Post-hoc analyses showed that viewing positive images elicited higher amplitudes than looking at neutral pictures. Differences were also evident at a right temporal factor (F(2,34) = 6.5; p < .05). In this case, positive pictures were associated with enhanced amplitudes as compared to both negative and neutral pictures.¹ Table 4 summarizes the results of these analyses, and topographical maps corresponding to these effects are shown in Fig. 4b.

Fig. 4. Factor score difference maps of (a) the P400 (TF2) and the P100 (TF4) components in the grapheme monitoring task, and (b) the P500 (TF4) component in the passive viewing task. Note that individual scales have been used. TF: temporal factor; SF: spatial factor; NEG: negative; NEU: neutral; POS: positive.

Table 4
Results of the ANOVAs ("Affect type") on all P500 extracted spatial factors.

Temporal factor   Spatial factor    Affect type (df = 2,17)
TF4 (P500)        Posterior         F = 5.296, p < .05
                  Central           F = 2.691, n.s.
                  Frontal           F = 1.089, n.s.
                  Right temporal    F = 6.455, p < .01

TF: temporal factor; df: degrees of freedom; n.s.: non-significant.

¹ Although late ERP amplitude effects have typically been found to be higher for both positive and negative as compared to neutral stimuli in affective research, the finding of specific effects for positive stimuli is not rare in the previous literature. In fact, several studies have reported results similar to those found here (Delplanque, Lavoie, Hot, Silvert, & Sequeira, 2004; Delplanque et al., 2006; Herbert, Kissler, Junghofer, Peyk, & Rockstroh, 2006; Herbert et al., 2008; Kissler et al., 2009; Schapkin, Gusev, & Kuhl, 2000). This 'positivity bias' seems more likely to occur in tasks that do not promote deep encoding strategies, such as the one used in this experiment (see Herbert et al., 2006, 2008 for a detailed discussion on this issue).

4.7. Comparison between grapheme monitoring and passive viewing tasks

The results of the two experiments clearly show a different pattern of ERP effects associated with monitoring graphemes in the names of emotional pictures as compared to passively viewing the same emotional pictures. Whereas the former task modulated the amplitude of the P100 and P400 components, the latter task influenced the amplitude of a P500 component. In the particular case of the late effects, a number of important differences suggested that the P400 effect found in the grapheme monitoring task and the P500 component reported in the passive viewing of emotional pictures were not the same effect. First, there was a 100 ms latency difference between both effects.

Second, the amplitudes of the P400 and the P500 were differently modulated by the task. Monitoring graphemes in both negative and positive picture names elicited higher P400 amplitudes than searching letters in neutral image names. Contrary to this finding, only the passive viewing of positive pictures elicited enhanced P500 amplitudes relative to viewing neutral images. Moreover, looking at positive pictures elicited higher amplitudes than viewing negative images at right temporal scalp locations. Task differences were further explored by means of an ANOVA on the temporal factor scores of the P400 component reported in the grapheme monitoring task and the P500 effects found in the passive viewing task. The within-subjects factors Affect type (three levels) and Electrode (58 levels) and the between-subjects factor Task (two levels: grapheme monitoring and passive viewing) were included in the analyses. The significant effects found in the Task by Electrode (F(57,2850) = 5.2; p < .005), Task by Affect type (F(2,100) = 3.3; p < .05) and Task by Affect type by Electrode (F(114,5700) = 2.2; p < .05) interactions corroborated that the P400 effects found in the grapheme monitoring task were not similar to the P500 effects reported in the passive viewing experiment.

Finally, topographical analyses indicated that P400 effects extended from parieto-occipital to bilateral-central scalp regions. By contrast, P500 effects were confined to posterior electrodes. To further assess whether these components were distinguishable with regard to their scalp distributions, overall amplitude differences were eliminated by normalization with the vector method (profile analyses; McCarthy & Wood, 1985). This method involves dividing the voltage at each electrode by the vector length across all electrodes within each condition in the two tasks. An ANOVA was carried out on these scaled data with the within-subjects factors of Affect type (three levels) and Electrode (58 levels) and the between-subjects factor of Task (two levels). Significant effects in any of the interactions involving Task and Electrode in the ANOVA of these data indicate that there are topographical differences independent of overall ERP activity. The results of these analyses confirmed the differences in the scalp distributions of the P400 and P500 components, since the interactions of Task by Electrode (F(57,2850) = 4.5; p < .05) and Task by Affect type by Electrode (F(114,5700) = 5.12; p < .05) reached significance.
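The vector scaling referred to above is straightforward to express; a minimal sketch, assuming an array of condition-by-electrode amplitudes, is:

    # Vector scaling in the spirit of McCarthy & Wood (1985): within each condition
    # (and task), divide the value at every electrode by the vector length, i.e. the
    # square root of the summed squared values across all electrodes for that condition.
    # `amplitudes` is assumed to have shape (n_conditions, n_electrodes).
    import numpy as np

    def vector_scale(amplitudes):
        norms = np.sqrt((amplitudes ** 2).sum(axis=1, keepdims=True))
        return amplitudes / norms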

5. Discussion

Word production is a complex multistage process linking conceptual representations, lexical entries, phonological forms and articulation (Levelt, 2001; Levelt et al., 1999). The time course of the different stages has been well established in several previous studies (Indefrey & Levelt, 2004; Levelt et al., 1998). However, the impact of affective content on word production remained to be specified. The current study attempted to elucidate this question in part by investigating the influence that emotion exerts on a task that emphasizes the retrieval of the segmental content that occurs during phonological encoding. The finding of higher amplitudes in both an early and a late positive component, as well as slowed reaction times for emotional as compared to neutral stimuli, suggests that affect modulates word production at several processing stages.

It is generally assumed that reaction times are sensitive to participants' decision-making processes and task-related strategies (Kounios & Holcomb, 1992; Zhang, Lawson, Guo, & Jiang, 2006). In the current study, identifying graphemes in the names of positive and negative pictures was associated with slower reaction times than in neutral picture names. Although, to the best of our knowledge, there are no previous data with picture naming tasks, some studies have reported delayed reaction times for the processing of emotional words (especially for negative ones) as compared to neutral words with several indirect tasks including Stroop and lexical decision (Carretié et al., 2008; Estes & Adelman, 2008; McKay et al., 2004; Pratto & John, 1991; Wentura, Rothermund, & Bak, 2000). Such slowed responses were thought to indicate that attention to emotional information diverts processing resources away from task performance (Estes & Adelman, 2008). Our results suggest that these effects might also generalize to the production of emotional words. Thus, it seems likely that emotional content disrupts access to the phonological properties of words during picture naming due to the engagement of attention in the processing of affective information.

The grapheme monitoring task has been thought to trigger phonological encoding processes, that is, the retrieval of word form properties, or even the transition between phonological encoding and articulation (Hauk et al., 2001; Wheeldon & Levelt, 1995). These processes were proposed to take place between 250 and 450 ms (Indefrey & Levelt, 2004), a suggestion that has been corroborated by the findings of several ERP and MEG studies (Hauk et al., 2001; Laganaro, Morand, & Schnider, 2009; Laganaro, Morand, Schwitter, et al., 2009; Schiller, Bles, & Jansma, 2003; Vihla et al., 2006). In agreement with the results of the analysis of reaction times, monitoring graphemes in names corresponding to positive and negative pictures also elicited an enhanced positivity, as compared to the grapheme searching in neutral pictures, around 400 ms² after stimulus onset at bilateral-central and parieto-occipital scalp locations. Similar amplitude enhancements of several late latency components by emotional content have been reported in word comprehension research. They have been taken to index an automatic withdrawal of resources from the ongoing cognitive task due to a privileged processing of affective information (Carretié et al., 2008; Keil, 2006; Kissler et al., 2009). Thus, our data might be interpreted as suggesting that the presence of emotional content attracts attention, prompting the allocation of further processing resources in a way that interferes with the retrieval of word properties during phonological encoding, which is reflected in the amplitude enhancement of the positive component.

² Although within the time window that has been proposed for phonological encoding to occur, it should be noted that the latency of the P400 found in the present study is slightly delayed in comparison with the latency of phonological encoding-related positivities reported in other studies. This discrepancy might be partly due to word frequency effects. Phonological encoding has been proved to be slowed in low frequency words by up to 60 ms (Jescheniak & Levelt, 1994; Levelt et al., 1998; Indefrey & Levelt, 2004). It should be noted that relatively low frequency words were used in this study (36 per 2 million) as compared to those used in Hauk and collaborators' (200 per million) or Levelt and collaborators' (100 per million) studies.

In terms of the lexical access model proposed by Levelt et al. (1999) and Levelt (2001), phonological encoding can be divided into two planning stages. During 'segmental spell out' the individual phonemes of a word and their ordering are retrieved. The number of syllables and the location of the lexical stress form part of the information being retrieved during 'metrical spell out'. Segmental and metrical information is further combined during segment-to-frame association. These retrieved segments are computed incrementally and syllabified according to universal and language-specific rules during 'syllabification'. The temporal course of some of these processes has been studied by Schiller, Bles, and Jansma (2003). In particular, these authors explored the time course of metrical encoding (by indicating whether the picture name had an initial or final stress) and syllabification (by pressing a key when the first postvocalic consonant belonged to the first syllable and withholding the response if the consonant belonged to the second syllable). It was found that both processes equally modulated ERP activity around 375 ms. Although the retrieval of the segmental content is a central process in grapheme monitoring tasks, the results of the study by Schiller et al. suggest that at least some of the operations involved in phonological encoding occur approximately at the same time. Therefore, we can conclude that emotional content impacts phonological encoding, without further specifying which of the operations might be involved. Clearly, this question should be the topic of future investigations.

The existence of subtle differences in the processing of affective information between negative and positive picture names in the P400 component deserves some consideration. Letter monitoring in positive and negative picture names was associated with amplitude enhancement at central electrodes as compared to neutral picture names. However, this effect extended to parieto-occipital electrodes in the particular case of negative picture names. This wider topographical distribution of the activity elicited by the grapheme search in negative picture names might be tentatively related to arousal effects. In the current study, even though positive and negative pictures were matched in arousal values, subjects rated negative picture names as being more arousing than positive picture names. Research on pictorial information processing and on word comprehension has shown that several ERP components are especially sensitive to the arousal dimension of the emotional experience (Hinojosa, Carretié, Méndez-Bértolo, et al., 2009; Kissler, Assadollahi, & Herbert, 2006; Schupp et al., 2004). Moreover, long latency components seem to be particularly influenced by arousal, showing larger amplitudes as the level of arousal increases (Olofsson, Nordin, Sequeira, & Polich, 2008). Thus, we propose that ERP activity related to phonological encoding during word production might also be particularly sensitive to the arousal aspects of the affective content of the stimuli.


The amplitude enhancement of a positive component around 100 ms that was associated with the letter search in names corresponding to positive pictures, as compared to negative and neutral stimuli, was an unexpected finding of the present study. Due to its latency, it seems unlikely that this effect is related to phonological encoding processes, which have been shown to occur later in time (Hauk et al., 2001; Indefrey & Levelt, 2004). The P1 component has been related to the mobilization of automatic attentional resources (see Hopfinger & Mangun, 2001 for a review). Moreover, several studies have found larger amplitudes of this wave for emotional as compared to neutral stimuli (Bernat, Bunce, & Shevrin, 2001; Carretié, Hinojosa, Martín-Loeches, & Mercado, 2004; Carretié et al., 2009; Scott et al., 2009). However, in the present study the modulation of the P1 found in the grapheme monitoring task cannot be attributed to affective effects triggered by the emotional images per se, since the passive viewing of the same pictures in the control experiment did not modulate early ERP activity.

To our knowledge, P1 effects have not been previously reported in ERP research on word production, no matter whether the tasks used imposed specific demands on phonological encoding or on other processes among those postulated to be involved in language production. However, activations in the right occipital cortex within the first 150 ms have been found in a MEG study with a covert picture naming task (Levelt et al., 1998). These effects were related to the access of the lexical concept in that study. Also, the timing of our effects falls within the time course estimated by Indefrey and Levelt (2004) in their meta-analysis for conceptual preparation to take place. According to the prevalent theoretical model of language production, the speaker performs a chain of specific operations before a word is produced. This stage model assumes that those processes involved in language production would occur even if the experimental task places stronger demands on some of them. Therefore, even though the task used in the present study was originally designed to study some of the aspects involved in phonological encoding, the time course of our early effect suggests that it could reflect conceptual preparation processes.

Another issue concerns the specificity of the P100 amplitude enhancement in relation to the monitoring of graphemes in positive picture names. In language comprehension research with ERPs, early amplitude enhancements for positive words have been interpreted as a manifestation of a 'positive offset' (Carretié et al., 2008; Herbert et al., 2006; Kanske & Kotz, 2007). It has been argued that the positive motivational approach system is activated more strongly than the negative motivational withdrawal system by low levels of arousal input (Cacioppo & Gardner, 1999). The results of the present study suggest that this claim might be extended to early effects in language production, since the names of the positive images were rated by the participants as being less arousing than those corresponding to negative pictures. Therefore, we might tentatively interpret the P100 amplitude enhancement as reflecting the operation of the positive motivational approach system during the activation of the lexical concepts of the names corresponding to positive pictures. The exact interpretation of this early effect, however, remains an open question for future research.

The present study constitutes a general first attempt to exploreaffective and language production interactions. Therefore, it isimportant to note several limitations. First, the possible influenceof the position of the monitored grapheme across emotional cat-egories could not be determined. The few number of stimuli that

would be involved in such a comparison would not allow to estab-lish a clear pattern of results. Second, our methods did not allow toexamine whether the age of acquisition of the picture names hasa different impact on the monitoring of graphemes in the namesof positive, negative and neutral pictures. This also holds for the

sychol

ibStscDqfcaw

aotraoa

A

ipI

R

A

A

A

A

B

B

B

C

C

C

C

C

C

C

C

C

CC

J.A. Hinojosa et al. / Neurop

nteraction between emotion and some other variables that haveeen shown to modulate language production in several studies.ome of these parameters include the phonological complexity ofhe picture names (Goldrick & Larson, 2008), the frequency of theyllables (Laganaro & Alario, 2006), or the imageability of the con-ept (Binder, Medler, Desai, Conant, & Liebenthal, 2005; Graves,esai, Humphries, Seidenberg, & Binder, in press). These importantuestions can be addressed in future work. An interesting avenueor future research would be also to explore whether emotionalontent influences other processing stages (e.g., lexical selection,rticulation) by using different tasks such as go-no go or picture-ord interference paradigms.

In conclusion, previous research has shown that emotion interacts with language comprehension at several levels and stages of processing. However, the impact of affective information on language production had remained unexplored. The results of the present study revealed that emotional content was able to influence the retrieval of word form properties that occurs during the phonological encoding stage of word production, and possibly also during conceptual preparation.

Acknowledgement

The authors would like to thank Arturo Míguez for his help in stimulus preparation and data collection. This work was supported by grant PSI2009-08607 from the Ministerio de Ciencia e Innovación of Spain.

References

Abdel Rahman, R., & Sommer, W. (2003). Does phonological encoding in speech production always follow the retrieval of semantic knowledge? Electrophysiological evidence for parallel processing. Cognitive Brain Research, 16, 372–382.

Abdel Rahman, R., van Turennout, M., & Levelt, W. J. M. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 5, 850–860.

Alameda, J. R., & Cuetos, F. (1995). Diccionario de frecuencias de las unidades lingüísticas del castellano. Oviedo: Universidad de Oviedo.

Ardila, A. (1991). Errors resembling semantic paralexias in Spanish-speaking aphasics. Brain and Language, 41, 437–445.

Bernat, E., Bunce, S., & Shevrin, H. (2001). Event-related brain potentials differentiate positive and negative mood adjectives during both supraliminal and subliminal visual processing. International Journal of Psychophysiology, 42, 11–34.

Binder, J. R., Medler, D. A., Desai, R., Conant, L. L., & Liebenthal, E. (2005). Some neurophysiological constraints on models of word naming. Neuroimage, 27, 677–693.

Bonin, P., Chalard, M., Méot, A., & Barry, C. (2006). Are age-of-acquisition effects on object naming due simply to differences in object recognition? Comments on Levelt (2002). Memory & Cognition, 34, 1172–1182.

Cacioppo, J. T., & Gardner, W. L. (1999). Emotion. Annual Review of Psychology, 50, 191–214.

Caramazza, A., Chialant, D., Capasso, R., & Miceli, G. (2000). Separable processing of consonants and vowels. Nature, 403, 428–430.

Carreiras, M., Vergara, M., & Perea, M. (2009). ERP correlates of transposed-letter priming effects: The role of vowels versus consonants. Psychophysiology, 46, 34–42.

Carretié, L., Hinojosa, J. A., Albert, J., López-Martín, S., de la Gándara, B. S., Igoa, J. M., et al. (2008). Modulation of ongoing cognitive processes by emotionally intense words. Psychophysiology, 45, 188–196.

Carretié, L., Hinojosa, J. A., Albert, J., & Mercado, F. (2006). Neural response to sustained affective visual stimulation using an indirect task. Experimental Brain Research, 174, 630–637.

Carretié, L., Hinojosa, J. A., López-Martín, S., Albert, J., Tapia, M., & Pozo, M. A. (2009). Danger is worse when it moves: Neural and behavioral indices of enhanced attentional capture by dynamic threatening stimuli. Neuropsychologia, 47, 364–369.

Carretié, L., Hinojosa, J. A., Martín-Loeches, M., Mercado, F., & Tapia, M. (2004). Automatic attention to emotional stimuli: Neural correlates. Human Brain Mapping, 22, 290–299.

Catling, J. C., & Johnston, R. A. (2006). Effects of age of acquisition on an object name verification task. British Journal of Psychology, 97, 1–18.

Chapman, R. M., & McCrary, J. W. (1995). EP component identification and measurement by principal component analysis. Brain and Cognition, 27, 288–310.

Cliff, N. (1987). Analyzing multivariate data. New York: Harcourt Brace Jovanovich.

Codispoti, M., Ferrari, V., & Bradley, M. M. (2007). Repetition and event-related potentials: Distinguishing early and late processes in affective picture perception. Journal of Cognitive Neuroscience, 19, 577–586.

Coles, M. G. H., Gratton, G., Kramer, A. F., & Miller, G. (1986). In M. G. H. Coles, E. Donchin, & S. W. Porges (Eds.), Psychophysiology: Systems, processes and applications (pp. 183–221). Amsterdam: Elsevier.

De Cesarei, A., & Codispoti, M. (2006). Effects of stimulus size on affective modulation. Psychophysiology, 43, 207–215.

Dell, G. S., Schwartz, M. F., Martin, N., Saffran, E. M., & Gagnon, D. A. (1997). Lexical access in aphasic and nonaphasic speakers. Psychological Review, 104, 801–838.

Delplanque, S., Lavoie, M., Hot, P., Silvert, L., & Sequeira, H. (2004). Modulation of cognitive processing by emotional valence studied through event-related potentials in humans. Neuroscience Letters, 356, 1–4.

Delplanque, S., Silvert, L., Hot, P., Rigoulot, S., & Sequeira, H. (2006). Arousal and valence effects on event-related P3a and P3b during emotional categorization. International Journal of Psychophysiology, 60, 315–322.

Dent, K., Johnston, R. A., & Humphreys, G. W. (2008). Age of acquisition and word frequency effects in picture naming: A dual-task investigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 282–301.

Dien, J. (2010). Evaluating two-step PCA of ERP data with Geomin, Infomax, Oblimin, Promax, and Varimax rotations. Psychophysiology, 47, 170–183.

Dien, J., Beal, D. J., & Berg, P. (2005). Optimizing principal components analysis of event-related potentials: Matrix type, factor loading weighting, extraction, and rotations. Clinical Neurophysiology, 116, 1808–1825.

Dillon, D. G., Cooper, J. J., Grent-'t-Jong, T., Woldorff, M. G., & LaBar, K. S. (2006). Dissociation of event-related potentials indexing arousal and semantic cohesion during emotional word encoding. Brain and Cognition, 62, 43–57.

Doyle, M. C., Rugg, M. D., & Wells, T. (1996). A comparison of the electrophysiological effects of normal and repetition priming. Psychophysiology, 33, 132–147.

Estes, Z., & Adelman, J. S. (2008). Automatic vigilance for negative words in lexical decision and naming: Comment on Larsen, Mercer, and Balota (2006). Emotion, 8, 441–444.

Eulitz, C., Hauk, O., & Cohen, R. (2000). Electroencephalographic activity over temporal brain areas during phonological encoding in picture naming. Clinical Neurophysiology, 111, 2088–2097.

Foti, D., Hajcak, G., & Dien, J. (2009). Differentiating neural responses to emotional pictures: Evidence from temporal-spatial PCA. Psychophysiology, 46, 521–530.

Goldrick, M., & Larson, M. (2008). Phonotactic probability influences speech production. Cognition, 107, 1155–1164.

Graves, W. W., Grabowski, T. J., Mehta, S., & Gordon, J. K. (2007). A neural signature of phonological access: Distinguishing the effects of word frequency from familiarity and length in overt picture naming. Journal of Cognitive Neuroscience, 19, 617–631.

Graves, W. W., Desai, R., Humphries, C., Seidenberg, M. S., & Binder, J. R. (in press). Neural systems for reading aloud: A multiparametric approach. Cerebral Cortex.

Hajcak, G., & Nieuwenhuis, S. (2006). Reappraisal modulates the electrocortical response to unpleasant pictures. Cognitive, Affective, & Behavioral Neuroscience, 6, 291–297.

Hauk, O., Rockstroh, B., & Eulitz, C. (2001). Grapheme monitoring in picture naming: An electrophysiological study of language production. Brain Topography, 14, 3–13.

Herbert, C., Junghofer, M., & Kissler, J. (2008). Event-related potentials to emotional adjectives during reading. Psychophysiology, 45, 487–498.

Herbert, C., Kissler, J., Junghofer, M., Peyk, P., & Rockstroh, B. (2006). Processing of emotional adjectives: Evidence from startle EMG and ERPs. Psychophysiology, 43, 197–206.

Hinojosa, J. A., Carretié, L., Méndez-Bértolo, C., Míguez, A., & Pozo, M. A. (2009). Arousal contributions to affective priming. Emotion, 9, 164–171.

Hinojosa, J. A., Carretié, L., Valcárcel, M. A., Méndez-Bértolo, C., & Pozo, M. A. (2009). Electrophysiological differences in the processing of affective information in words and pictures. Cognitive, Affective, & Behavioral Neuroscience, 9, 173–189.

Hopfinger, J. B., & Mangun, G. R. (2001). Electrophysiological studies of reflexive attention. In C. L. Folk, & B. S. Gibson (Eds.), Attraction, distraction and action: Multiple perspectives on attentional capture (pp. 3–26). Amsterdam: Elsevier.

Howard, D., Nickels, L., Coltheart, M., & Cole-Virtue, J. (2006). Cumulative semantic inhibition in picture naming: Experimental and computational studies. Cognition, 100, 464–482.

Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92, 101–144.

Jescheniak, J. D., & Levelt, W. J. M. (1994). Word frequency effects in speech production: Retrieval of syntactic information and of phonological form. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 824–843.

Jescheniak, J. D., & Schriefers, H. (2001). Priming effects from phonologically related distracters in picture-word interference. Quarterly Journal of Experimental Psychology, 54A, 371–382.

Kanske, P., & Kotz, S. A. (2007). Concreteness in emotional words: ERP evidence from a hemifield study. Brain Research, 1148, 138–148.

Kavé, G., Samuel-Enoch, K., & Adiv, S. (2009). The association between age and the frequency of nouns selected for production. Psychology and Aging, 24, 17–27.

Keil, A. (2006). Macroscopic brain dynamics during verbal and pictorial processing of affective stimuli. Progress in Brain Research, 156, 217–232.

Kissler, J., Assadollahi, R., & Herbert, C. (2006). Emotional and semantic networks in visual word processing: Insights from ERP studies. Progress in Brain Research, 156, 147–183.

Kissler, J., Herbert, C., Peyk, P., & Junghofer, M. (2007). Buzzwords: Early cortical responses to emotional words during reading. Psychological Science, 18, 475–480.

Kissler, J., Herbert, C., Winkler, I., & Junghofer, M. (2009). Emotion and attention in visual word processing. Biological Psychology, 80, 75–83.

Kounios, J., & Holcomb, P. J. (1992). Structure and process in semantic memory: Evidence from event-related brain potentials and reaction times. Journal of Experimental Psychology: General, 121, 459–479.

Kuchinke, L., Jacobs, A., Grubich, C., Vo, M. L., Conrad, M., & Herrmann, M. (2005). Incidental effects of emotional valence in single word processing: An fMRI study. Neuroimage, 28, 1022–1032.

Laganaro, M., & Alario, F. (2006). On the locus of the syllable frequency effect in speech production. Journal of Memory and Language, 55, 178–196.

Laganaro, M., Morand, S., & Schnider, A. (2009). Time course of evoked potential changes in different forms of anomia in aphasia. Journal of Cognitive Neuroscience, 21, 1499–1510.

Laganaro, M., Morand, S., Schwitter, V., Zimmermann, C., Carmen, C., & Schnider, A. (2009). Electrophysiological correlates of different anomic patterns in comparison with normal word production. Cortex, 45, 697–707.

Lambon Ralph, M. A., Graham, K. S., Ellis, A. W., & Hodges, J. R. (1998). Naming in semantic dementia: What matters? Neuropsychologia, 36, 775–784.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2001). International affective picture system (IAPS): Instruction manual and affective ratings. Technical report A-5. Gainesville, FL: The Center for Research in Psychophysiology, University of Florida.

Larsen, R. J., Mercer, K. A., Balota, D. A., & Strube, M. J. (2008). Not all negative words slow down lexical decision and naming speed: Importance of word arousal. Emotion, 8, 445–452.

Levelt, W. J. M. (1993). Timing in speech production with special reference to word form encoding. Annals of the New York Academy of Sciences, 682, 283–295.

Levelt, W. J. M. (2001). Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences, 98, 13465–13471.

Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10, 553–567.

Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1–38.

McCarthy, G., & Wood, C. C. (1985). Scalp distributions of event-related potentials: An ambiguity associated with analysis of variance models. Electroencephalography & Clinical Neurophysiology, 62, 203–208.

McKay, D., Shafto, M., Taylor, J., Marian, D., Abrams, L., & Dyer, J. (2004). Relations between emotion, memory, and attention: Evidence from taboo Stroop, lexical decision, and immediate memory tasks. Memory & Cognition, 32, 474–488.

Meltzer, J. A., Postman-Caucheteux, W. A., McArdle, J. J., & Braun, A. R. (2009). Strategies for longitudinal neuroimaging studies of overt language production. Neuroimage, 15, 745–755.

Morrison, C. M., & Ellis, A. W. (2000). Real age of acquisition effects in word naming and lexical decision. British Journal of Psychology, 91, 167–180.

Morrison, C. M., Ellis, A. W., & Quinlan, P. T. (1992). Age of acquisition, not word frequency, affects object naming, not object recognition. Memory & Cognition, 20, 704–714.

Naumann, E., Maier, S., Diedrich, O., Becker, G., & Laufer, M. E. (1997). Structural, semantic, and emotion-focused processing of neutral and negative nouns: Event-related potential correlates. Journal of Psychophysiology, 11, 234–256.

Okada, K., Smith, K. R., Humphries, C., & Hickok, G. (2003). Word length modulates neural activity in auditory cortex during covert object naming. Neuroreport, 14, 2323–2326.

Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113.

Olofsson, J. K., Nordin, S., Sequeira, H., & Polich, J. (2008). Affective picture processing: An integrative review of ERP findings. Biological Psychology, 77, 247–265.

Olofsson, J. K., & Polich, J. (2007). Affective visual event-related potentials: Arousal, repetition, and time-on-task. Biological Psychology, 75, 101–108.

Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105, 457–465.

Pastor, M. C., Bradley, M. M., Löw, A., Versace, F., Moltó, J., & Lang, P. J. (2008). Affective picture perception: Emotion, context, and the late positive potential. Brain Research, 1189, 145–151.

Pourtois, G., Dan, E. S., Grandjean, D., Sander, D., & Vuilleumier, P. (2005). Enhanced extrastriate visual response to bandpass spatial frequency filtered fearful faces: Time course and topographic evoked-potentials mapping. Human Brain Mapping, 26, 65–79.

Pratto, F., & John, O. P. (1991). Automatic vigilance: The attention-grabbing power of negative social information. Journal of Personality and Social Psychology, 61, 380–391.

Rodriguez-Fornells, A., Schmitt, B. M., Kutas, M., & Münte, T. F. (2002). Electrophysiological estimates of the time course of semantic and phonological encoding during listening and naming. Neuropsychologia, 40, 778–787.

Roelofs, A. (2008). Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning. Journal of Experimental Psychology: Human Perception and Performance, 34, 1580–1598.

Rozenkrants, B., Olofsson, J. K., & Polich, J. (2008). Affective visual event-related potentials: Arousal, valence, and repetition effects for normal and distorted pictures. International Journal of Psychophysiology, 67, 114–123.

Rugg, M. D., Mark, R. E., Walla, P., Schloerscheidt, A. M., Birch, C. S., & Allan, K. (1998). Dissociation of the neural correlates of implicit and explicit memory. Nature, 392, 595–598.

Sahin, N. T., Pinker, S., Cash, S. S., Schomer, D., & Halgren, E. (2009). Sequential processing of lexical, grammatical, and phonological information within Broca's area. Science, 326, 445–449.

Salmelin, R., Hari, R., Lounasmaa, O. V., & Sams, M. (1994). Dynamics of brain activation during picture naming. Nature, 368, 463–465.

Schacht, A., & Sommer, W. (2009a). Emotions in word and face processing: Early and late cortical responses. Brain and Cognition, 69, 538–550.

Schacht, A., & Sommer, W. (2009b). Time course and task dependence of emotion effects in word processing. Cognitive, Affective, & Behavioral Neuroscience, 9, 28–43.

Schapkin, S. A., Gusev, A. N., & Kuhl, J. (2000). Categorization of unilaterally presented emotional words: An ERP analysis. Acta Neurobiologiae Experimentalis, 60, 17–28.

Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related potential study. Cognitive Brain Research, 17, 819–831.

Schuhmann, T., Schiller, N. O., Goebel, R., & Sack, A. T. (2009). The temporal characteristics of functional activation in Broca's area during overt picture naming. Cortex, 45, 1111–1116.

Schupp, H. T., Junghofer, M., Weike, A. I., & Hamm, A. O. (2003). Emotional facilitation of sensory processing in the visual cortex. Psychological Science, 14, 7–13.

Schupp, H. T., Junghofer, M., Weike, A. I., & Hamm, A. O. (2004). The selective processing of briefly presented pictures: An ERP analysis. Psychophysiology, 41, 441–449.

Scott, G. C., O'Donnell, P. J., Leuthold, H., & Sereno, S. C. (2009). Early emotion word processing: Evidence from event-related potentials. Biological Psychology, 80, 95–104.

Semlitsch, H. V., Anderer, P., Schuster, P., & Presslich, O. (1986). A solution for reliable and valid reduction of ocular artifacts applied to the P300 ERP. Psychophysiology, 23, 695–703.

Smith, N. K., Cacioppo, J. T., Larsen, J. T., & Chartrand, T. L. (2003). May I have your attention please: Electrophysiological responses to positive and negative stimuli. Neuropsychologia, 41, 171–183.

Smith, B. M., Münte, T., & Kutas, M. (2000). Electrophysiological estimates of the time course of semantic and phonological encoding during implicit picture naming. Psychophysiology, 37, 473–484.

Smith, B. M., Schiltz, K., Zaake, W., Kutas, M., & Münte, T. F. (2001). An electrophysiological analysis of the time course of conceptual and syntactic encoding during tacit picture naming. Journal of Cognitive Neuroscience, 13, 510–522.

Van Turennout, M., Hagoort, P., & Brown, C. M. (1997). Electrophysiological evidence on the time course of semantic and phonological processes in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 787–806.

Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 ms. Science, 280, 572–574.

Vihla, M., Laine, M., & Salmelin, R. (2006). Cortical dynamics of visual/semantic vs. phonological analysis in picture confrontation. Neuroimage, 33, 732–738.

Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia, 45, 174–194.

Wentura, D., Rothermund, K., & Bak, P. (2000). Automatic vigilance: The attention-grabbing power of approach- and avoidance-related social information. Journal of Personality and Social Psychology, 78, 1024–1037.

Wheeldon, L. R., & Levelt, W. J. M. (1995). Monitoring the time course of phonological encoding. Journal of Memory and Language, 34, 311–334.

Wilson, S. M., Lisette Isenberg, A., & Hickok, G. (2009). Neural correlates of word production stages delineated by parametric modulation of psycholinguistic variables. Human Brain Mapping, 30, 3596–3608.

Zhang, Q., & Damian, M. F. (2009). The time course of semantic and orthographic encoding in Chinese word production: An event-related potential study. Brain Research, 1273, 92–105.

Zhang, Q., Lawson, A., Guo, C., & Jiang, Y. (2006). Electrophysiological correlates of visual affective priming. Brain Research Bulletin, 71, 316–323.

Zubicaray, G. I., McMahon, K. L., Eastburn, M. M., & Wilson, S. J. (2002). Orthographic/phonological facilitation of naming responses in the picture-word task: An event-related fMRI study using overt vocal responding. Neuroimage, 16, 1084–1093.

