Caroline Menezes, Clayton Franks, Donna Erickson, University of Toledo
This paper is part of a larger study examining cross-linguistic perception of sad and happy speech when the information is conveyed semantically (linguistic) or prosodically (affective). Here we examine American English and Japanese listeners' ability to perceive emotion in Japanese utterances. Native listeners are expected to perceive semantically expressed emotion better than non-native listeners because they have access to the semantic content. However, both native and non-native listeners can perceive that a speaker is sad or happy from the affective prosody alone. These results suggest that sadness and happiness are expressed in universally similar ways in the auditory modality. Gender differences are also observed cross-culturally.