
Implicit Processing of Visual Emotions Is Affected by Sound-Induced Affective States and Individual Affective Traits



Abstract

The ability to recognize emotions contained in facial expressions is affected by both affective traits and affective states, and varies widely between individuals. While affective traits are stable over time, affective states can be regulated more rapidly by environmental stimuli, such as music, that indirectly modulate the brain state. Here, we tested whether a relaxing or irritating sound environment affects implicit processing of facial expressions. Moreover, we investigated whether and how the individual traits of anxiety and emotional control interact with this process. Thirty-two healthy subjects performed an implicit emotion processing task (presented to subjects as a gender discrimination task) while the sound environment was defined by either a) a therapeutic music sequence (MusiCure), b) a noise sequence, or c) silence. Individual changes in mood were sampled before and after the task by a computerized questionnaire. Additionally, emotional control and trait anxiety were assessed in a separate session by paper-and-pencil questionnaires. Results showed a better mood after the MusiCure condition compared with the other experimental conditions, and faster responses to happy faces during MusiCure compared with angry faces during Noise. Moreover, individuals with higher trait anxiety performed the implicit emotion processing task faster during MusiCure than during Silence. These findings suggest that sound-induced affective states are associated with differential responses to angry and happy emotional faces at an implicit stage of processing, and that a relaxing sound environment facilitates implicit emotional processing in anxious individuals.
