IEEE Journal of Selected Topics in Signal Processing

A Multimodal Interlocutor-Modulated Attentional BLSTM for Classifying Autism Subgroups During Clinical Interviews


Abstract

The heterogeneity in Autism Spectrum Disorder (ASD) remains a challenging and unsolved issue in current clinical practice. The behavioral differences between ASD subgroups are subtle and can be hard for experts to discern manually. Here, we propose a computational framework that models both the vocal behaviors and the body gestural movements of the interlocutors, with their intricate dependency captured through a learnable interlocutor-modulated (IM) attention mechanism, during dyadic clinical interviews of the Autism Diagnostic Observation Schedule (ADOS). Specifically, our multimodal network architecture includes two modality-specific networks, a speech-IM-aBLSTM and a motion-IM-aBLSTM, which are combined in a fusion network to perform the final differentiation among three ASD subgroups, i.e., Autistic Disorder (AD) vs. High-Functioning Autism (HFA) vs. Asperger Syndrome (AS). Our model uniquely introduces the IM attention mechanism to capture the non-linear behavioral dependency between interlocutors, which is essential for improved discriminability in classifying the three subgroups. We evaluate our framework on a large ADOS collection and obtain a 66.8% unweighted average recall (UAR), which is 14.3% better than previous work on the same dataset. Furthermore, based on the learned attention weights, we analyze the behavior descriptors most essential for differentiating each pair of subgroups. We further identify the most informative self-disclosure emotion topics within the ADOS interview sessions, finding that anger and fear are the interaction segments most useful for observing the subtle interactive behavior differences between these three subtypes of ASD.
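The core idea of the abstract's IM attention mechanism can be illustrated with a short sketch: frame-level attention weights over one speaker's behavioral features are modulated by a summary of the other speaker's features, and the attended embeddings from each modality network are fused before classification. This is a minimal, hypothetical illustration in numpy, not the paper's implementation; all names, dimensions, and the use of a mean-pooled interlocutor summary are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def im_attention_pool(target_feats, interlocutor_feats, W_t, W_i, v):
    """Interlocutor-modulated attention pooling (illustrative sketch).

    Each frame's attention score for the target speaker is conditioned on a
    summary of the interlocutor's behavior, so the pooled embedding can
    emphasize frames whose salience depends on the dyadic context.
    """
    ctx = interlocutor_feats.mean(axis=0)            # interlocutor summary (assumption)
    scores = np.tanh(target_feats @ W_t + ctx @ W_i) @ v
    alpha = softmax(scores)                          # attention weights over frames
    return alpha @ target_feats, alpha               # attention-weighted pooling

# Toy dimensions: 20 frames of 8-dim features per speaker (hypothetical).
T, D, H = 20, 8, 16
child = rng.standard_normal((T, D))       # e.g. child's speech features
interviewer = rng.standard_normal((T, D)) # e.g. interviewer's speech features
W_t = rng.standard_normal((D, H)) * 0.1
W_i = rng.standard_normal((D, H)) * 0.1
v = rng.standard_normal(H) * 0.1

# One pooled embedding per modality-specific stream, then late fusion.
speech_emb, alpha = im_attention_pool(child, interviewer, W_t, W_i, v)
motion_emb, _ = im_attention_pool(interviewer, child, W_t, W_i, v)
fused = np.concatenate([speech_emb, motion_emb])  # input to the fusion classifier

print(alpha.sum(), fused.shape)
```

In practice the pooled embeddings would come from BLSTM hidden states rather than raw features, and the fusion network would output the three subgroup posteriors; this sketch only shows how the interlocutor's behavior enters the attention computation.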
