Procedia Computer Science

Using Deep Autoencoders for In-vehicle Audio Anomaly Detection


Abstract

Current developments in self-driving cars have increased the interest in autonomous shared taxicabs. While most self-driving technologies focus on the outside environment, there is also a need to provide in-vehicle intelligence (e.g., detecting health and safety issues related to the car occupants). Set within an R&D project focused on in-vehicle cockpit intelligence, the research presented in this paper addresses an unsupervised Acoustic Anomaly Detection (AAD) task. Since no data exists for this domain, we first design an in-vehicle sound event data simulator that can realistically mix background audio (recorded during car driving trips) with normal (e.g., people talking, radio on) and abnormal (e.g., people arguing, coughing) event sounds, allowing the generation of three synthetic in-vehicle sound datasets. Then, we explore two main sound feature extraction methods (based on a combination of three audio features and mel frequency energy coefficients) and propose a novel Long Short-Term Memory Autoencoder (LSTM-AE) deep learning architecture for in-vehicle sound anomaly detection. The proposed LSTM-AE achieved competitive results when compared with two state-of-the-art methods, namely a dense Autoencoder (AE) and a two-stage clustering approach.
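For illustration, the sketch below shows one common way to set up an LSTM autoencoder for audio anomaly detection of the kind described in the abstract: log-mel energy frames are extracted with librosa, the model is trained to reconstruct sequences from normal in-vehicle sounds only, and a high reconstruction error at test time flags an anomaly. This is a minimal sketch under assumed hyperparameters (mel band count, sequence length, threshold), not the authors' exact architecture or feature pipeline.

```python
# Minimal LSTM autoencoder sketch for in-vehicle audio anomaly detection.
# Assumptions (not from the paper): 64 log-mel bands, 32-frame sequences,
# a Keras/TensorFlow backend, and a percentile-based error threshold.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

N_MELS = 64    # number of mel energy bands (assumed value)
SEQ_LEN = 32   # frames per input sequence (assumed value)

def extract_mel_sequences(wav_path, sr=16000):
    """Convert an audio file into overlapping sequences of log-mel energy frames."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=N_MELS)
    logmel = librosa.power_to_db(mel).T  # shape: (frames, N_MELS)
    seqs = [logmel[i:i + SEQ_LEN]
            for i in range(0, len(logmel) - SEQ_LEN, SEQ_LEN // 2)]
    return np.stack(seqs) if seqs else np.empty((0, SEQ_LEN, N_MELS))

def build_lstm_autoencoder():
    """Encoder-decoder LSTM that reconstructs the input mel-frame sequence."""
    inputs = layers.Input(shape=(SEQ_LEN, N_MELS))
    encoded = layers.LSTM(32)(inputs)                 # sequence -> latent vector
    repeated = layers.RepeatVector(SEQ_LEN)(encoded)  # latent -> sequence
    decoded = layers.LSTM(32, return_sequences=True)(repeated)
    outputs = layers.TimeDistributed(layers.Dense(N_MELS))(decoded)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Usage sketch: train on normal sounds only, then score clips by reconstruction error.
# x_train = np.concatenate([extract_mel_sequences(p) for p in normal_wav_paths])
# model = build_lstm_autoencoder()
# model.fit(x_train, x_train, epochs=50, batch_size=32)
# errors = np.mean((model.predict(x_test) - x_test) ** 2, axis=(1, 2))
# anomalies = errors > np.percentile(errors, 95)  # threshold choice is an assumption
```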
