IEEE International Conference on Fuzzy Systems

Hemodynamic Response Analysis for Mind-Driven Type-writing using a Type 2 Fuzzy Classifier



Abstract

We study vowel detection from the brain activation elicited by vowel-sound imagery. First, through analysis of acquired electroencephalographic (EEG) signals, we experimentally determine that the strongest and relatively longest-lasting activation during vowel-sound imagination occurs in the frontal or prefrontal lobe. We then capture the frontal or prefrontal response to vowel-sound imagery with a functional near-infrared spectroscopy (fNIRS) device and extract a set of statistical features. Differential-evolution-based feature selection is used for dimensionality reduction. The reduced feature set is then used to design an interval type-2 fuzzy classifier that classifies the vowels from the frontal or prefrontal fNIRS response to vowel-sound imagination. The experiments undertaken confirm that the proposed classifier outperforms its competitors in classification accuracy for each vowel-sound imagery class. They further confirm that fNIRS-based classification outperforms the EEG-based modality in capturing brain activation. Consonants are encoded as two vowel sounds with a space between them. Thus the proposed technique can be used effectively for mind-driven typewriting of vowels and consonants, serving people suffering from vocal deficiency.
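The abstract mentions differential-evolution (DE) based feature selection for dimensionality reduction. The sketch below shows one common way such a wrapper scheme can be realized, using SciPy's DE optimizer around a cross-validated surrogate classifier; the synthetic data, the k-NN surrogate, and the 0.5 selection threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of DE-based feature selection (assumed setup, not the paper's code).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))    # placeholder fNIRS statistical features
y = rng.integers(0, 5, size=100)  # five vowel-imagery classes (a, e, i, o, u)

def negative_accuracy(weights, X, y):
    """DE minimizes, so return the negated cross-validated accuracy of the
    feature subset obtained by thresholding the continuous weight vector."""
    mask = weights > 0.5
    if not mask.any():
        return 1.0  # penalize an empty feature subset
    clf = KNeighborsClassifier(n_neighbors=3)  # surrogate for the IT2 fuzzy classifier
    return -cross_val_score(clf, X[:, mask], y, cv=3).mean()

result = differential_evolution(
    negative_accuracy, bounds=[(0.0, 1.0)] * X.shape[1],
    args=(X, y), popsize=10, maxiter=10, seed=0, polish=False)

selected = np.flatnonzero(result.x > 0.5)
print("selected feature indices:", selected)
```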
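The classifier itself is an interval type-2 (IT2) fuzzy classifier. As a rough illustration of the mechanics, the sketch below scores a feature vector against per-class Gaussian membership functions with an uncertain mean (the footprint of uncertainty), combines the lower and upper memberships with a product t-norm, and picks the class with the largest mid-point firing strength; the prototypes, spread, and decision rule are invented for illustration and are not the paper's design.

```python
# Hedged sketch of IT2 fuzzy scoring with Gaussian primary MFs of uncertain mean.
import numpy as np

def it2_gaussian(x, m_low, m_high, sigma):
    """Lower/upper membership of a Gaussian MF whose mean is uncertain in [m_low, m_high]."""
    upper = np.where(x < m_low, np.exp(-0.5 * ((x - m_low) / sigma) ** 2),
                     np.where(x > m_high, np.exp(-0.5 * ((x - m_high) / sigma) ** 2), 1.0))
    m_mid = 0.5 * (m_low + m_high)
    lower = np.where(x <= m_mid, np.exp(-0.5 * ((x - m_high) / sigma) ** 2),
                     np.exp(-0.5 * ((x - m_low) / sigma) ** 2))
    return lower, upper

def classify(x, prototypes, sigma=1.0, spread=0.3):
    """Assign x to the vowel class with the largest mid-point firing strength."""
    scores = {}
    for label, proto in prototypes.items():
        low, up = it2_gaussian(x, proto - spread, proto + spread, sigma)
        scores[label] = 0.5 * (np.prod(low) + np.prod(up))  # product t-norm, interval midpoint
    return max(scores, key=scores.get)

# Invented class prototypes in a 4-dimensional reduced feature space.
prototypes = {v: np.random.default_rng(i).normal(size=4) for i, v in enumerate("aeiou")}
print(classify(np.zeros(4), prototypes))
```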
