Conference: Micro- and Nanotechnology Sensors, Systems, and Applications

Your eyes give you away: pupillary responses, EEG dynamics, and applications for BCI (Conference Presentation)



Abstract

As we move through an environment, we are constantly making assessments, judgments, and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions -- our implicit "labeling" of the world. In this talk I will describe our work using physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3-D environment. Specifically, we record electroencephalographic (EEG), saccadic, and pupillary data from subjects as they move through a small part of a 3-D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to those that are labeled. Finally, the system plots an efficient route so that subjects visit similar objects of interest. We show that by exploiting the subjects' implicit labeling, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers' inference of subjects' implicit labeling. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3-D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user's interests.
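The semi-supervised propagation step described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' implementation; it assumes a generic label-spreading scheme over a Gaussian visual-similarity graph, and the names (propagate_interest, seed_scores), kernel, and parameters are illustrative. A few hBCI-inferred interest scores for viewed objects are spread to visually similar, unseen objects.

# A minimal sketch (assumptions, not the authors' code) of graph-based
# semi-supervised propagation of hBCI-inferred interest over a
# visual-similarity graph of objects in the environment.
import numpy as np

def propagate_interest(features, seed_scores, sigma=1.0, alpha=0.9, n_iter=50):
    """Spread interest scores over a similarity graph (label-spreading style).

    features    : (n, d) visual feature vectors, one per object (assumed to
                  come from the computer-vision front end).
    seed_scores : (n,) hBCI-inferred interest in [0, 1] for viewed objects,
                  np.nan for objects the subject never encountered.
    Returns an (n,) array of propagated interest scores.
    """
    # Gaussian similarity between object feature vectors (illustrative kernel).
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization: S = D^(-1/2) W D^(-1/2).
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Unseen objects start at 0; the seed term keeps pulling the solution
    # back toward the classifier-labeled objects.
    y = np.where(np.isnan(seed_scores), 0.0, seed_scores)
    f = y.copy()
    for _ in range(n_iter):
        f = alpha * (S @ f) + (1.0 - alpha) * y
    return f

# Toy usage: five objects with random features; objects 0 and 3 were viewed
# and received interest scores from the EEG/pupillary classifiers.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
seeds = np.array([0.9, np.nan, np.nan, 0.1, np.nan])
print(propagate_interest(X, seeds))

In the system described above, the highest-scoring unseen objects would then be handed to a route planner so that the subject travels a short path visiting them.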

