Journal: Quality Control, Transactions

Limbs Detection and Tracking of Head-Fixed Mice for Behavioral Phenotyping Using Motion Tubes and Deep Learning



Abstract

The broad accessibility of affordable, reliable recording equipment and its relative ease of use have enabled neuroscientists to record large amounts of neurophysiological and behavioral data. Because most of this raw data is unlabeled, great effort is required to adapt it for behavioral phenotyping or for signal extraction, for behavioral and neurophysiological data respectively. Traditional labeling methods rely on human annotators, a resource- and time-intensive process that often produces data prone to reproducibility errors. Here, we propose a deep learning-based image segmentation framework to automatically extract and label limb movements from movies capturing frontal and lateral views of head-fixed mice. The method decomposes each image into elemental regions (superpixels) with similar appearance and concordant dynamics, and stacks them along their partial temporal trajectory. These 3D descriptors (referred to as motion cues) are used to train a deep convolutional neural network (CNN). We use the features extracted at the last fully connected layer of the network to train a Long Short-Term Memory (LSTM) network that introduces spatio-temporal coherence into the limb segmentation. We tested the pipeline in two video acquisition settings. In the first, the camera is installed on the right side of the mouse (lateral setting); in the second, it faces the mouse directly (frontal setting). We also investigated the effect of noise in the videos and the amount of training data needed, and found that reducing the number of training samples does not cause a drop of more than 5% in detection accuracy, even when as little as 10% of the available data is used for training.
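The pipeline described above (stacked superpixel "motion tubes" classified by a CNN, whose fully connected features are then smoothed over time by an LSTM) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the module names, tube depth, feature dimension, and layer sizes are all assumptions.

```python
# Hypothetical sketch of the motion-tube -> CNN -> LSTM pipeline from the
# abstract. Architecture and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class MotionTubeCNN(nn.Module):
    """Classifies a stacked superpixel 'motion tube' (T frames of H x W patches)."""
    def __init__(self, tube_depth=5, feat_dim=128, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(tube_depth, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, feat_dim)  # features later fed to the LSTM
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, tubes):                      # tubes: (B, T, H, W)
        feats = self.fc(self.conv(tubes).flatten(1))
        return self.head(feats), feats

class TemporalSmoother(nn.Module):
    """LSTM over per-tube CNN features, enforcing spatio-temporal coherence."""
    def __init__(self, feat_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, feat_seq):                   # feat_seq: (B, seq_len, feat_dim)
        h, _ = self.lstm(feat_seq)
        return self.out(h)                         # per-step limb labels

cnn = MotionTubeCNN()
smoother = TemporalSmoother()
tubes = torch.randn(8, 5, 32, 32)                  # 8 tubes, 5-frame temporal depth
logits, feats = cnn(tubes)                         # logits: (8, 2), feats: (8, 128)
seq_logits = smoother(feats.unsqueeze(0))          # treat the 8 tubes as one sequence
```

In this sketch the CNN's penultimate (fully connected) activations, rather than its class scores, are passed to the LSTM, mirroring the abstract's description of reusing last-layer features for temporal smoothing.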

