International Joint Conference on Neural Networks

Scale and translation invariant learning of spatio-temporal patterns using longest common subsequences and spiking neural networks


Abstract

The ability to detect human actions or gestures is key for a wide range of applications that involve interactions between humans and robots. These actions are patterns that have a particular spatio-temporal structure. This paper presents an approach for encoding such patterns using spike-timing networks with axonal conductance delays. The proposed method brings the following contributions: first, it enables the encoding of patterns in an unsupervised manner. Second, it allows us to create models of specific patterns using a very small set of training samples, in contrast with standard pattern recognition approaches that typically require large amounts of training data. Based on these models, the method further enables classification of new patterns using a longest-common-subsequence approach for matching between patterns of activated neurons. Third, the approach is invariant to scale and translation, and thus enables generalization across multiple scales and positions. Fourth, the approach also enables early recognition of patterns from only partial information about the pattern. The proposed method is validated on a set of gestures representing the digits from 0 to 9, extracted from video data of a human drawing the corresponding digits. The results are also compared with those of other state-of-the-art pattern recognition algorithms.
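The abstract describes matching a new pattern against stored models by computing the longest common subsequence (LCS) over sequences of activated neurons. The sketch below illustrates that matching step only; it is not the authors' implementation, and the neuron indices, model sequences, and the length-normalized score are hypothetical assumptions introduced for illustration.

```python
# Minimal sketch of LCS-based matching between activation sequences.
# Each pattern is reduced to the ordered list of neuron indices that fired;
# a test pattern is assigned to the model whose sequence shares the longest
# common subsequence with it (score normalized by model length -- an assumption).

def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences (standard DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def classify(observed, models):
    """Return the model label whose activation sequence best matches `observed`."""
    best_label, best_score = None, -1.0
    for label, model_seq in models.items():
        score = lcs_length(observed, model_seq) / max(len(model_seq), 1)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Hypothetical usage: model activation sequences for two digit gestures and a
# partially observed test sequence (a prefix suffices for early recognition).
models = {
    "digit_0": [3, 7, 12, 18, 25, 31, 40],
    "digit_1": [5, 9, 14, 22, 27],
}
observed = [3, 12, 18, 25]          # partial trace of a drawn "0"
print(classify(observed, models))   # -> ('digit_0', 0.571...)
```

Because the LCS tolerates missing elements while preserving order, a prefix or a sparsely sampled version of a gesture still scores highest against its own model, which is consistent with the early-recognition and invariance claims in the abstract.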
