
Memory Reconsolidation and Computational Learning



Abstract

Memory models are central to Artificial Intelligence and Machine Learning: memories hold knowledge, and updating them is the heart of flexibility and adaptivity. Reconsolidation is a key process in human learning that modifies learned memories with new information; it has also been implicated in disorders such as PTSD and OCD. This work focuses on understanding the computational basis of reconsolidation and on applying the findings to build an improved memory methodology for a superior thinking machine. Our research revealed basic principles of reconsolidation-like processes and incorporated them into novel models. For the first time, our neural memory models do not constrain the input dimension to a fixed size, analogous to how organic memory allocates more capacity to memories of greater importance or detail. The total number of memories is, in practice, unbounded. Furthermore, going beyond the state of the art, our memory system can process on-line as objects change. These attributes may be valuable for psychological modeling. Significantly, we were able to employ our models as powerful engineering tools, using them to recognize and cluster realistic images during change and movement, and to track objects in highly dynamic environments.
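The abstract does not describe the models themselves, so the following is a minimal, purely illustrative Python sketch of the three properties it emphasizes: inputs whose dimension is not fixed, an effectively unbounded number of memories, and on-line, reconsolidation-like updates that modify a retrieved memory rather than storing a duplicate. The class ReconsolidatingMemory and parameters such as similarity_threshold and blend_rate are assumptions for illustration, not details taken from the report.

# Illustrative sketch only; not the report's actual model.
import numpy as np

class ReconsolidatingMemory:
    def __init__(self, similarity_threshold=0.9, blend_rate=0.3):
        self.memories = []                       # list of 1-D arrays; sizes may differ
        self.similarity_threshold = similarity_threshold
        self.blend_rate = blend_rate             # how strongly new input rewrites a match

    @staticmethod
    def _similarity(a, b):
        # Compare only the overlapping prefix so inputs of different
        # dimensionality can still be matched (a deliberate simplification).
        n = min(len(a), len(b))
        if n == 0:
            return 0.0
        a, b = a[:n], b[:n]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    def observe(self, x):
        """On-line update: reconsolidate the best-matching memory or store a new one."""
        x = np.asarray(x, dtype=float)
        best_i, best_s = -1, -1.0
        for i, m in enumerate(self.memories):
            s = self._similarity(m, x)
            if s > best_s:
                best_i, best_s = i, s
        if best_s >= self.similarity_threshold:
            # Reconsolidation-like step: the retrieved memory is modified
            # by the new information instead of being stored alongside it.
            m = self.memories[best_i]
            n = min(len(m), len(x))
            m[:n] = (1 - self.blend_rate) * m[:n] + self.blend_rate * x[:n]
            return best_i
        self.memories.append(x.copy())           # capacity grows as needed
        return len(self.memories) - 1

if __name__ == "__main__":
    store = ReconsolidatingMemory()
    store.observe([1.0, 0.0, 0.0])               # stored as a new memory
    store.observe([0.95, 0.05, 0.0, 0.2])        # similar but higher-dimensional: reconsolidated
    store.observe([0.0, 1.0])                    # dissimilar: stored as a second memory
    print(len(store.memories), "memories held")

Running the sketch prints "2 memories held": the second observation reconsolidates the first memory despite its extra dimension, while the third is dissimilar enough to be stored separately.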
