
Feed forward neural networks and genetic algorithms for automated financial time series modelling


Abstract

This thesis presents an automated system for financial time series modelling. Formal and applied methods are investigated for combining feed-forward Neural Networks and Genetic Algorithms (GAs) into a single adaptive/learning system for automated time series forecasting. Four important research contributions arise from this investigation: i) novel forms of GAs are introduced which are designed to counter the representational bias associated with the conventional Holland GA, ii) an experimental methodology for validating neural network architecture design strategies is introduced, iii) a new method for network pruning is introduced, and iv) an automated method for inferring network complexity for a given learning task is devised. These methods provide a general-purpose applied methodology for developing neural network applications and are tested in the construction of an automated system for financial time series modelling. Traditional economic theory has held that financial price series are random. The lack of a priori models on which to base a computational solution for financial modelling provides one of the hardest tests of adaptive system technology. It is shown that the system developed in this thesis isolates a deterministic signal within a Gilt Futures price series, to a confidence level of over 99%, yielding a prediction accuracy of over 60% on a single run of 1000 out-of-sample experiments.

An important research issue in the use of feed-forward neural networks is the set of problems associated with parameterisation so as to ensure good generalisation. This thesis conducts a detailed examination of this issue. A novel demonstration of a network's ability to act as a universal functional approximator for finite data sets is given. This supplies an explicit formula for setting a network's architecture and weights in order to map a finite data set to arbitrary precision. It is shown that a network's ability to generalise is extremely sensitive to many parameter choices and that unless careful safeguards are included in the experimental procedure over-fitting can occur. This thesis concentrates on developing automated techniques so as to tackle these problems.

Techniques for using GAs to parameterise neural networks are examined. It is shown that the relationship between the fitness function, the GA operators and the choice of encoding are all instrumental in determining the likely success of the GA search. To address this issue a new style of GA is introduced which uses multiple encodings in the course of a run. These are shown to out-perform the Holland GA on a range of standard test functions. Despite this innovation it is argued that the direct use of GAs for neural network parameterisation runs the risk of compounding the network sensitivity issue. Moreover, in the absence of a precise formulation of generalisation, a less direct use of GAs for network parameterisation is examined. Specifically a technique, artificial network generation (ANG), is introduced in which a GA is used to artificially generate test learning problems for neural networks that have known network solutions. ANG provides a means for directly testing i) a neural net architecture, ii) a neural net training process, and iii) a neural net validation procedure, against generalisation. ANG is used to provide statistical evidence in favour of Occam's Razor as a neural network design principle.
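The abstract describes ANG only at a high level. Purely as a reading aid, a minimal Python sketch of the core idea, generating test learning problems whose network solutions are known by construction, is given below. The function names and the teacher-network construction are assumptions of this sketch, and plain random sampling stands in for the GA; none of it is the thesis's own code.

```python
# Illustrative sketch only: assumed names, not Kingdon's implementation.
import numpy as np

rng = np.random.default_rng(0)

def random_teacher(n_in, n_hidden, n_out=1):
    # A "teacher" feed-forward net with known weights: the known network solution.
    return {"W1": rng.normal(size=(n_in, n_hidden)),
            "b1": rng.normal(size=n_hidden),
            "W2": rng.normal(size=(n_hidden, n_out)),
            "b2": rng.normal(size=n_out)}

def forward(net, X):
    # Single hidden layer, tanh units, linear output.
    return np.tanh(X @ net["W1"] + net["b1"]) @ net["W2"] + net["b2"]

def make_learning_problem(n_in=3, n_hidden=4, n_samples=200, noise=0.0):
    # Sample a data set from the teacher, so the generated learning problem
    # has a known architecture and weight solution by construction.
    teacher = random_teacher(n_in, n_hidden)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, n_in))
    y = forward(teacher, X) + noise * rng.normal(size=(n_samples, 1))
    return teacher, X, y

if __name__ == "__main__":
    teacher, X, y = make_learning_problem()
    # Any candidate architecture, training routine or validation procedure can
    # now be scored against ground truth, e.g. does it recover ~4 hidden units
    # and generalise to fresh samples drawn from the same teacher?
    print("known solution residual:", float(np.max(np.abs(forward(teacher, X) - y))))
```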
A new method for pruning and for inferring network complexity for a given learning problem is introduced. Network Regression Pruning (NRP) is a network pruning method that attempts to derive an optimal network architecture by starting from what is considered an overly large network. NRP differs radically from conventional pruning methods in that it attempts to hold a trained network's mapping fixed as pruning proceeds. NRP is shown to be extremely successful at isolating optimal network architectures on a range of test problems generated using ANG. Finally, NRP and the techniques validated using ANG are combined to implement an Automated Neural network Time series Analysis System (ANTAS). ANTAS is applied to a gilt futures price series, the Long Gilt Futures Contract (LGFC).
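The abstract does not spell out how NRP holds the trained network's mapping fixed while pruning. One hedged reading is sketched below: delete one hidden unit at a time and re-fit the remaining output weights by least squares so that the reduced network reproduces the original (overly large) network's own outputs, pruning whichever unit is cheapest to compensate for. This is an interpretation for illustration, not the procedure from the thesis; it reuses the net dictionary layout from the sketch above.

```python
# Hedged sketch of "hold the mapping fixed while pruning"; not the thesis algorithm.
import numpy as np

def hidden(net, X):
    # Hidden-layer activations for the same net layout as the sketch above.
    return np.tanh(X @ net["W1"] + net["b1"])

def prune_by_regression(net, X):
    # The mapping to preserve: the overly large network's own outputs on X.
    target = hidden(net, X) @ net["W2"] + net["b2"]
    keep = list(range(net["W1"].shape[1]))   # indices of surviving hidden units
    trace = []
    while len(keep) > 1:
        H = hidden(net, X)[:, keep]
        best = None
        for i in range(len(keep)):
            cols = [c for c in range(len(keep)) if c != i]
            # Re-fit output weights (plus bias) of the reduced net by least
            # squares so it reproduces `target` as closely as possible.
            A = np.hstack([H[:, cols], np.ones((X.shape[0], 1))])
            coef, *_ = np.linalg.lstsq(A, target, rcond=None)
            err = float(np.mean((A @ coef - target) ** 2))
            if best is None or err < best[0]:
                best = (err, i)
        err, i = best
        keep.pop(i)
        trace.append((len(keep), err))        # a sharp jump marks the inferred size
    return trace
```

On a problem built as in the ANG sketch (a small teacher padded with extra hidden units whose output weights are zero), one would expect the regression error in `trace` to stay near zero until the surviving units drop below the teacher's size, which is the kind of architecture signal the abstract credits NRP with on ANG-generated problems.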

Bibliographic details

  • Author

    Kingdon J.C.;

  • Author's affiliation
  • Year: 1995
  • Total pages
  • Original format: PDF
  • Language: eng
  • CLC classification
