IEEE Transactions on Circuits and Systems II: Express Briefs

Neural implementation of unconstrained minimum L1-norm optimization-least absolute deviation model and its application to time delay estimation



Abstract

The least absolute deviation (LAD) optimization model, also called the unconstrained minimum L1-norm optimization model, has found extensive application in linear parameter estimation. The L1-norm model is superior to Lp-norm (p > 1) models in non-Gaussian noise environments and even under chaos, especially for signals that contain sharp transitions (such as biomedical signals with spiky series or motion artifacts) or chaotic dynamic processes. However, its implementation is more difficult because of its discontinuous derivative, especially compared with the least-squares (L2-norm) model. In this paper, a neural implementation of the LAD optimization model is presented: a new neural network is constructed, and its performance in LAD optimization is evaluated theoretically and experimentally. The application of the proposed LAD neural network (LADNN) to time delay estimation (TDE) is then presented. In TDE, a given signal is modeled with the moving-average (MA) model; the MA parameters are estimated by the LADNN, and the time delay corresponds to the time index at which the MA coefficients peak. Compared with higher-order-spectra (HOS)-based TDE methods, the LADNN-based method is free of the assumption that the signal is non-Gaussian and the noise is Gaussian, which is closer to real situations. Experiments under three different noise environments (Gaussian, non-Gaussian, and chaotic) are conducted to compare the proposed TDE method with the existing HOS-based method.
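The abstract describes the TDE procedure only at a high level: fit an MA model to the observed signal under the unconstrained minimum L1-norm (LAD) criterion and take the delay as the lag at which the MA coefficients peak. The sketch below is a minimal numerical illustration of that idea, not the paper's LADNN; it approximates the LAD fit min_h ||y - A h||_1 with iteratively reweighted least squares, and the function names, toy signals, and noise settings are illustrative assumptions.

```python
import numpy as np

def lad_fit_irls(A, y, n_iter=50, eps=1e-6):
    """Approximately solve min_h ||y - A @ h||_1 (the unconstrained
    minimum L1-norm / LAD problem) by iteratively reweighted least
    squares; a conventional stand-in for the paper's LADNN."""
    h = np.linalg.lstsq(A, y, rcond=None)[0]          # least-squares (L2) start
    for _ in range(n_iter):
        r = y - A @ h                                 # current residuals
        w = 1.0 / np.maximum(np.abs(r), eps)          # L1 weights ~ 1/|residual|
        Aw = A * w[:, None]
        h = np.linalg.solve(A.T @ Aw, Aw.T @ y)       # weighted normal equations
    return h

def estimate_delay(x, y, max_lag):
    """Model y as an MA filtering of x and return the lag at which the
    estimated MA coefficients have their peak (the TDE rule above)."""
    n = len(y)
    # Design matrix: column k is the reference signal x delayed by k samples.
    A = np.column_stack([np.concatenate([np.zeros(k), x[:n - k]])
                         for k in range(max_lag + 1)])
    h = lad_fit_irls(A, y)
    return int(np.argmax(np.abs(h))), h

# Toy example (hypothetical data): y is x delayed by 7 samples plus
# impulsive, heavy-tailed noise, i.e. a non-Gaussian environment.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
true_delay = 7
y = np.roll(x, true_delay)
y[:true_delay] = 0.0
y += 0.3 * rng.standard_t(df=1.5, size=500)
d, h = estimate_delay(x, y, max_lag=20)
print("estimated delay:", d)                          # expected: 7
```

Under the heavy-tailed noise used here, the L1 fit typically still places the largest MA coefficient at the true lag, whereas an ordinary least-squares fit is more easily distorted by the outliers; this robustness is the motivation the abstract gives for preferring the LAD model over the L2-norm model.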

