
VLSI DESIGN AND IMPLEMENTATION OF ADAPTIVE TWO-DIMENSIONAL MULTILAYER NEURAL NETWORK ARCHITECTURE FOR IMAGE COMPRESSION AND DECOMPRESSION



Abstract

In this research, an adaptive Two-Dimensional Multilayer Neural Network (TDMNN) architecture is proposed, designed and implemented for image compression and decompression. The adaptive TDMNN architecture performs compression and decompression by automatically selecting one of three TDMNN variants (linear, nonlinear or hybrid) based on the input image entropy and the required compression ratio. Because the architecture is two-dimensional, 2D-to-1D reordering of the input image is avoided, and because the TDMNN architecture is implemented as a hybrid neural network, analog-to-digital conversion of the image input is eliminated. The architecture is trained to reconstruct images in the presence of noise as well as channel errors. A software reference model of the adaptive TDMNN architecture is designed and modeled in Matlab. A modified backpropagation algorithm that can train a two-dimensional network is proposed and used to train the TDMNN architecture. Performance metrics such as Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) are computed and compared with the well-established DWT-SPIHT technique. Reconstructed image quality, measured in terms of MSE and PSNR, improves by 10% to 25% over DWT-SPIHT. Software reference model results show that the compression and decompression time of the TDMNN architecture is less than 25 ms for an image of size 256 x 256, which is 60 times faster than DWT-SPIHT. Based on the network weights and biases obtained from the software reference model, a VLSI implementation of the adaptive TDMNN architecture is carried out. A new hybrid multiplying DAC is designed that multiplies current intensities (analog input) with digital weights. The hybrid multiplier is integrated with an adder and the network function to realize a hybrid neuron cell. The hybrid neuron cell, designed using 1420 transistors, operates at 200 MHz, consumes less than 232 mW of power, and has a full-scale current of 65.535 μA. Multiple hybrid neurons are integrated together to realize the 2-D adaptive multilayer neural network architecture.
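
The abstract's selection rule (choose a linear, nonlinear or hybrid TDMNN variant from the input image entropy and the required compression ratio) and its evaluation metrics (MSE and PSNR against DWT-SPIHT) can be illustrated with a minimal Python sketch. The function names and the entropy and compression-ratio thresholds below are illustrative assumptions only; the paper does not publish this code or these values.

    import numpy as np

    def image_entropy(img):
        """Shannon entropy (bits per pixel) of an 8-bit grayscale image."""
        hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def select_tdmnn_variant(img, required_cr, entropy_thresh=5.0, cr_thresh=8.0):
        """Pick a linear, nonlinear or hybrid TDMNN variant from the image
        entropy and the required compression ratio.  The thresholds are
        placeholders for illustration, not values taken from the paper."""
        h = image_entropy(img)
        if h < entropy_thresh and required_cr <= cr_thresh:
            return "linear"      # low-entropy image, modest compression
        if h >= entropy_thresh and required_cr <= cr_thresh:
            return "nonlinear"   # high-entropy image, modest compression
        return "hybrid"          # aggressive compression ratios

    def mse(original, reconstructed):
        """Mean Square Error between original and reconstructed images."""
        diff = original.astype(np.float64) - reconstructed.astype(np.float64)
        return np.mean(diff ** 2)

    def psnr(original, reconstructed, peak=255.0):
        """Peak Signal to Noise Ratio in dB for 8-bit images."""
        m = mse(original, reconstructed)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

For example, select_tdmnn_variant(img, required_cr=16) would route a high-compression request to the hybrid network in this sketch, and psnr(original, reconstructed) reports the reconstruction quality in dB as used for the comparison with DWT-SPIHT.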

Bibliographic details

  • Author

    Raj P.C.P.;

  • Affiliation
  • Year 2010
  • Total pages
  • Original format PDF
  • Language English
  • Classification

