International Conference on Biomedical Engineering

New Architecture For NN Based Image Compression For Optimized Power, Area And Speed

Abstract

The aim of this paper is to investigate new neural network algorithms, architectures and models inspired by the image processing capability of the visual system in the human brain, and subsequently to apply these models to medical imaging (MI) applications. The neural network architecture is realized on an FPGA, which supports parallelism and reconfigurability. A sincere attempt is made to implement the complex, massively parallel architecture on the FPGA while optimizing speed, area and power without compromising image quality and SNR, which are very important criteria for medical image compression. A multi-layered neural network (NN) architecture is proposed for compression of high-resolution images; the architecture is implemented on an FPGA as it supports reconfigurability. The architecture considered has an N-M-N (64-4-64) multi-layered NN structure, which achieves a compression ratio (CR) of 93.75. The compression ratio is reconfigurable by changing M: the architecture is generalized and can achieve compression ratios from 2 to 99. The performance of any neural network architecture for compression depends on training; this architecture uses general backpropagation training. Training is performed offline with a known set of image samples containing most of the properties of any standard image. The Mean Square Error (MSE) computed during every iteration of training is scaled and fed back into the network to update the weight matrix at specific points, which reduces the training time. As the weight matrix occupies considerable storage, redundancies in the weight matrix are exploited to create a storage scheme with minimum memory requirement on the FPGA. The compression ratios obtained demonstrate the performance superiority of the network compared with the JPEG compression standard. Network performance is tested by injecting noise into the compressed data sets. The hardware complexity, area requirement and speed are compared and discussed; the design saves 22% of FPGA area, increases speed by 40% and reduces power by 12%. The major advantage of this architecture is the reconfigurability of the architecture size, which achieves different compression ratios.
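The abstract describes an N-M-N auto-associative network in which the M hidden-layer outputs serve as the compressed representation of an N-pixel block, so the attainable saving follows directly from the ratio M/N (for 64-4-64, a 93.75% reduction). The sketch below is a minimal NumPy illustration of that structure trained with plain backpropagation; the learning rate, random training blocks and update schedule are assumptions for illustration only and do not reproduce the paper's scaled-MSE feedback rule or the FPGA fixed-point implementation.

    # Minimal sketch of an N-M-N (64-4-64) auto-associative compressor (assumed details).
    import numpy as np

    N, M = 64, 4                        # 8x8 pixel block -> M hidden (compressed) values
    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 0.1, (M, N))   # encoder weights (compression layer)
    W2 = rng.normal(0.0, 0.1, (N, M))   # decoder weights (reconstruction layer)
    LR = 0.05                           # assumed learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_step(x, W1, W2, lr=LR):
        """One backpropagation step on a normalized 64-pixel block x in [0, 1]."""
        h = sigmoid(W1 @ x)             # hidden layer: compressed block (M values)
        y = sigmoid(W2 @ h)             # output layer: reconstructed block (N values)
        err = y - x
        mse = np.mean(err ** 2)         # reconstruction error monitored during training
        dy = err * y * (1.0 - y)        # output-layer delta
        dh = (W2.T @ dy) * h * (1.0 - h)  # hidden-layer delta
        W2 -= lr * np.outer(dy, h)      # in-place weight updates
        W1 -= lr * np.outer(dh, x)
        return mse

    # Offline training on random blocks stands in for the paper's sample images.
    for _ in range(5000):
        mse = train_step(rng.random(N), W1, W2)

    # Reconfigurable compression: keeping M hidden values per N-pixel block reduces
    # the data to M/N of its size, i.e. a (1 - M/N)*100 percent saving.
    print(f"M = {M}: data reduced to {M/N:.2%} of original "
          f"({(1.0 - M/N) * 100.0:.2f}% saving)")

For M = 4 the printed saving is 93.75%, matching the quoted CR; choosing a different M reconfigures the ratio without changing the rest of the structure.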