Journal of Circuits, Systems and Computers

A Hardware-Software Co-Design Framework for Real-Time Video Stabilization


Abstract

Live digital video is a valuable source of information in security, broadcast and industrial quality-control applications. Motion jitter due to camera and platform instability is a common artefact in captured video that renders it less effective for subsequent computer vision tasks such as object detection and tracking, background modeling, mosaicking, etc. Algorithmically compensating for this motion jitter is hence a mandatory preprocessing step in many applications. This process, called video stabilization, requires estimating global motion from consecutive video frames and is constrained by additional challenges such as preserving intentional motion and native frame resolution. The problem is exacerbated by the local motion of foreground objects, which must be handled robustly. As such, achieving real-time performance for this computationally intensive operation is a difficult task for embedded processors with limited computational and memory resources. In this work, an optimized hardware-software co-design framework for video stabilization has been developed. Efficient video stabilization depends on identifying key points in the frame, which in turn requires dense feature calculation at the pixel level. Owing to its complex memory and computation operations, this task has been identified as the most suitable for offloading to pipelined hardware implemented in the FPGA fabric. The remaining stages of the stabilization algorithm operate on these sparse key points and have been found to be handled efficiently in software. The proposed Hardware-Software (HW-SW) co-design framework has been implemented on the ZedBoard platform, which houses a Xilinx Zynq SoC equipped with an ARM Cortex-A9 processor. The proposed implementation processes a real-time video stream at 28 frames per second and is at least twice as fast as the corresponding software-only approach. Two different hardware accelerator designs have been implemented with different high-level synthesis tools following a rapid-prototyping approach; each consumes less than 50% of the logic resources available on the host FPGA while being at least 30% faster than contemporary designs.
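The core software-side task described above — estimating global inter-frame motion and compensating for it — can be sketched in a few lines. The paper's pipeline uses FPGA-accelerated key-point features for this; the minimal stand-in below instead uses phase correlation (a well-known FFT-based technique) to recover a pure translational jitter between two frames, purely for illustration. All function names here are hypothetical, not from the paper.

```python
import numpy as np

def estimate_global_shift(prev, curr):
    """Estimate the (dy, dx) translation of `curr` relative to `prev`
    via phase correlation: the peak of the inverse FFT of the
    normalized cross-power spectrum marks the circular shift."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    cps = np.conj(F1) * F2
    cps /= np.abs(cps) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cps).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past half the frame correspond to negative offsets.
    if dy > corr.shape[0] // 2:
        dy -= corr.shape[0]
    if dx > corr.shape[1] // 2:
        dx -= corr.shape[1]
    return int(dy), int(dx)

def compensate(frame, dy, dx):
    """Undo the estimated jitter by shifting the frame back."""
    return np.roll(frame, (-dy, -dx), axis=(0, 1))
```

A real stabilizer would additionally smooth the motion trajectory over time to preserve intentional camera motion, and would use a richer motion model (affine or similarity) estimated from sparse key points rather than a single global translation.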
