
Enhanced GPU-Based Anti-Noise Hybrid Edge Detection Method


Abstract

Today, there is a growing demand for computer vision and image processing in different areas and applications, such as military surveillance and biological and medical imaging. Edge detection is a vital image processing technique used as a pre-processing step in many computer vision algorithms. However, the presence of noise makes the edge detection task more challenging; therefore, an image restoration technique is needed to tackle this obstacle by presenting an adaptive solution. As processing complexity rises with recent high-definition technologies, the amount of data carried by an image is increasing dramatically. Thus, increased processing power is needed to speed up the completion of certain tasks. In this paper, we present a parallel implementation of a hybrid algorithm comprising edge detection and image restoration, along with other processes, on the Compute Unified Device Architecture (CUDA) platform, exploiting the Single Instruction Multiple Thread (SIMT) execution model of a Graphics Processing Unit (GPU). The performance of the proposed method is tested and evaluated using well-known images from various applications. We evaluated the computation time of both the parallel implementation on the GPU and the sequential execution on the Central Processing Unit (CPU), natively and with Hyper-Threading (HT). The speedup gained by the naive approach of the proposed edge detection on the GPU with direct global-memory access is up to 37 times, while the speedup over the native CPU implementation when using the shared-memory approach is up to 25 times, and 1.5 times over the HT implementation.
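The abstract describes a two-stage pipeline (image restoration followed by edge detection) mapped onto the GPU under CUDA's SIMT model, benchmarked in a naive form that reads directly from global memory. The paper's code is not reproduced here, so the following is only a minimal CUDA sketch of that kind of pipeline, assuming a 3x3 median filter as the restoration step and a Sobel operator for edge detection; the kernel names, block size, and choice of operators are illustrative assumptions, not the authors' implementation.

```cuda
// Hypothetical sketch of a restoration + edge-detection pipeline on the GPU.
// Each kernel uses the naive approach: every thread reads its 3x3
// neighbourhood directly from global memory.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__device__ unsigned char median9(unsigned char v[9]) {
    // Insertion sort of 9 values; the median is the 5th element.
    for (int i = 1; i < 9; ++i) {
        unsigned char key = v[i];
        int j = i - 1;
        while (j >= 0 && v[j] > key) { v[j + 1] = v[j]; --j; }
        v[j + 1] = key;
    }
    return v[4];
}

// Restoration step: 3x3 median filter to suppress impulse noise.
__global__ void medianFilter(const unsigned char* in, unsigned char* out,
                             int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= width - 1 || y >= height - 1) return;
    unsigned char win[9];
    int k = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            win[k++] = in[(y + dy) * width + (x + dx)];
    out[y * width + x] = median9(win);
}

// Edge-detection step: Sobel gradient magnitude (L1 approximation).
__global__ void sobelEdge(const unsigned char* in, unsigned char* out,
                          int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= width - 1 || y >= height - 1) return;
    int gx = -in[(y-1)*width + (x-1)] + in[(y-1)*width + (x+1)]
             - 2*in[y*width + (x-1)] + 2*in[y*width + (x+1)]
             - in[(y+1)*width + (x-1)] + in[(y+1)*width + (x+1)];
    int gy = -in[(y-1)*width + (x-1)] - 2*in[(y-1)*width + x] - in[(y-1)*width + (x+1)]
             + in[(y+1)*width + (x-1)] + 2*in[(y+1)*width + x] + in[(y+1)*width + (x+1)];
    int mag = abs(gx) + abs(gy);
    out[y * width + x] = mag > 255 ? 255 : (unsigned char)mag;
}

int main() {
    const int W = 512, H = 512;                   // synthetic test image size
    size_t bytes = (size_t)W * H;
    unsigned char* h_in = (unsigned char*)malloc(bytes);
    for (size_t i = 0; i < bytes; ++i) h_in[i] = (unsigned char)(i % 256);

    unsigned char *d_in, *d_tmp, *d_out;
    cudaMalloc(&d_in, bytes); cudaMalloc(&d_tmp, bytes); cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_tmp, d_in, bytes, cudaMemcpyDeviceToDevice);  // keep borders defined

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    medianFilter<<<grid, block>>>(d_in, d_tmp, W, H);   // restoration step
    sobelEdge<<<grid, block>>>(d_tmp, d_out, W, H);     // edge-detection step
    cudaDeviceSynchronize();

    unsigned char* h_out = (unsigned char*)malloc(bytes);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("edge magnitude at image centre: %d\n", h_out[(H / 2) * W + W / 2]);

    free(h_in); free(h_out);
    cudaFree(d_in); cudaFree(d_tmp); cudaFree(d_out);
    return 0;
}
```

A shared-memory variant of the same kernels would first stage a (blockDim + 2)-wide tile of the input into __shared__ memory, call __syncthreads(), and then compute from the tile instead of global memory; that is the kind of "shared memory approach" whose speedup the abstract contrasts with the naive global-memory version.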
