Small files storing and computing optimization in Hadoop parallel rendering

Abstract

The Hadoop framework has been widely used in the animation industry to build large-scale, high-performance parallel render systems. However, the Hadoop Distributed File System (HDFS) and the MapReduce programming model are designed to manage large files, and they suffer a performance penalty when storing and rendering the many small RIB files produced by a rendering system. To address this problem, a method is proposed that merges small RIB files using two intelligent algorithms. The method applies Particle Swarm Optimization (PSO) and a Support Vector Machine (SVM) to choose the optimal merge value for each scene file, taking rendering time, memory limits, and other indicators into account. It then exploits frame-to-frame coherence to merge RIB files at intervals determined by the optimal merge value. Finally, the proposed method is compared with the naive approach on three different render scenes. Experimental results show that the proposed method significantly reduces the number of RIB files and render tasks, and improves both the storage efficiency and the computing efficiency of RIB files.
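
The abstract describes the merging step only at a high level. Below is a minimal Python sketch of one plausible reading of interval merging, under stated assumptions: the merge value is taken as given (in the paper it would come from the earlier PSO/SVM selection stage), and grouping consecutive frames is used because adjacent frames are the ones sharing frame-to-frame coherence. The merge_rib_files helper and the merged_*.rib naming scheme are hypothetical illustrations, not the authors' code.

    # Minimal sketch of the interval merging step, assuming the optimal
    # merge value has already been chosen by the PSO/SVM stage.
    # The function name and output naming are hypothetical.
    from pathlib import Path

    def merge_rib_files(rib_paths, merge_value, out_dir):
        """Concatenate every `merge_value` consecutive per-frame RIB files
        into one merged file, so HDFS stores fewer small files and each
        render task covers several coherent adjacent frames."""
        rib_paths = sorted(rib_paths)       # frame order: frame0001.rib, ...
        out_dir = Path(out_dir)
        out_dir.mkdir(parents=True, exist_ok=True)
        merged = []
        for i in range(0, len(rib_paths), merge_value):
            group = rib_paths[i:i + merge_value]
            target = out_dir / f"merged_{i // merge_value:04d}.rib"
            with open(target, "wb") as out:
                for src in group:
                    out.write(Path(src).read_bytes())
            merged.append(target)
        return merged

    # Example: with merge value 8, 240 per-frame RIB files become
    # 30 merged files, i.e. 30 render tasks instead of 240.
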
