
Efficient Methods on Reducing Data Redundancy in the Internet


Abstract

The transformation of the Internet from a client-server paradigm to a content-based one has made many fundamental network designs outdated. The growth of user-generated content, instant sharing, flash popularity, and similar phenomena brings forward the need to design an Internet that is ready for these workloads and can handle the needs of small-scale content providers. The Internet, as of today, carries and stores a large amount of duplicate, redundant data, primarily due to a lack of duplicate-detection mechanisms and caching principles. This redundancy costs the network in several ways: it consumes energy in the network elements that must process the extra data; it makes network caches store duplicate data, causing the tail of the data distribution to be swapped out of the caches; and it increases the load on content servers, which must then always serve the less popular content themselves. In this dissertation, we have analyzed the aforementioned phenomena and proposed several methods to reduce the redundancy of the network at low cost. The proposals take different approaches, including data-chunk-level redundancy detection and elimination, rerouting-based caching mechanisms in information-centric networks, and energy-aware content distribution techniques. Using these approaches, we have demonstrated how redundancy elimination can be performed with low overhead and low processing power. We have also demonstrated that local or global cooperation methods can increase the storage efficiency of existing caches many-fold. In addition, this work shows that collaborative content download mechanisms can remove a sizable amount of traffic from the core network while simultaneously reducing client devices' energy consumption.
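Chunk-level redundancy detection of the kind the abstract mentions is commonly built on content-defined chunking plus fingerprint lookup: a rolling hash over a small byte window decides chunk boundaries, so identical data produces identical chunks even when it is shifted within the stream, and a cryptographic digest of each chunk serves as its deduplication key. The sketch below is an illustrative reconstruction under assumed parameters (window size, divisor, chunk-size bounds), not the dissertation's actual algorithm:

```python
import hashlib

def chunk_data(data, window=16, divisor=64, min_size=64, max_size=512):
    """Content-defined chunking: cut where a rolling polynomial hash of
    the last `window` bytes matches a target pattern, so boundaries
    depend on content rather than position."""
    BASE, MOD = 257, 1 << 32
    pw = pow(BASE, window, MOD)          # weight of the byte leaving the window
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD         # slide the new byte in
        if i >= window:
            h = (h - data[i - window] * pw) % MOD   # slide the old byte out
        size = i - start + 1
        # Cut when the hash hits the target pattern (subject to a minimum
        # chunk size), or unconditionally at the maximum chunk size.
        if size >= min_size and (h % divisor == divisor - 1 or size >= max_size):
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])      # trailing remainder
    return chunks

def dedup(chunks):
    """Store each distinct chunk once, keyed by its SHA-256 fingerprint.
    Returns the chunk store, total bytes seen, and duplicate bytes saved."""
    store, total, saved = {}, 0, 0
    for c in chunks:
        total += len(c)
        key = hashlib.sha256(c).digest()
        if key in store:
            saved += len(c)              # duplicate: only the key is needed
        else:
            store[key] = c
    return store, total, saved

if __name__ == "__main__":
    data = bytes(range(256)) * 16        # 4 KiB of highly repetitive data
    chunks = chunk_data(data)
    store, total, saved = dedup(chunks)
    print(f"{len(chunks)} chunks, {total} B total, {saved} B deduplicated")
```

Because boundaries are content-defined, inserting a few bytes near the start of a stream only perturbs the chunks around the insertion point; downstream chunks realign and still hit the fingerprint store.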
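One way local cooperation can raise aggregate cache efficiency, as the abstract claims, is to hash-partition the content namespace across a group of neighboring caches and reroute each request to the responsible node, so that every object is stored at most once in the group instead of being replicated at each cache. The class below is a minimal sketch under that assumption; the partitioning function and LRU eviction policy are illustrative choices, not the dissertation's rerouting scheme:

```python
import hashlib
from collections import OrderedDict

class CooperativeCacheGroup:
    """A group of neighboring caches that hash-partition the content
    namespace: each content name has exactly one 'owner' cache, and
    requests are rerouted there, eliminating intra-group duplication."""

    def __init__(self, num_caches, capacity):
        self.num = num_caches
        self.capacity = capacity                       # objects per cache
        self.caches = [OrderedDict() for _ in range(num_caches)]

    def _owner(self, name):
        # Deterministic hash partition: every member can compute the
        # owner locally and reroute the request without coordination.
        digest = hashlib.sha256(name.encode()).digest()
        return int.from_bytes(digest[:4], "big") % self.num

    def get(self, name, fetch):
        """Return (object, hit). On a miss, `fetch` pulls from the
        origin server and the owner cache stores the result."""
        cache = self.caches[self._owner(name)]
        if name in cache:
            cache.move_to_end(name)                    # LRU refresh
            return cache[name], True
        obj = fetch(name)                              # miss: go to origin
        cache[name] = obj
        if len(cache) > self.capacity:
            cache.popitem(last=False)                  # evict LRU entry
        return obj, False

    def distinct_stored(self):
        return len({n for c in self.caches for n in c})
```

With n cooperating caches the group can hold roughly n times as many distinct objects as one cache replicating everything locally, which is the storage-efficiency gain the abstract refers to; the price is one intra-group rerouting hop on requests the local node does not own.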

Bibliographic details

  • Author: Saha Sumanta
  • Year: 2015
  • Format: PDF
  • Language: en
