Recent studies have shown that network flows contain a considerable amount of packet-level redundancy. Since application-layer solutions cannot capture this redundancy, new content-aware approaches capable of eliminating redundancy at the packet and sub-packet levels are needed. These requirements motivate studying the redundancy elimination of packets from an information-theoretic point of view. For efficient compression of packets, a new framework called memory-assisted universal compression has been proposed. This framework is based on learning the statistics of the source generating the packets at intermediate nodes and then leveraging these statistics to effectively compress new packets. This paper investigates memory-assisted compression of network packets both theoretically and experimentally. Since a simple source clearly cannot model real data traffic, our analytic study considers traffic from a complex source that consists of a mixture of simple information sources. We develop a practical code for memory-assisted compression and combine it with a proposed hierarchical clustering method to better utilize the memory. Finally, we validate our results via simulation on real traffic traces: memory-assisted compression combined with hierarchical clustering compresses packets close to the fundamental limit, yielding a factor-of-two improvement over traditional end-to-end compression.