This work identifies long idle times as the main cause of the severe performance degradation that TCP suffers in bursty-error environments. A comprehensive, fully experimental analysis carried out on a real IEEE 802.11b platform shows that the traditional algorithm TCP uses to estimate the retransmission timeout (RTO) misbehaves over channels prone to bursty errors. The authors propose a modification to that algorithm to avoid this undesirable behavior.
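For context, a minimal sketch of the traditional RTO estimator the abstract refers to (Jacobson's algorithm as standardized in RFC 2988/6298), including the exponential backoff whose repeated doubling during error bursts is what produces the long idle times identified here. The class and variable names are illustrative; the paper's proposed modification is not specified in the abstract and is not shown.

```python
K = 4          # variance multiplier (RFC 6298)
ALPHA = 1 / 8  # SRTT smoothing gain
BETA = 1 / 4   # RTTVAR smoothing gain
MIN_RTO = 1.0  # seconds; RFC 6298 rounds RTO up to 1 s
MAX_RTO = 60.0 # seconds; a cap of at least 60 s is allowed

class RtoEstimator:
    """Standard TCP RTO computation (hypothetical helper class)."""

    def __init__(self):
        self.srtt = None    # smoothed round-trip time
        self.rttvar = None  # round-trip time variation
        self.rto = 1.0      # initial RTO before any RTT sample

    def on_rtt_sample(self, r: float) -> None:
        """Update SRTT/RTTVAR with a new RTT measurement r (seconds)."""
        if self.srtt is None:
            # First measurement: SRTT <- R, RTTVAR <- R/2
            self.srtt = r
            self.rttvar = r / 2
        else:
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - r)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * r
        self.rto = min(MAX_RTO, max(MIN_RTO, self.srtt + K * self.rttvar))

    def on_timeout(self) -> None:
        # Exponential backoff: each expired retransmission doubles the RTO.
        # Over a bursty-error channel, consecutive losses drive the RTO far
        # beyond the actual channel outage, leaving the link idle.
        self.rto = min(2 * self.rto, MAX_RTO)
```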