Optimal Transport Protocol in Heterogeneous Networks

Ferenc Tóth, Péter Hága, István Csabai, Gábor Vattay
Communication Networks Laboratory, Eötvös University, Budapest, Hungary
E-mail: {toth, haga, csabai, vattay}@complex.elte.hu

Abstract: Nowadays more and more devices (cell phones, notebooks, wireless sensors, etc.) are connected to the internet through different types of networks (e.g. cellular networks, WiFi, sensor networks). If two devices want to share data with each other, the data packets are transferred through a heterogeneous network. Since the most widely used TCP protocol was designed for wired, homogeneous networks, it performs poorly on heterogeneous networks. There have been many attempts to improve TCP (FAST, Vegas, etc.). Our goal is not to enhance TCP but to create a new transport protocol. This protocol analyzes the properties of the current network and adapts its sending policy (sending rate) to achieve the best transfer on that network. In this paper we describe the basic architecture of the new protocol and present the first performance test results. These results show that the new protocol is almost insensitive to acknowledgement loss and that its packet loss sensitivity is close to the theoretical lower bound. The protocol is also resistant to jitter.

Keywords: optimal transport protocol, TCP

I. Introduction

The congestion control algorithm in the current TCP works well on regular wired networks. However, TCP usually performs badly on "non-classical" networks; good examples are networks with a large bandwidth-delay product. TCP has many variants designed with different goals in mind, such as FAST TCP, HSTCP, Scalable TCP, TCP Westwood and TCP Eifel [1], [3], [5], [4], [6]. Various TCP versions use either loss probability or queueing delay to measure congestion; for example, TCP Reno mainly uses loss probability [2] and TCP Vegas [7] uses queueing delay. On wireless networks additional problems arise: packets may be corrupted due to fading and shadowing. If the protocol interprets wireless loss as congestion loss, it can easily over-control the congestion. To avoid this, the protocol has to distinguish congestion loss from wireless loss accurately [8]. Our task is even more complicated: we want to design a transport protocol for heterogeneous networks. To do this we need to analyze the network more accurately. Besides the RTT and loss probability, the protocol will use estimates of the available bandwidth based on packet-train techniques [9].

Recently, new challenges have appeared for transport protocols. If the demanded throughput at a router is bigger than the router's capacity, there is competition among the flows. The current TCP implementations try to achieve fairness. However, some new applications do not respect this (e.g. Skype) and achieve throughput with a constant bitrate stream. This means that the adaptive protocol has to adapt not just to the physical conditions of the network but also to the strategies of the cross-flows. To achieve good adaptivity we will use several machine learning techniques and game theory.

The rest of the paper is organised as follows. In Section 2 we describe the basic architecture of the protocol. Section 3 presents the three main test results. In Section 4 we analyze the adaptivity. The paper is concluded in Section 5.

II. The protocol

A. Expectations for the basic protocol

In developing the basic protocol we considered the following expectations:

• File transfer: transferring a file without data loss. A big file could play the role of an endless byte stream.
• Adjustable sending rate: in our conception the congestion control is based on the sending rate, contrary to the current TCP philosophy, which uses a congestion window.
• Adjustable acknowledgement size: if the acknowledgement loss ratio is high, it is worth sending more and bigger acknowledgements to transfer enough information back to the sender. If this ratio is small, it is a better strategy to send fewer and shorter acknowledgements, to spare the network from useless traffic.
• Reactive sending algorithm: the sender has to resend a lost packet immediately after it turns out that the packet is lost. This keeps the number of sent but not yet delivered packets as small as possible, so the buffer used on the sender side is as small as possible too.

B. The architecture of the protocol

Right now our protocol is UDP based, but it is likely that in time we will implement it as a kernel module. Figure 1 shows the architecture of the protocol. It uses two UDP socket connections: one for the data packets and one for the acknowledgements. The protocol has to deal with packet loss and jitter. The sender splits the file into data packets. Every packet carries a 32 bit identifier, which is large enough to identify the packets unambiguously. Both the sender and the receiver store a missing list. On the sender side this contains the identifiers of the packets which have been sent but for which no acknowledgement has arrived. The missing list of the receiver contains those identifiers which are smaller than the biggest arrived packet identifier and for which no packet has arrived yet.
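As an illustration of this bookkeeping, the following minimal Python sketch shows one possible packet framing and the receiver-side missing list. The field layout, names and helper class are our own assumptions for illustration, not the actual implementation.

    import struct

    HEADER_FMT = "!I"                        # a single 32 bit packet identifier
    HEADER_LEN = struct.calcsize(HEADER_FMT)

    def build_packet(packet_id, payload):
        """Prepend the 32 bit identifier to a payload chunk of the file."""
        return struct.pack(HEADER_FMT, packet_id) + payload

    def parse_packet(datagram):
        """Split a received datagram into (identifier, payload)."""
        (packet_id,) = struct.unpack(HEADER_FMT, datagram[:HEADER_LEN])
        return packet_id, datagram[HEADER_LEN:]

    class ReceiverState:
        """Receiver-side missing list: identifiers smaller than the biggest
        identifier seen so far for which no packet has arrived yet."""
        def __init__(self):
            self.highest_id = -1
            self.missing = set()

        def on_data(self, packet_id):
            if packet_id > self.highest_id:
                # every identifier between the old and the new maximum is now missing
                self.missing.update(range(self.highest_id + 1, packet_id))
                self.highest_id = packet_id
            else:
                self.missing.discard(packet_id)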



Fig. 1. The basic architecture of the protocol. The protocol opens two UDP socket connections.


C. Acknowledgement structure

The receiver sends acknowledgements for the received data packets. Each acknowledgement contains the missing list of the receiver. When an acknowledgement arrives at the sender, it updates its own missing list: from the missing list in the acknowledgement the sender can determine which packets have arrived at the receiver and which have been lost. Thus an acknowledgement effectively contains acknowledgements for more than one packet. If an acknowledgement has been lost, it does not have to be sent again; it is enough to send the current one, because it contains the information of the previous one. As a result, acknowledgement losses do not substantially affect the sending performance of the protocol. Even if the acknowledgement loss is zero, the sender needs to send at least n/(1 − p) packets to transfer the file (n is the number of packets and p is the data loss probability). The implemented protocol approaches this theoretical lower limit quite well. As mentioned above, it is possible to send just a part of the missing list. The size of this part is also an adjustable variable which the protocol will change to adapt to the current network (as it does the sending rate). The sender processes an acknowledgement immediately after it arrives. This means two things. First, the sender deletes from its missing list those packets which, according to the acknowledgement, have arrived. Second, it resends one of the packets from the acknowledgement's missing list. The exact rule is the following. Let A be a packet in the acknowledgement's missing list and let B be the packet for which the acknowledgement arrived. If A has been resent since B was sent, then A is not resent in this step; otherwise it is. This way the sender does not resend packets needlessly, and no packet stays in the sender's missing list that was sent a long time ago.
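The resend rule can be made concrete with a short sketch. This is our own illustration, not the protocol's actual code; the logical send counter and the way arrived packets are inferred from the missing list are simplifying assumptions.

    class SenderState:
        """Sender-side bookkeeping for the resend rule described above."""
        def __init__(self):
            self.clock = 0            # logical send counter
            self.last_sent = {}       # packet id -> logical time of last (re)send
            self.missing = set()      # sent but not yet acknowledged identifiers

        def send(self, packet_id):
            self.clock += 1
            self.last_sent[packet_id] = self.clock
            self.missing.add(packet_id)

        def on_ack(self, ack_for_id, receiver_missing):
            """Process the acknowledgement that arrived for packet B = ack_for_id
            and carries (part of) the receiver's missing list."""
            # 1) every outstanding packet up to B not reported missing has arrived
            arrived = {pid for pid in self.missing
                       if pid <= ack_for_id and pid not in receiver_missing}
            self.missing -= arrived
            # 2) resend a reported packet A only if it has not been resent
            #    since B was sent
            b_sent_at = self.last_sent.get(ack_for_id, 0)
            for pid in sorted(receiver_missing):
                if self.last_sent.get(pid, 0) < b_sent_at:
                    self.send(pid)    # resend one packet per acknowledgement
                    break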

III. The performance test results

The performance results presented in this paper were measured on a single machine. The packet loss, the acknowledgement loss and the jitter were generated artificially with a software module.

Fig. 2. Number of sent data packets with different data and acknowledgement loss rates. The transferred file consists of 1953 packets. Since the number of sent packets is proportional to the transfer time, this figure also shows the dependence of the transfer time on the two loss types. The number of sent packets is close to the theoretical minimum.

A. Data Packet and Acknowledgement Loss

First, let us look at a theoretical result which was already mentioned above. Assume that data packets are lost with probability p and that there is no other network anomaly. Let n be the total number of packets to transmit. The sender has to try to transmit each packet at least once; this means n packets. Of these, approximately n·p packets are lost. The sender resends them, and of these approximately n·p² are lost again. Following this train of thought, the sender has to send at least n·(1 + p + p² + ...) = n/(1 − p) packets altogether. Figure 2 shows the dependence of the transmission time of a file on the data packet and acknowledgement loss. The measured curve is close to the n/(1 − p) curve. Subtracting this theoretical minimum from the measured number of sent packets and normalizing by the theoretical minimum we obtain Figure 3. This figure shows clearly that the protocol approaches the n/(1 − p) curve very well. It also shows that the acknowledgement loss hardly affects the transmission time. This is due to the large information content of the acknowledgements.
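The n/(1 − p) lower bound is easy to check numerically. The following short sketch is our own, purely illustrative comparison between the closed form and a naive resend-until-delivered simulation, using the 1953-packet file of Fig. 2 as input.

    import random

    def theoretical_minimum(n, p):
        """n(1 + p + p^2 + ...) = n / (1 - p) transmissions."""
        return n / (1.0 - p)

    def simulate_transmissions(n, p, rng=random.Random(1)):
        """Count transmissions when every lost packet is resent until it gets through."""
        sends = 0
        outstanding = n
        while outstanding > 0:
            sends += outstanding
            # each transmitted packet is lost independently with probability p
            outstanding = sum(1 for _ in range(outstanding) if rng.random() < p)
        return sends

    n, p = 1953, 0.3
    print(theoretical_minimum(n, p))     # ~2790
    print(simulate_transmissions(n, p))  # close to the theoretical minimum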

B. Jitter

The jitter was created artificially with a software module. All packets get a positive additional delay, chosen so that the order of the packets does not change. The jitter is generated according to a realistic model: a router with an infinite buffer through which the protocol transfers its data, together with additional background traffic generated according to a random distribution. Due to the two traffic streams the waiting queue of the router grows, and this causes the delay; the delay is proportional to the length of the waiting queue. We tested the effect of the jitter with different random distributions. All the results show that the protocol is not sensitive to jitter: the transfer time increased only by the delay of the last packet.
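A minimal sketch of this jitter model follows. It is our own illustration; the exponential background traffic distribution and the service rate are arbitrary assumptions, not the values used in the tests.

    import random

    def jitter_delays(n_packets, service_rate=10.0, mean_background=5.0,
                      rng=random.Random(0)):
        """Per-packet extra delay from a single router with an infinite FIFO
        buffer: background traffic makes the queue grow, and the delay of a
        packet is proportional to the queue length it finds. FIFO service
        keeps the original packet order."""
        queue = 0.0
        delays = []
        for _ in range(n_packets):
            arrivals = 1.0 + rng.expovariate(1.0 / mean_background)  # our packet + background
            queue = max(0.0, queue + arrivals - service_rate)
            delays.append(queue / service_rate)   # delay proportional to queue length
        return delays

    print(jitter_delays(5))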




Fig. 3. The difference between the measured number of sent packets and the theoretical minimum, relative to the theoretical minimum, as a function of the acknowledgement loss rate.

IV. Toward the adaptivity

As described in the introduction, adaptivity is a difficult problem. In this section we introduce the basics of our adaptivity approach. The protocol will adjust its control parameters based on the measured network properties to achieve its goal. As an end-to-end protocol, our algorithm will determine the following network properties:
• Physical bandwidth.
• Available bandwidth.
• Delay.
• RTT (Round Trip Time).
• Data loss probability.
We will estimate the available bandwidth with packet-train techniques [9]. The control parameters are:
• Sending rate.
• Packet size.
• Size (or, equivalently, the information content) of the acknowledgement.

The protocol can have different goals:
• Fastest transfer.
• Streaming transfer (packet loss is accepted up to a certain level).
• Selfish or fair behaviour, or a combination of them.

Modelling the transmission function can be a useful tool to achieve adaptivity. This function shows the available throughput at different sending rates. Based on the measured network properties the protocol can determine the transmission function. If the goal is the fastest transfer, we can directly use this function to set the sending rate. The protocol has to take into account that some of the properties change over time, so the transmission function changes dynamically too. From the transmission function the protocol can determine the possible equilibria. If all the users try to achieve the best equilibrium, they can maximize their throughput while sharing the bandwidth in a fair manner. A similar approach is used in [10] and [11].
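As a simple illustration of using the transmission function for the fastest-transfer goal, here is a toy sketch. The single-bottleneck transmission function below is our own assumption, not a measured one, and the candidate rates are arbitrary.

    def transmission_function(sending_rate, capacity=10.0):
        """Toy transmission function: achieved throughput at a given sending
        rate through a single bottleneck. Above the capacity, loss-driven
        retransmissions eat into the useful throughput."""
        if sending_rate <= capacity:
            return sending_rate
        return max(0.0, capacity - 2.0 * (sending_rate - capacity))

    def pick_sending_rate(candidate_rates, capacity=10.0):
        """Fastest-transfer goal: the rate that maximizes the achieved
        throughput according to the estimated transmission function."""
        return max(candidate_rates, key=lambda r: transmission_function(r, capacity))

    print(pick_sending_rate([1.0, 2.0, 5.0, 8.0, 12.0]))   # -> 8.0

In the real protocol the transmission function would have to be re-estimated continuously from the measured properties, since they change over time.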


V. Conclusions

The diversity of the networks making up the internet grows. This induces new demands on the transmission protocols. We try to meet these demands by developing an adaptive protocol. In this paper we introduced the basic architecture of this protocol. The test results show that the currently implemented protocol is a good basis for the adaptive algorithm.

Acknowledgements

The authors thank the partial support of the National Science Foundation (OTKA T37903), the National Office for Research and Technology (NKFP 02/032/2004 and NAP 2005/KCKHA005) and the EU IST FET Complexity EVERGROW Integrated Project.

References

[1] FAST TCP Homepage, http://netlab.caltech.edu/FAST/.
[2] Vegas Home Page, http://www.cs.arizona.edu/protocols/.
[3] HSTCP Site, http://www.icir.org/floyd/hstcp.html.
[4] TCP Westwood Home Page, http://www.cs.ucla.edu/NRL/hpi/tcpw/.
[5] Scalable TCP: improving performance in highspeed networks, http://www.deneholme.net/tom/scalable/.
[6] The TCP-Eifel Page, http://iceberg.cs.berkeley.edu/downloads/tcpeifel/.
[7] L. Brakmo and L. Peterson, TCP Vegas: End to End Congestion Avoidance on a Global Internet, IEEE Journal on Selected Areas in Communication, Vol. 13, No. 8, October 1995.
[8] Dhiman Barman and Ibrahim Matta, A Bayesian Approach for TCP to Distinguish Congestion from Wireless Losses, Technical Report BUCS-2003-030.
[9] P. Hága, K. Diriczi, G. Vattay and I. Csabai, Understanding packet pair separation beyond the fluid model: The key role of traffic granularity, Proceedings of IEEE INFOCOM 2006, 23-29 April 2006, Barcelona, Spain.
[10] György Ottucsák, Tuan Anh Trinh, Sándor Molnár, Dynamic Parameter Setting in Delay Based TCP Versions, Technical Report, Department of Telecommunications and Telematics, Technical University of Budapest, Hungary, 2005.
[11] T. A. Trinh and S. Molnár, A game-theoretic analysis of TCP Vegas, in QofIS, 2004, pp. 338-347.

