Simulation-based Performance Comparison of TCP-Friendly Congestion Control Protocols

Suhaidi Hassan and Mourad Kara
School of Computer Studies, University of Leeds, Leeds LS2 9JT, United Kingdom

[email protected] http://www.scs.leeds.ac.uk/atm-mm

Abstract

The main purpose of the rate-based congestion control protocols that have been developed to support the deployment of real-time multimedia applications in the Internet is to ensure that the applications' traffic shares the network fairly and in a friendly manner with the dominant TCP traffic. In this work we use the network simulator ns-2 to evaluate the performance of two rate-based TCP-friendly (TF) congestion control protocols, namely the Rate Adaptation Protocol (RAP) and the TCP-Friendly Rate Control protocol (TFRC). We devise a simple method to characterise friendliness and use this method to compare the two protocols with each other and to study their behaviour under various network scenarios. Our main goal for these experiments is to evaluate these two protocols and determine which is the better rate-based protocol to support our future work in end-to-end quality of service (QoS) support for streaming multimedia applications. We discover that TFRC works better in most of our experiments.

1 Introduction

The real-time dissemination of continuous audio and video data via streaming multimedia applications accounts for a significant and increasing portion of Internet traffic. Such applications include packet voice, video and multicast bulk data transport. These applications produce traffic that must coexist and share Internet resources with the majority of traffic, which comprises TCP flows. In addition, they require real-time performance guarantees such as bounded delay and minimum bandwidth. Therefore, supporting such traffic over heterogeneous multi-protocol networks such as the Internet is not a trivial task.

Most of these real-time applications are non-TCP-based; they are usually not built with adequate congestion control, mainly for reasons of simplicity. These applications produce traffic flows that are considered unresponsive with respect to the congestion control mechanisms deployed in the network. Widespread deployment of such traffic in the Internet threatens fairness to competing TCP traffic and may lead to congestion collapse [7]. Congestion collapse is a situation in which, although the network links are heavily utilised, very little useful work is being done, and transmitted packets are simply discarded before reaching their final destinations [5].

The problem arises because the non-TCP-based traffic does not respond to the congestion signals which cause TCP flows to back off, resulting in an unfair bandwidth share. Several approaches have been proposed to overcome this problem. One approach is the provision of a differentiated service class [16], in which streaming applications are supported via a higher priority class. Another proposed solution is to utilise mechanisms such as scheduling and service disciplines [23, 12], resource reservation [4] and admission control [13]; however, these approaches are common in guaranteed-service networks such as ATM, but unfortunately not in the Internet environment. Yet another possible approach to overcoming the unfairness problem is to deploy end-to-end congestion control mechanisms within such applications, making them adaptive to the network conditions. Since the dominant portion of today's Internet traffic is TCP-based, such congestion control mechanisms must be TCP-friendly. These TF applications must send their data at a rate no higher than that of a TCP connection operating along the same path [14], thus obtaining approximately the same average bandwidth over the duration of the connection as TCP traffic.

Recently, several unicast TF rate-based control protocols have been proposed, for example in [21, 11, 20, 10, 19]. In these works, however, the authors often evaluate only their own protocol; work comparing the performance of two or more rate-based congestion control protocols is somewhat limited. In this paper, we compare the performance of two of these protocols, namely the Rate Adaptation Protocol (RAP) [20] and the TCP-Friendly Rate Control protocol (TFRC) [10]. Our comparison is based upon the results obtained from running various experiments using the network simulator ns-2 [1].

Our main goal in this work is to explore the TCP-friendliness of RAP and TFRC, in particular the effects on TCP-friendliness of using different network scenarios and TCP implementations. We are interested in a one-to-one performance comparison of the TF protocols with their TCP peers, as we believe that this one-to-one comparison gives a clearer picture of their TCP-friendliness. The evaluation is important in order to understand the behaviour of these two protocols operating under certain network conditions. In other words, we intend to determine which of the two rate-based protocols is the better in terms of TCP-friendliness.

This paper is organised as follows. In Section 2, we present a brief background and description of TF rate-based control protocols and introduce the two protocols under study. We describe the experimental design and its rationale in Section 3. In Section 4, we discuss the experiments and evaluate the performance of the two protocols under different scenarios. Section 5 concludes the paper.

2 Background and Related Works

2.1 TCP-Friendly Rate-based Control Protocols

As most real-time streaming multimedia applications are based upon UDP, which does not employ congestion control mechanisms, they might seize most of the shared bottleneck bandwidth, thus starving the TCP connections. This problem appears because the UDP traffic does not respond to the congestion signals which cause TCP flows to back off. The UDP traffic continues to dominate the bandwidth, which negatively affects the throughput of the other 'good' network citizens. This is a classic example of TCP-unfriendliness [7]. To overcome this problem, it is crucial for such applications to employ TF congestion control mechanisms. In addition, since these types of applications are rate-based, it is more appropriate to utilise rate-based congestion control mechanisms.

In a rate-based congestion control scheme, the source does not use a congestion window (as in window-based TCP) to control the amount of data in the network. Instead, the source directly adjusts its sending rate based on what is appropriate for the application. The rate-based source constantly monitors overall packet loss in the network while sending data. Monitoring is normally done by means of loss feedback sent by the receiver. The source then calculates the rate and sends the data accordingly.

In order to be TCP-friendly, the source's average sending rate must not be higher than that achieved by a TCP connection along the same path. An estimate of the steady-state throughput of a long-lived TCP connection, in the absence of timeouts, as proposed in [14, 15, 8], is given by:

    Throughput = kM / (R√l)        (1)

with k a constant (1.22 or 1.31, depending on the acknowledgement type used), M the maximum segment (packet) size, R the round-trip time experienced by the connection and l the probability of loss [19] (during the lifetime of the connection). From this equation we know that the throughput is inversely proportional to the round-trip time and to the square root of the loss probability. Hence we give special attention to these two parameters in this work. In the following subsections, we discuss the two TF protocols under study.
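As a concrete illustration of equation (1), the short Python sketch below (our own, not code from RAP, TFRC or ns-2; the function name, units and the example round-trip time are assumptions) computes the TCP-friendly rate bound from the packet size M, round-trip time R and loss probability l.

```python
import math

def tcp_friendly_rate(packet_size_bytes, rtt_s, loss_prob, k=1.22):
    """Upper bound on a TCP-friendly sending rate, following equation (1):
    Throughput = k * M / (R * sqrt(l)). k is 1.22 or 1.31 depending on the
    acknowledgement scheme assumed."""
    if loss_prob <= 0:
        raise ValueError("equation (1) is undefined for a zero loss probability")
    return k * packet_size_bytes / (rtt_s * math.sqrt(loss_prob))

# Table 2 defaults: 100-byte packets, and roughly a 64 ms round trip
# (twice the 6 + 20 + 6 ms one-way path of Figure 1, ignoring queueing),
# with an assumed 1% loss rate.
rate = tcp_friendly_rate(packet_size_bytes=100, rtt_s=0.064, loss_prob=0.01)
print(f"TCP-friendly bound: {rate * 8 / 1000:.1f} kbit/s")  # about 152 kbit/s
```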

2.2 Rate Adaptation Protocol (RAP)

RAP [20] was developed by Rejaie et al. at the University of Southern California as part of an end-to-end quality of service architecture. RAP is a loss-based mechanism: it detects congestion based on packet losses, as in TCP. Packet losses are detected when there are gaps in the sequence numbers of the transmitted packets, as well as when transmission timeouts occur. Because real-time streaming applications are essentially semi-reliable, RAP decouples congestion control from error control, leaving the application layer to deal with the latter. The receiver module observes these gaps and notifies the sender accordingly. RAP estimates the round-trip time (RTT), called SRTT, and computes the timeouts based on Jacobson/Karels' algorithm. The protocol utilises the Additive Increase Multiplicative Decrease (AIMD) approach to emulate TCP behaviour in a rate-based environment, and this is important for obtaining a fair share of the bottleneck bandwidth. Using this approach, the source's sending rate is increased additively and repeatedly when there is no congestion; when congestion occurs, the rate is decreased multiplicatively and immediately. The source adjusts its sending rate by changing the gap between transmitted packets; reducing the gap increases the transmission rate.

There are two types of rate adaptation employed by RAP: coarse grain and fine grain adaptation. In coarse grain (CG) adaptation, RAP merely uses the AIMD approach for rate adjustment. In fine grain (FG) adaptation, RAP additionally uses a feedback signal based on the ratio of a short-term exponential moving average of the RTT to a long-term exponential moving average of the RTT. The use of fine grain adaptation is intended to make RAP more stable and responsive to transient congestion. In this paper, we use both RAP adaptation granularities for comparison against the other protocol under study.
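To make the adaptation scheme just described more concrete, here is a simplified Python sketch of an AIMD rate controller with an optional fine-grain term based on the ratio of short- and long-term exponential moving averages of the RTT. It is an illustration written for this text under assumed constants and update intervals, not RAP's actual implementation.

```python
class AimdRateController:
    """Simplified AIMD rate adaptation in the spirit of RAP (illustrative only)."""

    def __init__(self, rate_pkts_per_s=10.0, alpha_pkts=1.0, beta=0.5,
                 short_gain=0.3, long_gain=0.05):
        self.rate = rate_pkts_per_s   # current sending rate (packets/s)
        self.alpha = alpha_pkts       # additive increase step per update
        self.beta = beta              # multiplicative decrease factor
        self.short_gain = short_gain  # gain of the short-term RTT average
        self.long_gain = long_gain    # gain of the long-term RTT average
        self.srtt_short = None
        self.srtt_long = None

    def on_rtt_sample(self, rtt_s):
        # Maintain short- and long-term exponential moving averages of the RTT.
        if self.srtt_short is None:
            self.srtt_short = self.srtt_long = rtt_s
        else:
            self.srtt_short += self.short_gain * (rtt_s - self.srtt_short)
            self.srtt_long += self.long_gain * (rtt_s - self.srtt_long)

    def update(self, loss_detected):
        if loss_detected:
            self.rate *= self.beta    # coarse grain: multiplicative decrease
        else:
            self.rate += self.alpha   # coarse grain: additive increase
            if self.srtt_short and self.srtt_long:
                # Fine grain: damp the increase when the short-term RTT average
                # exceeds the long-term one (a hint of transient congestion).
                self.rate *= min(1.0, self.srtt_long / self.srtt_short)
        return self.rate
```

In RAP itself the rate change is realised by adjusting the inter-packet gap; the sketch manipulates the rate directly for brevity.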

2.3 TCP-Friendly Congestion Control Protocol (TFRC)

TFRC [10] is a rate-based end-to-end congestion control protocol intended for unicast playback of Internet streaming applications. It was developed by Floyd et al. at the AT&T Center for Internet Research (ACIRI). TFRC is a sender-based scheme that works by continuously adjusting the source's sending rate based on equation (1). The TFRC protocol is still under development and its ns simulator code is available in ns version 2.1b6 or later.

As with RAP, TFRC is a source-based and loss-based rate control protocol. The sender adjusts its transmission rate based on the observed loss rate and RTT. The adjustment of the sending rate to achieve TCP-friendliness is based upon the TCP throughput model described in [17, 8, 18]. Similar to RAP, TFRC uses AIMD to emulate TCP at a coarse grain and utilises an exponential filter to maintain fine time granularity; however, the sender uses the two approaches implicitly in all rate calculations.

The sender uses the slow-start technique at the beginning of the transmission phase, during which it tries to multiplicatively increase its sending rate every RTT until it detects a loss. Packet losses are identified by gaps in the sequence numbers of the transmitted packets at the receiver module, as in RAP. The receiver sends feedback to the sender at regular intervals, and the sender adjusts its rate accordingly based on this feedback. When the receiver notices a loss, it immediately notifies the sender, and the sender adjusts its sending rate based on the TF equation (1).
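The sketch below, again illustrative rather than the TFRC code distributed with ns-2 (the feedback fields, constants and method names are assumptions), shows the overall shape of such an equation-driven sender: multiplicative increase per feedback interval during slow start, then a rate set directly from equation (1) using the receiver's loss-rate and RTT estimates.

```python
import math

K = 1.22  # constant from equation (1); 1.31 if delayed ACKs are assumed

def equation_rate(packet_size_bytes, rtt_s, loss_rate, k=K):
    # TCP-friendly rate from equation (1): k * M / (R * sqrt(l)), in bytes/s.
    return k * packet_size_bytes / (rtt_s * math.sqrt(loss_rate))

class EquationBasedSender:
    """Simplified TFRC-style sender driven by periodic receiver feedback."""

    def __init__(self, packet_size_bytes=100, initial_rate=1000.0):
        self.packet_size = packet_size_bytes  # bytes, as in Table 2
        self.rate = initial_rate              # bytes/s
        self.slow_start = True

    def on_feedback(self, rtt_s, loss_rate):
        """Called once per feedback interval with the receiver's estimates."""
        if self.slow_start and loss_rate == 0.0:
            # Slow start: roughly double the rate every RTT until a loss is seen.
            self.rate *= 2.0
        else:
            # After the first reported loss, set the rate from equation (1).
            self.slow_start = False
            self.rate = equation_rate(self.packet_size, rtt_s, loss_rate)
        return self.rate
```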

3 Experimental Design

3.1 Description of Experiments and Rationale

The main objective of this work is to compare the performance of RAP and TFRC in terms of their friendliness towards competing TCP traffic flows under various network conditions. Specifically, we investigate how the throughput of the flows is affected as a result of the rate adjustment process performed by the TF sources when competing with their TCP counterparts in different network scenarios. In doing this, we measure the throughput of each rate-based and TCP flow respectively and calculate the average bottleneck bandwidth share of each flow based on its average throughput. We determine the degree of friendliness (i.e. fairness) of the TF rate-based protocols by comparing their average bandwidth share against that of the TCP flows.

Simply stated, the friendliness ratio Fr can be expressed as:

    Fr = T / Tt        (2)

where T is the average throughput of the TF flows and Tt is the average throughput of the TCP flows. In addition, we need a way to interpret the results of our experiments. Using the value of Fr, we propose a simple method to characterise the friendliness metric. Since Fr is always inversely proportional to the percentage bandwidth share obtained by the TCP flows, we can use this percentage as an indicator for our purpose.

Table 1 shows our proposed characterisation scheme. From the table, we consider Fr = 1.00 as the excellent value, indicating that the bottleneck bandwidth is fairly shared among the competing TCP and TF flows. In this case both TCP and TF flows obtain about 50% of the bottleneck bandwidth. As the TCP bandwidth share decreases below this point, the TF flows obtain more bandwidth, resulting in unfriendliness of the TF flows towards TCP. This situation could continue until the TF flows monopolise almost all the available bandwidth. We regard the value Fr > 4.00 as very poor, or very TCP-unfriendly. Similarly, as the TCP flows obtain more bandwidth beyond this optimum 50-50 point, the value of Fr decreases to a point where the TCP flows capture all the available bandwidth. We consider the value Fr < 0.25 as very poor, or very TF-unfriendly.

Fr             Approx. TCP Bandwidth Share (%)   Characterization
> 4.00         Less than 20                      Very Poor
2.34 - 4.00    20 - 29                           Poor
1.51 - 2.33    30 - 39                           Unsatisfactory
1.04 - 1.50    40 - 49                           Satisfactory
0.97 - 1.03    ~ 50                              Excellent
0.69 - 0.96    51 - 59                           Satisfactory
0.44 - 0.68    60 - 69                           Unsatisfactory
0.25 - 0.43    70 - 79                           Poor
< 0.25         More than 80                      Very Poor

Table 1: Characterizing TCP-friendliness
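For reference, the friendliness ratio of equation (2) and the characterisation of Table 1 can be computed as in the following Python sketch (the function names and the handling of values that fall exactly on a band edge are our own choices).

```python
def friendliness_ratio(tf_throughputs, tcp_throughputs):
    """Fr from equation (2): average TF throughput over average TCP throughput."""
    t_tf = sum(tf_throughputs) / len(tf_throughputs)
    t_tcp = sum(tcp_throughputs) / len(tcp_throughputs)
    return t_tf / t_tcp

# Upper bounds of the Fr bands of Table 1, in ascending order.
FR_BANDS = [
    (0.25, "Very Poor"),          # TCP obtains more than 80% of the bandwidth
    (0.43, "Poor"),
    (0.68, "Unsatisfactory"),
    (0.96, "Satisfactory"),
    (1.03, "Excellent"),          # roughly a 50-50 split
    (1.50, "Satisfactory"),
    (2.33, "Unsatisfactory"),
    (4.00, "Poor"),
    (float("inf"), "Very Poor"),  # TCP obtains less than 20% of the bandwidth
]

def characterise(fr):
    for upper, label in FR_BANDS:
        if fr <= upper:
            return label

# Two flows sharing the 1.5 Mbps bottleneck almost equally rate as "Excellent".
print(characterise(friendliness_ratio([0.75e6], [0.75e6])))
```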

3.2 Scope of Comparison

As is generally known, simulations involving TCP normally have to deal with a wide range of variables, environments and implementation flavours (or variants). This causes difficulty in isolating a particular variable and studying its relation to a particular parameter, because of the inter-dependency among these variables. To minimise the related problems, we limit our scope of comparison to the following experiments:

- Router policy. In this set of experiments we use the Random Early Detection (RED) [9] policy as a comparison to the normal DropTail (First-In-First-Out/FIFO) policy. RED is known for its ability to evenly distribute losses across the competing flows and to avoid buffer overflow over a wide range. The unfairness due to the burstiness of TCP flows is eliminated by a RED router. In the related experiments, we explore the effect of using RED routers on the TCP-friendliness of these protocols.

- Bottleneck delay. A smaller bottleneck delay indicates a smaller round-trip time. Connections become more aggressive and achieve a larger share of bandwidth with a shorter RTT. It is also obvious that a TCP connection with a shorter delay can update its window size faster than one with a longer delay, and thus capture more bandwidth. A good TF protocol must respond to this condition appropriately.

- Bottleneck bandwidth. TCP uses the Additive Increase Multiplicative Decrease (AIMD) algorithm. At a smaller bottleneck bandwidth, TCP losses become more dominant, so TCP deviates from the AIMD algorithm, paving the way for non-TCP applications to get a bigger share of the bandwidth. Again, a good TF protocol must react to this condition so that it does not monopolise the available bandwidth.

- Loss rate. The use of equation (1) in controlling the sender's rate guarantees that at loss rates of up to 5%, a rate-based control protocol using the equation will still be TCP-friendly. RAP, however, does not explicitly use (1) to control its rate. In this set of experiments, we investigate the effect of increasing the loss rate on the performance of these protocols.

Evaluation of the TF protocols' behaviour includes comparing the average bandwidth share obtained by each protocol when competing with TCP under the different scenarios. For effective evaluation, we carefully choose the TCP implementation flavours that run against each rate-based protocol as the competing flow, as recommended in [2]. We compare the performance of the rate-based protocols against these TCP implementations. Our choice of TCP implementations is as follows:

- TCP Reno. Currently, most Internet traffic is Reno-based [22]. Due to this fact, we feel that it is important to evaluate the performance of the TF protocols against Reno. In addition, Reno includes fast recovery, in which the current congestion window is 'inflated' by the number of duplicate ACKs the TCP sender has received before receiving a new ACK. The constant update of the window size causes a periodic oscillation in the window size, which in turn affects the round-trip delay of the packets. This may consequently influence the way TCP Reno captures the available bandwidth, and may have an impact on its friendliness towards competing TF protocols. We evaluate TCP Reno against the TF protocols in the respective experiments.

- TCP with Selective Acknowledgement (SACK) [6]. TCP Sack is designed to overcome the problem of poor performance when multiple packets are lost from one window of data. It allows the TCP sender to intelligently retransmit only those segments that have been lost. In addition, it decouples the determination of which segment to transmit from the decision about when it is safe to re-send a packet. The retransmission behaviour of TCP Sack may affect the way the TF protocols adjust their transmission rate, which in turn affects the friendliness metric. We shall see the results of these experiments.
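As a compact summary of the scope defined above, the sketch below enumerates the baseline experiment matrix implied by this section and by Section 4: each TF protocol variant paired one-to-one with a TCP flavour under each router policy. The identifiers are ours and are not ns-2 agent names.

```python
from itertools import product

# Baseline combinations evaluated in Section 4 (names are ours, not ns-2 ones).
TF_PROTOCOLS = ["TFRC", "CG RAP", "FG RAP"]
TCP_FLAVOURS = ["Reno", "Sack"]
ROUTER_POLICIES = ["FIFO", "RED"]

experiments = [
    {"tf": tf, "tcp": tcp, "router": router}
    for tf, tcp, router in product(TF_PROTOCOLS, TCP_FLAVOURS, ROUTER_POLICIES)
]
# 12 baseline scenarios, before varying bottleneck delay, bandwidth and loss rate.
print(len(experiments))
```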


3.3 Simulation Scenario


We have used the simulator ns-2 [1] from the VINT project at U.C. Berkeley/LBNL to perform our simulation experiments. The simulator is event-driven and runs in a non-real-time fashion. In ns-2, the user defines arbitrary network topologies composed of nodes, routers, links and shared media. A rich set of protocol objects can then be attached to nodes, usually as agents. For this simulation, we use several flavours of TCP (Reno and Sack), as well as the simulated TF protocols. Correspondingly, the user may choose between various types of applications. In this simulation, we use the FTP application for the TCP agents and a constant bit rate (CBR) application, which uses the UDP transport protocol, for the TF sources. We configure the routers using DropTail (First-In-First-Out/FIFO) and Random Early Detection (RED) policies. Packet losses are simulated by packet drops at the overflowed router buffer. Figure 1 illustrates the simulated scenario. We use two sets of (n+1) competing sources, indexed from 0 to n. One set of sources acts as TF rate-based sources (either RAP or TFRC) transmitting TF traffic into the network, and the other set runs as TCP agents using TCP Reno or Sack. The TF source TFsn sends data to (and receives acknowledgements from) the receiver/sink agent TFrn. Similarly, the TCP source TCPsn sends data to (and receives acknowledgements from) the receiver agent TCPrn.

Figure 1: Simulation topology

3.4 Simulation Parameters

A fair comparison can only be achieved with a careful selection of simulation parameters. We have used similar parameter values for all the flows wherever possible. The intermediate routers are connected by a bottleneck link with an initial bandwidth of 1.5 Mbps and an initial link delay of 20 ms. The traffic from both sets of sources shares this bottleneck bandwidth. The TCP flow is an FTP session and has unlimited data to send. The side links connecting to the bottleneck link have a bandwidth of 10 Mbps with a 6 ms delay. The routers have a single output queue for each attached link and initially use the RED scheduling discipline; the second round of the simulation uses DropTail/FIFO scheduling as an alternative to RED. All simulations were run long enough to reach steady-state behaviour. Table 2 summarises the important simulation parameters used in these experiments.

Parameter                 Default Value
Packet size               100 bytes
Bottleneck bandwidth      1.5 Mbps
Bottleneck delay          20 ms
Sidelink bandwidth        10 Mbps
ACK size                  40 bytes
TCP timer granularity     100 ms
Simulation length         100 sec

Table 2: Simulation parameters
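The per-flow averages reported in Section 4 can be derived from the simulation output in a straightforward way. The sketch below assumes a simple list of (flow id, bytes received) records and the Table 2 values; it is illustrative only and does not reflect the actual ns-2 trace format or the post-processing scripts used.

```python
from collections import defaultdict

SIM_LENGTH_S = 100.0    # simulation length (Table 2)
BOTTLENECK_BPS = 1.5e6  # bottleneck bandwidth (Table 2)

def average_stats(records):
    """records: iterable of (flow_id, bytes_received) tuples for one run.
    Returns {flow_id: (throughput_bps, bandwidth_share_percent)}."""
    received = defaultdict(int)
    for flow_id, nbytes in records:
        received[flow_id] += nbytes
    stats = {}
    for flow_id, nbytes in received.items():
        throughput_bps = nbytes * 8 / SIM_LENGTH_S
        stats[flow_id] = (throughput_bps, 100.0 * throughput_bps / BOTTLENECK_BPS)
    return stats

# e.g. average_stats([("tf0", 9_000_000), ("tcp0", 9_500_000)])
```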

4 Results of Experiments

We have conducted a substantial number of experiments to evaluate the TCP-friendliness of the TF protocols. The following subsections present our results for these different experiments.

4.1 Router Policies (FIFO versus RED)

4.1.1 Using DropTail/First-In-First-Out (FIFO) Router

In this set of experiments, we compare the performance of the TF protocols when using the FIFO policy at the intermediate routers. We believe that this comparison is important since FIFO is widely used and implemented in routers [3]. Table 3 presents the results. It is obvious from these results that the TF protocols are considerably unfriendly in all the experiments using FIFO routers. Nevertheless, the TF protocols are more friendly with TCP Sack than with TCP Reno. TCP Sack obtains a larger bandwidth share in all the cases, which therefore leads to higher throughput. These results can be attributed to the features of TCP Sack itself. In TCP Sack, the receiving agent sends back selective ACK packets informing the sender of the data that has been received; the sender can then retransmit only the missing data segments. As a result, a TCP Sack sender can obtain slightly more bandwidth for its flow. This is not the case with TCP Reno: with normal acknowledgements, a TCP Reno sender can only learn about a single lost packet per round-trip time. This issue is important since RAP and TFRC both use packet loss as an indicator for deciding their transmission rates.

We also observe that the fine grain adaptation in RAP (FG RAP) does not help the protocol obtain more throughput when coexisting with TCP Reno. However, when coexisting with Sack, the friendliness of FG RAP improves slightly, indicating that the TCP flows receive higher throughput. We can say that the feedback signal based on the ratio of the short-term to the long-term exponential moving average of the RTT is not useful when the coexisting TCP flow does not use selective ACKs.

Experiment         Average Fr   Comment
TFRC vs. Reno      2.31         Unsatisfactory
CG RAP vs. Reno    6.46         Very Poor
FG RAP vs. Reno    8.64         Very Poor
TFRC vs. Sack      1.78         Unsatisfactory
CG RAP vs. Sack    4.79         Very Poor
FG RAP vs. Sack    4.45         Very Poor

Table 3: Friendliness of TF flows coexisting with TCP with FIFO router

Figure 2 illustrates the performance of CG RAP and TFRC in terms of bandwidth share when coexisting with Reno. Notice that with an increasing number of flows, TCP Reno obtains a smaller bandwidth share in the case of CG RAP than in the case of TFRC, where 50 TCP Reno flows are still capable of securing about 1% of the bottleneck bandwidth. Thus, we can conclude that TFRC is the better protocol compared to RAP when coexisting with TCP Reno using a FIFO router. This result is relevant to the present situation, where TCP Reno is predominantly used in the Internet [22].


Figure 2: Bandwidth share of TF flows coexisting with TCP flows using FIFO router

4.1.2 Using Random Early Detection (RED) Router

In this set of experiments, we evaluate the performance of the TF protocols versus TCP Reno and TCP Sack with RED as the router policy. Table 4 shows the results of these experiments. When using the RED policy at the intermediate routers, the TF protocols become more friendly towards competing TCP Reno flows. Note that the friendliness index for TFRC is less than 1.00, indicating that the average throughput of the TCP Reno flows is slightly higher than that of TFRC. The higher throughput can possibly result from the ability of TCP Reno to acquire a larger bandwidth share.

Experiment         Average Fr   Comment
TFRC vs. Reno      0.77         Satisfactory
CG RAP vs. Reno    3.58         Poor
FG RAP vs. Reno    3.92         Poor
TFRC vs. Sack      0.45         Unsatisfactory
CG RAP vs. Sack    3.95         Poor
FG RAP vs. Sack    6.32         Very Poor

Table 4: Friendliness of TF flows coexisting with TCP with RED router

Figure 3(a) supports our assumption. Figure 3(b) shows the bandwidth trace of one TFRC flow and one TCP Reno flow over a period of 100 seconds. It reveals that the bottleneck bandwidth is fairly shared at around 50%. One possible explanation is the ability of RED routers to absorb the burstiness of TCP Reno, making it more aggressive in obtaining bandwidth. The performance of RAP against TCP Reno is also improved when using RED routers, even though the results are still considerably unfavourable.

The friendliness of the TF protocols is generally better when coexisting with TCP Sack than with TCP Reno, except for the case of FG RAP versus TCP Sack. The average throughput of the TFRC flows is slightly lower, indicating that the TCP Sack flows dominate the bandwidth throughout the connection (at around 60%). This can possibly be attributed to the aggressiveness of TCP Sack due to its above-mentioned retransmission feature, as well as to the ability of RED to absorb TCP burstiness. For this reason, TFRC senders cannot increase their transmission rate to secure a better bandwidth share. In the case of RAP versus TCP Sack, the performance of CG RAP is slightly better with RED. However, the use of RED leaves FG RAP unable to properly capture the short-term trends in congestion under heavy load. Consequently, FG RAP is not able to respond to the distress of TCP Sack, making it more aggressive in seizing bandwidth, which leads to its higher throughput.

Figure 3: Bandwidth share of TF flows coexisting with TCP flows using RED router

4.1.3 FIFO and RED Routers: Summary of Results

In summary, the performance of the TFRC protocol is better than that of RAP regardless of the router policy implemented at the intermediate routers. However, the use of RED makes TFRC even more friendly. When using FIFO routers, RAP works better with TCP Sack than with TCP Reno, but still falls well short of TFRC. Figure 4 presents the overall results of TF protocol friendliness when coexisting with TCP Reno.

Figure 4: Summary of TF protocol friendliness when coexisting with TCP Reno

So far, we have seen that TFRC and CG RAP coexisting with TCP Reno are the most TCP-friendly combinations among all the experiments. For the rest of this paper, we shall refer to CG RAP simply as RAP and to TCP Reno simply as TCP, unless explicitly stated otherwise. We also use RED as the default router policy in the subsequent experiments, unless explicitly stated otherwise.

4.2 Bottleneck Delay

The effects of changing the bottleneck delay on the friendliness ratio of RAP and TFRC are clearly shown in Table 5.

The performance of the TF protocols is not improved by decreasing the bottleneck delay; indeed, it becomes worse. At a bottleneck delay of 20 milliseconds, the TFRC flows perform satisfactorily against their TCP counterparts. At the same delay, the RAP flows also perform quite satisfactorily, with the average throughput of both protocols being higher than that of TCP Reno. However, as the bottleneck delay is decreased to 10 milliseconds, the average throughput of the RAP flows increases by more than 10%, making them more unfriendly towards TCP Reno. The TCP flows also become more aggressive and leave the TFRC flows unable to obtain a favourable bandwidth share; hence the TFRC flows obtain a lower average throughput than TCP.

The situation can be explained simply. A smaller bottleneck delay indicates a smaller round-trip time. Both the TCP and the TF connections become more aggressive and achieve a larger share of the bandwidth with a shorter RTT. It is also obvious that a TCP connection with a shorter delay can update its window size faster than one with a longer delay, and thus capture more bandwidth. This affects the friendliness of the TF protocols.

Experiment    20 ms    10 ms
RAP           1.84     2.88
TFRC          1.21     0.37

Table 5: Effect of Reducing Bottleneck Delay on TCP-Friendliness

4.3 Bottleneck Bandwidth

The results of changing the bottleneck bandwidth are displayed in Table 6. In this experiment, the bottleneck delay is kept constant. When the bottleneck bandwidth is doubled, the TF flows become more TCP-friendly. At a smaller bandwidth, TCP losses become more dominant (as packets are dropped at the bottleneck router), which causes TCP to deviate from the AIMD algorithm and in turn lets the TF flows obtain a bigger share of the bandwidth.

With the increase in the bottleneck bandwidth, the TCP flow becomes more stable and conforms to AIMD, possibly due to fewer losses (packet drops) at the router. Consequently, the TF flows become more adaptive and thus more TCP-friendly. Again, this behaviour is in agreement with equation (1) with regard to the loss probability.

Experiment    1.5 Mbps    3.0 Mbps
RAP           1.84        1.40
TFRC          1.21        1.13

Table 6: Effect of Increasing Bottleneck Bandwidth on TCP-Friendliness

4.4 Loss Rate

In this set of experiments, we test the effect of increasing the loss rate on the TF protocols. As mentioned in Section 3.1, the use of equation (1) in controlling the sender's rate guarantees that at loss rates of up to 5%, a rate-based control protocol using the equation will still be TCP-friendly. Table 7 shows the results of our simulation. The results clearly show that RAP, which does not explicitly use (1) to control its rate, begins to perform unsatisfactorily when the loss rate increases to 5%. With the same increase, TFRC is still very TCP-friendly. However, TFRC was unable to compete with the aggressiveness of TCP at loss rates higher than 5%. This situation has been verified earlier by simulations and is clearly explained in [14].

Experiment    1%      5%      15%
RAP           1.28    1.79    4.58
TFRC          0.95    1.02    0.47

Table 7: Effect of Increasing Loss Rate on RAP and TFRC

5 Conclusions

In this paper, we have presented a performance comparison of two TF rate-based adaptation protocols, namely RAP and TFRC. These protocols achieve friendliness by changing their sending rate based on feedback signals from the receiver. Our comparison reveals that both protocols are able to achieve throughput that is close to the throughput of a TCP connection travelling over the same network path. However, our experimental results show that the equation-based TFRC protocol is more friendly and robust in most of our experiments, as compared to RAP.

Our experimental results show that both protocols perform better in the presence of RED routers. This is mainly due to the ability of the RED router to evenly distribute losses across the flows and avoid buffer overflow over a wide range. We also demonstrate the effect of varying the bottleneck delay and the bottleneck link bandwidth on the friendliness of the TF protocols. The results also show that these protocols perform particularly well at loss rates below 5%; both protocols are no longer friendly at rates higher than 5%.

Our main goal for these experiments was to compare these two protocols and to find which is the best rate-based protocol to support our future work in end-to-end quality of service support for streaming multimedia applications in the Internet. We have reached the verdict that TFRC is the better of the two.

6 Acknowledgements

The authors wish to thank Karim Djemame, Riri Fitri Sari and Somnuk Puangpronpitag for their useful discussions and contributions during the preparation of this paper.

References

[1] ns (Network Simulator), 1999. URL http://www.mash.cs.berkeley.edu/ns/.

[2] M. Allman and A. Falk. On the Effective Evaluation of TCP. ACM Computer Communication Review, 29(5), October 1999.

[3] S. Bhattacharjee. Active Networks: Architectures, Composition, and Applications. PhD thesis, Georgia Institute of Technology, 1999.

[4] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin. Resource Reservation Protocol (RSVP) Version 1 Functional Specification. RFC 2205, September 1997.

[5] D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, and L. Zhang. Recommendations on Queue Management and Congestion Avoidance in the Internet. RFC 2309, April 1998.

[6] S. Floyd and K. Fall. Simulation-based Comparisons of Tahoe, Reno, and SACK TCP. Computer Communication Review, 26(3):5–21, July 1996.

[7] S. Floyd and K. Fall. Promoting the Use of End-to-End Congestion Control in the Internet. IEEE/ACM Transactions on Networking, May 1999.

[8] S. Floyd, M. Handley, J. Padhye, and J. Widmer. Equation-Based Congestion Control for Unicast Applications. Technical Report TR-00-003, International Computer Science Institute, March 2000.

[9] S. Floyd and V. Jacobson. Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking, 1:397–413, August 1993.

[10] M. Handley, J. Padhye, and S. Floyd. TCP-Friendly Congestion Control Protocol. http://www.aciri.org/floyd/friendly.html, August 1999.

[11] S. Jacobs and A. Eleftheriadis. Streaming Video using TCP Flow Control and Dynamic Rate Shaping. Journal of Visual Communication and Image Representation, Special Issue on Image Technology for World-Wide-Web Applications, 9(3):211–222, September 1998.

[12] M. Kara and M.A. Rahin. Comparison of Service Disciplines in ATM Networks Using a Unified QoS Metric. In Proceedings of the 16th International Teletraffic Congress (ITC-16), volume 3B of Teletraffic Science and Engineering Series, pages 1137–1146, Edinburgh, UK, June 1999. Elsevier Publishers.

[13] K. Kim and K. Nahrstedt. QoS Translation and Admission Control for MPEG Video. In A. Campbell and K. Nahrstedt, editors, Proc. of 5th International Workshop on Quality of Service (IWQoS'97), New York, May 1997.

[14] J. Mahdavi and S. Floyd. TCP-Friendly Unicast Rate-Based Flow Control. Technical note sent to the end2end-interest mailing list, January 1997.

[15] M. Mathis, J. Semke, J. Mahdavi, and T. Ott. The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm. ACM Computer Communication Review, 27:67–82, July 1997.

[16] K. Nichols, S. Blake, F. Baker, and D. Black. Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers. RFC 2474, December 1998. ftp://ftp.isi.edu/in-notes/rfc2474.txt.

[17] J. Padhye. Towards a Comprehensive Congestion Control Framework for Continuous Media Flows in Best Effort Networks. PhD thesis, University of Massachusetts Amherst, March 2000.

[18] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling TCP Throughput: A Simple Model and its Empirical Validation. ACM Computer Communication Review, 28:303–314, September 1998.

[19] J. Padhye, J. Kurose, D. Towsley, and R. Koodli. A Model Based TCP-Friendly Rate Control Protocol. In Proceedings of IEEE NOSSDAV'99, Basking Ridge, New Jersey, June 1999.

[20] R. Rejaie, M. Handley, and D. Estrin. RAP: An End-to-end Rate-based Congestion Control Mechanism for Realtime Streams in the Internet. In Proceedings of IEEE Infocom'99, New York, NY, March 1999. http://netweb.usc.edu/reza/pub.html.

[21] D. Sisalem and H. Schulzrinne. The Loss-Delay Adjustment Algorithm: A TCP-friendly Adaptation Scheme. In Proc. of International Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV), Cambridge, England, July 1998.

[22] W. Stevens. TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms. RFC 2001, January 1997. http://src.doc.ic.ac.uk/computing/internet/rfc/rfc2001.txt.

[23] H. Zhang and S. Keshav. Comparison of Rate-based Service Disciplines. In Proceedings of ACM SIGCOMM'91, Zurich, 1991.