Wave & Wait Protocol (WWP): High Throughput & Low Energy for Mobile IP-Devices

V. Tsaoussidis, A. Lahanas, H. Badr
Department of Computer Science
State University of New York at Stony Brook
New York 11794-4400

Abstract

The work we present here reports on the further development and testing of an experimental transport-level protocol. The protocol runs on top of IP and is intended for mobile, wireless devices. Its central concern is to conserve battery-powered energy used for transmission while maintaining high levels of data throughput. It can be adjusted to achieve higher throughput at the expense of less energy-saving, in accordance with the application's needs. It conserves transmission effort by adjusting data transmission below the perceived network congestion level, thereby minimizing the need for duplicate data retransmission caused by congested routers losing packets, and so on. It achieves high throughput by implementing a network probing mechanism that enables it to investigate instances in which prevailing throughput conditions appear to be deteriorating before committing to further data transmission. The probing mechanism also permits it to continuously monitor the network during high-congestion periods in order to locate windows of sufficiently low congestion to exploit for transmission.

1. Introduction

Mobile communication devices are increasingly dominating today's market. Many applications are being developed for mobile stations, which require very specific protocol support depending on the characteristics of both application (e.g., e-mail, web, multimedia) and mobile station (e.g., laptop, phone, handheld). For the environments in which such stations operate, throughput is not the sole dominant performance criterion. Energy-saving becomes a crucial factor as well. The protocol presented here is designed for such battery-powered devices.


Energy-saving protocols at the transport level can take advantage of a novel approach, also used by other application-specific protocols: tradeoffs between different QoS characteristics and device- or application-specific characteristics. Familiar examples are compression mechanisms, which trade application processing time for bandwidth, and Forward Error Correction (FEC), used in high-speed networks, which trades bandwidth for reliability. Retransmission-based error correction - in contrast to, for example, FEC - is an appropriate approach for bandwidth-limited communications (e.g., low-frequency communications with handhelds). It becomes a necessary mechanism for reliable protocols, especially in the case of unreliable networks (e.g., IP networks). Another example of a transport protocol that exhibits tradeoff characteristics is AOTP [10], which trades reliability for speed using a receiver-based approach to decide whether or not retransmission is required in order to conform to the user-prescribed QoS level. Similar approaches, but centered on sender-based transport services, are described in [2, 3, 4, 5, 6].

3COM has recently designed a set of protocols to support messaging over TCP/IP for handheld devices using Palm OS [13]. The protocol attempts to save energy by limiting the transmitted data: it transmits 500 bytes during each communication phase, under the assumption that the data of interest to the mobile user will be included within this limit. A user wishing to receive more data re-establishes communication for the next 500 bytes. The protocol design is fairly simple but practical: it trades off the amount of transmitted data in order to save battery power, and it does so at the expense of information. The approach takes into account the mobile user's limitation of time, as well as the fact that newer pieces of information are normally more significant than older ones.


The Wave & Wait Protocol (WWP) discussed here is an experimental transport-level protocol that uses energy-saving versus time as its integral trade-off. Energy-saving and time are two resources of varying significance, depending on the application and circumstances. The protocol can be calibrated with respect to the amount of gain to be achieved in the one at the expense of the other, in response to application requirements, the overall network environment, and so on. The test results we present show that it is capable of achieving significantly higher throughput than TCP with less energy expenditure.

2. Protocol Description

The work we present reports on the further development and testing of WWP. An early version of the protocol is described in [11]. WWP runs on top of IP and is intended for mobile, wireless devices. Its central concern is to conserve battery-powered energy used for transmission while maintaining high levels of data throughput. It can be adjusted to achieve higher throughput at the expense of less energy-saving, in accordance with the application's needs. It conserves transmission effort by adjusting data transmission below the perceived network congestion level, thereby minimizing the need for duplicate data retransmission caused by congested routers losing packets, and so on. It achieves high throughput by implementing a network probing mechanism that enables it to investigate instances in which prevailing throughput conditions appear to be deteriorating before committing to further data transmission. The probing mechanism also permits it to continuously monitor the network during high-congestion periods in order to detect and exploit windows of opportunity of improved conditions, during which aggressive transmission of data can be successfully undertaken as conditions alter.

A way to save energy is to avoid retransmissions, unnecessary headers, and redundant data. The less time expended on transmissions, the better the energy-saving. For example, transmission of packets over a congested network will cause packets to be dropped and, consequently, a reliable protocol will initiate a retransmission mechanism. Instead, our protocol first "probes" the network to estimate prevailing levels of congestion risk and adjusts its transmission accordingly.

It transmits aggressively when existing conditions appear to be favorable, and backs off as congestion is detected, thereby attempting to take maximum advantage of favorable conditions without wasting energy on transmissions that are unlikely to be successful as conditions deteriorate.

2.1 Protocol Overview

In this section, we present a broad overview of the protocol. Details may be followed up in [11], though it should be noted that, since the protocol is in constant evolution, not every detail presented there corresponds exactly to the version of the protocol we report on in this paper.

Connection is first established using a six-way handshake. Apart from connection set-up, this six-way exchange is also used to determine current congestion conditions in the network. Using the established connection, a sender sends a "wave" to the receiver consisting of a number of fixed-sized data segments, and then waits for a response. The number of segments in a wave is set according to the current "wave level", which is determined by the receiver in line with the estimated prevailing congestion level and is communicated to the sender. The lower the perceived congestion risk in the network, the higher the wave level and the more segments a wave comprises.

The receiver uses the segments of an arriving wave to estimate network congestion, then sends just one Negative Selective ACK (N-S_ACK) for the entire wave, which also specifies the level for the next wave. The N-S_ACK is a NACK that identifies all lost segments that the receiver has not received up to that point. The sender has to retransmit these as part of the next wave, together with new segments, within the limitation of the wave size determined by the new wave level. This next wave starts with those segments that need retransmission, which are counted as part of it. Only after these have been sent will the sender, wave size permitting, continue with new segments.

If the receiver's N-S_ACK specifies zero as the next wave level, this means that it deems the network to be too congested for energy-conserving transmission to be worthwhile at present (i.e., the receiver deems that too many segments might be lost, necessitating too many retransmissions). The sender would then periodically probe the receiver. These "probe cycles" are used by the receiver to continuously monitor the network's congestion level. When conditions improve sufficiently, the sender resumes transmission at some appropriate wave level specified by the receiver.
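To make the wave-assembly rule just described concrete - segments reported lost in the N-S_ACK go out first, then new data, capped at the wave size dictated by the new level - the following minimal sender-side sketch may help. The function and type names are ours, and the W(i) = 12 x i sizing anticipates the settings given in Subsection 2.2; none of this is taken verbatim from the WWP implementation.

________________________________________
/* Minimal sketch of sender-side wave composition, assuming the rules
 * described above: segments reported lost in the last N-S_ACK are queued
 * first, new segments fill the remainder, and the total is capped at the
 * wave size W(level) chosen by the receiver.  Names are illustrative.    */
#include <stdio.h>

#define MAX_WAVE 36                        /* W(3) = 12 x 3 segments       */
typedef unsigned int seg_id_t;

static int wave_size(int level) { return 12 * level; }      /* W(i) = 12*i */

/* Fill 'wave' with up to W(level) segment ids: retransmissions first.     */
static int compose_wave(int level,
                        const seg_id_t *lost, int n_lost,
                        seg_id_t next_new, int n_new_avail,
                        seg_id_t *wave)
{
    int cap = wave_size(level), n = 0;

    for (int i = 0; i < n_lost && n < cap; i++)   /* lost segments first   */
        wave[n++] = lost[i];
    while (n < cap && n_new_avail-- > 0)          /* then fresh data       */
        wave[n++] = next_new++;
    return n;                                     /* segments in this wave */
}

int main(void)
{
    seg_id_t lost[] = { 17, 19, 23 };             /* reported by N-S_ACK   */
    seg_id_t wave[MAX_WAVE];
    int n = compose_wave(2, lost, 3, 25, 100, wave);  /* level 2 -> 24 max */

    printf("wave of %d segments, first ids: %u %u %u %u\n",
           n, wave[0], wave[1], wave[2], wave[3]);
    return 0;
}
________________________________________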

Two points are worth highlighting in the description above. Firstly, when the sender transmits the last segment of a wave, it sets a timeout SEND_T. If it does not receive the corresponding N-S_ACK for the wave within SEND_T, it initiates a probe cycle. This situation would occur if the N-S_ACK were lost or delayed, for example, or if it were never sent in the first place because the receiver is still waiting for missing segments from the wave to arrive. Secondly, once a probe cycle is initiated, it acts as a sort of "checkpoint" for both sender and receiver. For example, were a delayed N-S_ACK for the wave to arrive at the sender after the probe cycle is initiated, it would be ignored: the sender will proceed with further wave transmissions only in response to the results of the probe cycle (see Subsection 2.3 below). Similarly, if the receiver had not yet sent the N-S_ACK for the wave when it detects that a probe cycle has been initiated by the sender, it will proceed to respond to the probe cycle (which will eventually result in the receiver sending a different, distinct N-S_ACK at the end of the cycle), and will not attempt to send an N-S_ACK for the wave per se, even if all the missing segments of the wave were to subsequently arrive during the progress of the probe cycle.

Connection can be terminated at the initiative of either side, and involves a separate two-way handshake by each side, as in TCP. The protocol provides a reliable, connection-oriented, end-to-end service: on the receiver side, segments are delivered to the higher-level protocol in order, with no duplicates and no segments missing.

At the core of the protocol's throughput and energy-saving performance is its ability to monitor network conditions and rapidly adjust its transmission strategy as these conditions continuously change for the better or worse. Two mechanisms are central to providing the protocol with this capability. Firstly, the transmission of a wave is used to monitor current network performance, and the next wave level is set in conformity with what is thereby detected about prevailing conditions.

Secondly, probe cycles constitute an "energy-conserving" mechanism for investigating instances in which good prevailing conditions appear to be deteriorating, before committing to further data transmission, and for continuously monitoring the network during high-congestion periods in order to detect and exploit windows of opportunity of improved conditions, during which aggressive transmission of data can be successfully undertaken as conditions alter. Probe cycles are "energy-conserving" in contrast to the energy that would otherwise be expended on the transmission of data segments that do not have a good chance of getting through during high-congestion periods. Because of the centrality of waves and probing to the protocol's innovative approach, we proceed to present each of these in turn in some detail. Details on other aspects of the protocol (e.g., connection establishment, segment formats and types, etc.) can be found in [11].

2.2 Wave Mechanism

The protocol first groups data segments into waves on the sending side and then transmits the segments of a wave one after the other, rather than simply sending separate segments individually when it can. The reason is that, in order for the receiver to effectively estimate network congestion based on the successive segments reaching it, it needs some knowledge about the sender's pattern of transmission of these segments. This knowledge is (implicitly) provided by the fact that waves at a given level are made up of a predetermined, fixed number of data segments of fixed size, and the sender transmits the segments of the wave one after the other with no pause between one segment and the next.

While data segments are of fixed size, in any given implementation of the protocol the segment size can be set so as to optimize the average number of bytes that need retransmission, in line with the network's overall characteristics of burst errors, and so on. Similarly, the number of wave levels, and the fixed number of segments comprising each wave level, can also be set with an eye to the application's message sizes, as well as the protocol's own internal need for wave "granularity" matching the network's range of congestion behavior (i.e., small waves containing few segments for transmission under significant congestion, through to large waves containing many segments in order to exploit opportunities when congestion is low).


The receiver attempts to estimate prevailing congestion conditions by monitoring the throughput of the current wave and setting the level of the next wave accordingly. A wave at level i (i >= 0) is composed of a fixed number W(i) of data segments. For i = 0, W(0) is defined to be 0. A data segment is composed of a 6-byte header and a fixed-sized data payload [11]. Once the first segment of a new wave reaches the receiver, it is easy for the receiver, given the current wave level i (which is carried in the segment header), to calculate how long it would take the rest of the wave to reach it if the network were relatively uncongested, using a "baseline" throughput of BT KBytes per second for the uncongested network. The time thus calculated is the "baseline time". The receiver measures how long it actually takes for the remaining segments in the wave to arrive. It then uses the baseline and measured times for the wave to set the level of the next wave.

The baseline throughput BT is a protocol parameter whose value can be determined empirically for the network on which the protocol is running, so as to maximize protocol performance in line with the application's throughput needs and the amount of energy-saving that it is willing to trade off in return for higher throughput. BT would typically be set at some fraction of the (average) maximum effective throughput the network is capable of under congestion-free conditions. The exact value would, in general, depend on how stable general network conditions are, as well as on the inherent throughput capabilities of the network. The closer the value set for BT is to the network's maximum effective throughput, the less aggressive the protocol will be in attempting transmissions, since it will attempt transmissions at higher wave levels only when current network throughput gets close to the maximum possible, and there will be a strong bias towards selecting lower wave levels, including wave level zero, as the current throughput falls below that maximum. In a relatively stable and uncongested network environment, it would probably pay off to be somewhat more aggressive, and so BT should be set lower. In a relatively congested network with rapidly varying conditions, BT should be set higher to make the protocol's behavior more conservative and less aggressive.

However, a wide range of wave levels would provide more flexibility than a single parameter such as BT could, by itself, be capable of providing, permitting the protocol to automatically adjust its behavior across a broader variety of network environments, as well as to accommodate more variability in the operational characteristics of any single environment.

Our design currently calls for four wave levels, i = 0, 1, 2, 3. In the implementation of this design we set the number of segments in a wave W(i) = 12 x i for i = 0, 1, 2, 3; the fixed-sized segment payload was set to 1 KByte. The following simple algorithm was used by the receiver to set the next wave level. Suppose the current wave is at level i, i = 1, 2, 3.
________________________________________

Let B be the baseline time for the current wave (the time its remaining
segments would take to arrive at the baseline throughput BT), and let D be
the time the receiver actually measured for them to arrive.

    if D <= B            then next wave level := 3
    else if D <= 2 x B   then next wave level := 2
    else if D <= 3 x B   then next wave level := 1
    else                      next wave level := 0
________________________________________

The algorithm above essentially implies that when the receiver sets the new wave level to k, k = 1, 2, 3, it is estimating the current network throughput to be no worse than approximately a fraction 1/(4-k) of the baseline throughput value of BT KBytes per second (and no better than approximately a fraction 1/(3-k), for k = 1 or 2). The number of segments in the new wave is then adjusted proportionately. If the throughput appears to be less than 1/3 of the baseline throughput, we go to level 0, deeming it better to pause for a while than risk expending energy transmitting even a small wave that might not have a sufficiently good chance of getting through undamaged. In the event the receiver sets the next wave level at 0, the sender will immediately probe the receiver. The receiver uses the RTTs (Round Trip Times) measured during the probe cycle to set the new wave level in the N-S_ACK segment it sends to the sender at the end of the cycle (see Subsection 2.3 below).
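The level-selection rule can also be stated in terms of the measured throughput of the wave, which may be easier to follow in code. The short sketch below is a minimal, illustrative restatement in C, using the 1-KByte payload and W(i) = 12 x i settings above and, for concreteness, the BT value of 40 KBytes/second quoted in Section 3; the function and macro names are ours, not the WWP implementation's.

________________________________________
/* Minimal restatement of the receiver's next-wave-level rule described
 * above: compare the throughput measured over the current wave against
 * BT, BT/2 and BT/3.  Names and the example in main() are illustrative. */
#include <stdio.h>

#define SEGMENT_PAYLOAD 1024.0   /* bytes per data segment                */
#define BT              40960.0  /* baseline throughput, bytes per second */

static int wave_segments(int level) { return 12 * level; }  /* W(i) = 12*i */

/* elapsed: seconds measured by the receiver for the wave to arrive.      */
static int next_wave_level(int current_level, double elapsed)
{
    double bytes    = wave_segments(current_level) * SEGMENT_PAYLOAD;
    double measured = bytes / elapsed;               /* bytes per second  */

    if (measured >= BT)       return 3;              /* at least BT       */
    if (measured >= BT / 2.0) return 2;              /* at least BT/2     */
    if (measured >= BT / 3.0) return 1;              /* at least BT/3     */
    return 0;                                        /* pause and probe   */
}

int main(void)
{
    /* A level-3 wave (36 KBytes) arriving in 1.2 s gives about 30 KBytes/s,
     * less than BT but more than BT/2, so the next wave is set to level 2. */
    printf("next wave level = %d\n", next_wave_level(3, 1.2));
    return 0;
}
________________________________________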


The sender does not start transmitting segments until it has a sufficient number to make up a complete wave. When it receives an N-S_ACK setting the wave level, and it does not have sufficient old (needing retransmission) and new data to fill the specified number of segments in the wave, it will transmit the segments it has at the highest wave level for which it has enough segments. Further details can be found in [11].

2.3 Probing Mechanism

The wave mechanism described above enables the receiver to estimate the current level of congestion and the associated delay for data segment delivery. This information is implicitly and indirectly summarized in the level selected for the next wave. However, an error that causes data segments from a wave to be lost might be an indication either of a purely transitory problem on the transmission line (such as a burst error, for example), or of deteriorating congestion conditions which are likely to be of longer duration. It should be noted before proceeding further that the probing mechanism described here is significantly different from that reported on in [11].

A probe cycle enables the receiver to measure two successive RTTs from the network, thereby providing it with sufficient information to determine whether the error was purely transitory or an indication of a longer-lasting congestion build-up. Operating even at high wave levels, and pausing to check with probes in the event of unduly delayed, and possibly lost, data segments, the receiver can decide whether to have data transmissions continue at an appropriately high wave level, adjust the wave level downwards, or even temporarily stop data transmission altogether. The approach takes full advantage of the fact that almost-current congestion conditions have already been estimated by the receiver, and are summarized by the current wave level. The receiver uses the absolute values of the two RTTs measured from the probe cycle, as well as the difference between these two RTTs, to set the next wave level in terms of an incremental change from the current level. The new wave level is signaled to the sender by means of an N-S_ACK segment that terminates the probe cycle.
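As noted below, the full set of decision-making rules for this step has not been standardized, so the following sketch only illustrates the general idea: two RTT samples are taken during the probe cycle, their difference (jitter) is examined, and the current wave level is nudged up, held at zero, or cut back. The thresholds, names and the plus/minus one adjustment policy are assumptions of ours for illustration, not the calibrated rules used in our tests.

________________________________________
/* Illustrative receiver-side use of the two probe-cycle RTTs described
 * above.  rtt1 is measured between PR1_ACK and PROBE2, rtt2 between
 * PR2_ACK and PROBE3 (seconds).  GOOD_RTT, MAX_JITTER and the +/-1
 * adjustment policy are assumptions for illustration only.              */
#include <math.h>
#include <stdio.h>

#define GOOD_RTT   0.10    /* RTT considered indicative of low congestion */
#define MAX_JITTER 0.02    /* jitter suggesting competing bursty traffic  */

static int clamp_level(int level)
{
    if (level < 0) return 0;
    if (level > 3) return 3;
    return level;
}

/* Returns the wave level to advertise in the N-S_ACK ending the cycle.   */
static int level_after_probe(int current_level, double rtt1, double rtt2)
{
    double jitter = fabs(rtt1 - rtt2);

    if (rtt1 <= GOOD_RTT && rtt2 <= GOOD_RTT && jitter <= MAX_JITTER)
        return clamp_level(current_level + 1);  /* transitory error: resume */
    if (jitter > MAX_JITTER || rtt2 > 3.0 * GOOD_RTT)
        return 0;                               /* congestion: keep probing */
    return clamp_level(current_level - 1);      /* back off one level       */
}

int main(void)
{
    printf("%d\n", level_after_probe(2, 0.08, 0.09));  /* -> 3: conditions good   */
    printf("%d\n", level_after_probe(2, 0.08, 0.35));  /* -> 0: jitter/congestion */
    return 0;
}
________________________________________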


A probe cycle starts when the N-S_ACK segment for a data wave just transmitted fails to arrive (see Figure 1). This could indicate incomplete delivery of (or undue delay of some segments from) the wave, or the delay/loss of the wave's corresponding N-S_ACK. The SEND_T timer at the sender side expires and a PROBE1 segment is transmitted. The receiver responds with a PR1_ACK, upon receipt of which the sender transmits a PROBE2. The receiver acknowledges this second probe with a PR2_ACK and enters a state where it waits for a PROBE3. It makes an RTT measurement based on the time delay between sending the PR1_ACK and receiving the PROBE2. Upon receipt of PROBE3 it makes the second RTT measurement, based on the time delay between the PR2_ACK and the PROBE3. The receiver then determines the level for the next wave and informs the sender by means of an N-S_ACK.

In setting the next wave level, the receiver takes into account the network conditions that had been detected at the time the current wave level was determined, as well as the values of the two RTT measurements and the delay variation (jitter) between them. Other applications with bursty flows that are currently sharing the links along which our application is being routed will induce measurable jitter between the two RTTs. Conversely, an error-free environment, or links that are being shared with normalized/smoothed-out data flows, will induce no jitter. The receiver can take all this into consideration in setting the level for the next wave. The full set of decision-making rules has not yet been completely standardized, but a set of rules has been developed, implemented, calibrated and used for our testing environment.

In the event that the PROBE1 or its acknowledgement is lost, the sender, using a PROBE_T timeout, retransmits the PROBE1. The receiver reinitializes its measurement timer upon receipt of the retransmitted PROBE1 in order to take the correct RTT measurement. Distinct sequence numbers are used to distinguish between different instances of multiply-retransmitted PROBE1 segments. A PR1_ACK carries the same sequence number as the corresponding PROBE1 instance that it is acknowledging. This sequence number is echoed back in the corresponding PROBE2.

A similar process takes place with respect to: (i) using PROBE_T timeouts on the sender side for retransmissions of PROBE2 and PROBE3 segments in the event that either or both, or their corresponding PR2_ACKs and N-S_ACKs, are lost; and (ii) using distinct sequence numbers for retransmissions, which are echoed back and forth in the various exchanges between sender and receiver so that associated segments can be correctly identified and paired off, and RTTs correctly measured.

The receiver moves to the ESTAB state after sending the N-S_ACK that should terminate the probe cycle. In this state, should the N-S_ACK be lost, the receiver would receive a retransmitted PROBE3 instead of data segments from the next wave (or instead of a PROBE1 initiating the next probe cycle, had the receiver specified wave level 0 in the N-S_ACK): the N-S_ACK was lost (or excessively delayed), and the sender, which is in state WAIT_N-S_ACK, timed out on the PROBE_T timer and resent the PROBE3. The receiver would then retransmit the N-S_ACK, echoing back the sequence number of the retransmitted PROBE3.

Although probing is a fairly complicated mechanism and adds additional RTTs to the protocol's progress, it proves to be a more useful device than either sending data that is likely to be dropped, on the one hand, or reducing the window size (i.e., reducing the wave level) and degrading the connection throughput, possibly for no good reason, on the other. The first option would negatively impact energy expenditure. The second would needlessly degrade the effective throughput and, by unnecessarily prolonging the connection time, also impact energy consumption.

3. Implementation & Testing

The protocol was implemented using the x-kernel protocol framework [8, 12]. The high-level test protocol sends messages of 1024 bytes to the underlying WWP layer. These are then buffered until there are enough segments to form a wave at the level needed.


The sender's buffer was set to 40 segments. Semaphore-based flow control was implemented between WWP and the test protocol at the sending side, so that the latter does not try to push new segments into a full buffer. The receiver's buffer was set to 256 segments so that it is not forced to unnecessarily drop incoming segments during its selective-repeat mode of operation.

3.1 Testing Environment and Methodology

We ran tests simulating a fairly low-bandwidth environment. The tests were carried out in a single session, with both client and server running on one and the same host, so as to avoid unpredictable conditions with distorting effects on the protocol's performance. Congestion was simulated by dropping and delaying segments using modified x-kernel protocols. Our modified x-kernel protocol, which we call VDELDROP, drops segments at a constant rate specified for the duration of a test, and causes different delays for each segment. VDELDROP also has the capability of alternating On/Off phases during which its actions are in effect and suspended, respectively. Thus, during a connection period, WWP would experience phases that are error-free and others with simulated congestion error effects. This modification enabled us to test WWP's behavior in response to sudden changes in the simulated environment, and its ability to rapidly re-adapt to varying congestion conditions. Such conditions are typical of mobile networks where the user is "on the move": communication with the access points will have variable characteristics during the connection time. VDELDROP was configured above IP, with WWP configured on top of it, and our high-level testing ("application") protocol configured above WWP. We also ran TCP under a similar configuration.

We compare our protocol with TCP (Reno) since TCP is a reliable protocol with end-to-end service similar to WWP's. It is also a topic of current research interest with respect to its behavior in wireless environments. However, TCP does not distinguish well between congestion, on the one hand, and transient transmission burst errors, on the other, although each requires distinct actions in response to its occurrence (e.g., slow down, and continue feeding the network, respectively). TCP's optimization capability is well known (e.g., [1], [10]), and implementation problems can be predicted or avoided altogether [7], making it a good standard of comparison.
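To give a concrete picture of the error model, the fragment below sketches a drop-and-delay filter of the kind VDELDROP implements, as described above: a fixed drop probability and a random per-segment delay, applied only during On phases that alternate with Off phases of equal length. The structure, field names and the uniform delay model are our own illustration; the actual VDELDROP module is an x-kernel protocol and is not reproduced here.

________________________________________
/* Illustrative model of an On/Off drop-and-delay filter in the spirit of
 * VDELDROP as described above.  Field names and the uniform delay model
 * are assumptions; the real module is an x-kernel protocol layered
 * between WWP and IP.                                                    */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct vdeldrop_cfg {
    double drop_rate;     /* e.g. 0.10 for a 10% drop rate in On phases   */
    double phase_len;     /* On/Off phase duration in seconds (e.g. 10.0) */
    double max_delay;     /* upper bound of the per-segment delay (sec)   */
};

/* Decide the fate of one segment arriving at time 'now' (seconds).
 * Returns true if the segment is dropped; otherwise *delay is set.       */
static bool filter_segment(const struct vdeldrop_cfg *cfg,
                           double now, double *delay)
{
    bool on_phase = ((long)(now / cfg->phase_len)) % 2 == 0;

    *delay = 0.0;
    if (!on_phase)
        return false;                             /* Off phase: pass through */
    if ((double)rand() / RAND_MAX < cfg->drop_rate)
        return true;                              /* dropped                 */
    *delay = cfg->max_delay * rand() / RAND_MAX;  /* randomly delayed        */
    return false;
}

int main(void)
{
    struct vdeldrop_cfg cfg = { 0.10, 10.0, 0.25 };
    double delay;

    for (double t = 0.0; t < 40.0; t += 7.0)
        printf("t=%4.1fs  %s (delay %.3fs)\n", t,
               filter_segment(&cfg, t, &delay) ? "DROP" : "pass", delay);
    return 0;
}
________________________________________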

Note that our WWP implementation makes no comprehensive attempt to calibrate the various protocol parameters (data segment size, number of segments per wave at each wave level, baseline throughput BT, etc.) for optimal performance with respect to the overall characteristics of the protocol's operational environment. The data segment payload was 1 KByte; W(i) = 12 x i, i = 0, 1, 2, 3; BT was set at 40 KBytes/second (which is about 32% of the best effective throughput achieved under error-free conditions). All this probably causes WWP to understate its potential somewhat, though some effort was put into calibrating the sender timeout values SEND_T and PROBE_T, and the decision-making process by which the next wave level is determined at the end of a probe cycle, in order to enhance performance.

The protocol clearly achieves higher throughput and conserves more energy than TCP. Furthermore, while both protocols' throughputs and energy expenditures degrade as increasingly problematic network conditions are simulated, WWP's performance relative to TCP's improves, using their performance under error-free conditions as a reference point. This demonstrates the effectiveness of the protocol, irrespective of implementation or testing environment limitations that might not have been taken into account, and which could be constraining either protocol's absolute performance.

3.2 Test Results

All tests are undertaken using 5-MByte (5,242,880 bytes) data sets for transmission. The purpose of the tests was to evaluate the behavior of the two protocols in response to changes in the simulated network environment, such as congestion and transmission errors at different rates and for different durations. We took measurements of the total connection time and of the total number of bytes transmitted (i.e., including protocol control overhead transmissions, data segment retransmissions, etc.). Both factors significantly affect energy expenditure as well as throughput. Error conditions have two distinct causes - transmission errors, as opposed to excessive congestion - but one and the same result: segments are lost.

Segments may or may not be delayed during the On phases of VDELDROP (the delay effect of VDELDROP is somewhat random). A challenge for both protocols - also tested for - was whether, in the presence of errors, the response would be to reduce the window size (i.e., reduce the wave level in the case of WWP), or to proceed aggressively with data transmission instead. Note that when burst errors, which by their nature are transient, occur and the sender reduces its window size in response, the achieved throughput will be below the maximum attainable under these conditions.

In Table 1 (appendix) we present results for the 5-MByte data messages and a VDELDROP On/Off phase duration of 10 seconds. In order to represent the energy expenditure overhead required to complete reliable transmission under different conditions, we use the Byte Overhead as a metric. This is the total extra number of bytes the protocol transmits, over and above the 5 MBytes delivered to the application at the receiver, from connection initiation through to connection termination. The Byte Overhead is thus given by the formula:

Byte Overhead = Total - Base,

where Base is the number of bytes delivered to the high-level protocol at the receiver, and is given in the column Orig. Data of the table (a fixed 5 MBytes for all tests); Total is the total of all bytes transmitted by the transport layers, and is given in the column Total Bytes. This includes protocol control overhead and data segment retransmissions, as well as the delivered data. Results are reported in the column Energy Expend.

The time overhead required to complete reliable transmission under different conditions is given in the column Time Overhead, using the formula:

Time Overhead = Connection Time - Base,

where Base is the number of seconds required to deliver the 5 MBytes to the high-level protocol at the receiver under error-free conditions, from connection initiation through to connection termination (column Time Ran for test set 1 in the table), and Connection Time is the corresponding amount of time required for completion of data delivery under error-prone conditions, given in the column Time Ran.
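The reported metrics (Byte Overhead, Time Overhead, and the Effective Throughput defined in the next paragraph) can be recomputed directly from the raw table quantities. The small helper below, with names of our own choosing, does so for the TCP entries of Table 1 as a check.

________________________________________
/* Recomputes the reported metrics from raw table quantities, following
 * the formulas in the text.  Function names are ours; the values in
 * main() are TCP entries from Table 1.                                   */
#include <stdio.h>

#define ORIG_DATA 5242880.0             /* 5 MBytes delivered to the app  */

static double byte_overhead(double total_bytes)
{
    return total_bytes - ORIG_DATA;                /* Total - Base        */
}

static double time_overhead(double conn_time, double error_free_time)
{
    return conn_time - error_free_time;            /* Connection - Base   */
}

static double effective_throughput(double conn_time)
{
    return ORIG_DATA / conn_time;                  /* bytes per second    */
}

int main(void)
{
    double total = 5319120.0, t = 69.0;            /* Table 1, test 1.1   */

    printf("Byte Overhead        = %.0f bytes\n", byte_overhead(total));
    printf("Effective Throughput = %.1f bytes/sec\n", effective_throughput(t));
    printf("Time Overhead (2.1)  = %.1f sec\n", time_overhead(117.4, t));
    return 0;
}
________________________________________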


The measured performance of the protocols is given in the column Effective Throughput, using the formula:

Effective Throughput = Original Data / Connection Time,

where Original Data is the 5-MByte data set (column Orig. Data), and Connection Time is the amount of time required for completion of data delivery, from connection initiation to connection termination (column Time Ran).

For VDELDROP with On/Off phase behavior, On and Off phases were of equal duration; this phase duration is given in the column DROP Phase. The DROP Rate reported is the dropping rate for segments during the On phases, not the averaged overall drop rate across On/Off phases. Entries of 0 in both the DROP Phase and DROP Rate columns signify error-free conditions.

As demonstrated by the results for test sets 2, 3, 4, 5, and 6, the behavior of WWP is very consistent with respect to time (and hence throughput) and energy expenditure. The throughput achieved is far better than TCP's and the energy expenditure is far less. WWP adapts quickly to error phases. It does not automatically decrease its sender window size in response to a drop, as TCP does. Instead, immediately upon experiencing a drop, it pauses in its data transmission and probes. It then checks the measured RTTs from the probe cycle to assess whether conditions allow continued transmission and, if so, at what level. It can adjust back immediately to a high wave level where appropriate, unlike TCP, which applies graduated multiplicative/additive adjustments to its window size. This mechanism of WWP's has two significant results: (i) data transmission is not wasted during sustained periods of degraded network capacity, and thus retransmissions are reduced to a minimum; and (ii) throughput is maximized, since the protocol can adjust to high wave levels immediately under appropriate conditions, thereby not wasting opportunities for successful data transmission by attempting less than the network would easily be able to accommodate. This is clearly demonstrated by the results for test sets 5 and 6.


There, TCP reduces its window size significantly, since the error rate is high. It then takes a considerable amount of time to readjust the size back up to an appropriate level, thereby "missing" the "good" phase. The result is that connection time is prolonged in return for avoiding retransmissions. The protocol is clearly inefficient under such conditions.

These observations gain further strength from Table 2 (appendix). There we present results for tests that were run under the same experimental environment as for Table 1. Only the On/Off phase duration of VDELDROP differs: 4 seconds instead of 10. Consequently, TCP's window size is further reduced and not only is the connection time increased (columns Time Ran & Time Overhead), but so are the total bytes transmitted (columns Total Bytes & Energy Expend.), and hence the energy expended. This can be easily explained. There are now more phases during which TCP "collapses" in response to errors, and its attempts to readjust its window size cause the following dynamic to occur: segment drops that take place during the early part of an error-prone phase occur under conditions where the window size has been growing in response to the preceding error-free phase, and these phase transitions are now more frequent. Furthermore, connection time is increased since the average window size during the connection is now much smaller than before. WWP, in contrast, exhibits exemplary behavior: it stops and restarts data transmissions immediately upon entering and leaving the error phases, respectively, and it constantly readjusts its wave levels, commensurate with the error environment it is detecting. Test set 4 clearly demonstrates the vast difference between the two protocols' effectiveness under heavy error conditions.

Table 3 (appendix) outlines the effectiveness of the protocols with respect to energy savings and throughput, as well as their relative behavior under varying conditions, and is based on the results presented in Tables 1 and 2. The column Energy Savings gives the number of additional bytes transmitted by TCP to complete the 5-MByte data delivery, over and above the total number of bytes transmitted by WWP under the same test conditions. Similarly, Time Savings gives the additional time taken by TCP compared to WWP. Energy Expend. Ratio is a measure of the relative energy expenditures of the two protocols, and is calculated as


TCP (Total Bytes Transmitted - 5 MBytes) / WWP (Total Bytes Transmitted - 5 MBytes).

Throughput Ratio gives the relative throughput rates: TCP Effective Throughput / WWP Effective Throughput. Finally, Time Ratio shows the relative connection times for the two protocols (TCP/WWP).

When error rates are low, TCP behaves well and its major weakness - its inability to adequately readjust to rapidly changing conditions - does not catastrophically degrade its performance. WWP displays consistently good behavior, even when variability in the error environment is quite dramatic. Under such conditions of high variability, TCP wastes enormous amounts of energy and unexploited throughput capacity, and does not achieve good performance. Its throughput in all error-prone cases is well below what it is capable of achieving under error-free conditions, let alone the higher throughput displayed by WWP. As demonstrated by test sets 3 and 4, it is unable to take advantage of network conditions, expending up to fourfold more time than WWP. It is interesting to note (see test set 3) that TCP transmits up to 158 KBytes more than does WWP under the same conditions in order to deliver its 5-MByte data set. So it is not even trading off lesser transmission effort at the expense of longer connection times in an effective manner. Indeed, it is the longer connection times of TCP that seriously aggravate the energy consumption characteristics of the protocol, rather than wasted transmission effort as such. TCP's throughput can fall as low as only 23% of that achieved by WWP, although under error-free conditions it achieves up to 58.6% of WWP's corresponding throughput. Similarly, under error-free conditions their relative time ratio was 1.7/1; this ultimately grows to about 4/1 as error conditions deteriorate. The results presented in the table for the relative energy expenditure deteriorate to a value of 4.48/1, though we would argue that it should really reach even worse values were the total connection times to be taken fully into account. For example, if we were to estimate energy consumption per unit of idle time (i.e., when no transmission is taking place) as 20% of the energy consumed during a unit of transmission time (this, in fact, is device dependent), the energy consumed by WWP in order to deliver 5 MBytes of data could easily be calculated to be up to 30-50% less than that consumed by TCP.
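To make that last calculation concrete, the sketch below applies one possible accounting under stated assumptions: transmission time is approximated as total bytes sent divided by an assumed raw link rate of 128 KBytes/second (slightly above the best observed effective throughput), the rest of the connection time is treated as idle, and idle time is charged at 20% of the per-unit transmission energy. The figures fed to it come from test set 5 of Table 1; the link-rate value, the helper names and the resulting percentage are illustrative, not measured.

________________________________________
/* One possible accounting for idle-time energy, as discussed above:
 * energy ~ t_tx + 0.2 * t_idle (in "transmission-second" units), with
 * t_tx estimated as bytes_sent / LINK_RATE.  LINK_RATE, the function
 * names and the example figures are illustrative assumptions only.      */
#include <stdio.h>

#define LINK_RATE 131072.0      /* assumed raw send rate, bytes/second    */
#define IDLE_COST 0.2           /* idle power as a fraction of tx power   */

static double energy_units(double bytes_sent, double conn_time)
{
    double t_tx   = bytes_sent / LINK_RATE;   /* time spent transmitting   */
    double t_idle = conn_time - t_tx;         /* remainder of the connection */

    return t_tx + IDLE_COST * t_idle;
}

int main(void)
{
    /* Totals and connection times from test set 5 of Table 1 (33%, 10 sec). */
    double e_tcp = energy_units(5392682.0, 258.89);
    double e_wwp = energy_units(5324601.0, 85.90);

    printf("WWP consumes about %.0f%% less energy than TCP here\n",
           100.0 * (1.0 - e_wwp / e_tcp));
    return 0;
}
________________________________________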


4. Conclusion

We have presented a reliable transport protocol that achieves high throughput and considerable energy-saving in network environments with significantly variable error characteristics. Our results clearly demonstrate that retransmissions and duplications can be avoided if we transmit data when network conditions allow it. In a sender-receiver context, the receiver passes control information to the sender, which adjusts the data-wave level accordingly. The risk of congestion is estimated at the receiver, and the higher this risk, the lower the wave level attempted. When the risk is too high, the sender remains idle, constantly probing the receiver for updates on the congestion level. The total number of bytes transmitted is not significantly greater than the application's original message. Comparison results at different congestion levels demonstrate suitable behavior by the protocol. In contrast to TCP, totals for overall bytes transmitted as well as communication time are significantly less, so the energy saving is proportionally higher.

5. References

1. M. Allman, V. Paxson, W. Stevens, "TCP Congestion Control", RFC 2581, April 1999.
2. P. Amer, C. Chassot, T. Connolly, P. Conrad, M. Diaz, "Partial Order Transport Service for Multimedia and Other Applications", IEEE/ACM Transactions on Networking, 2(5), Oct. 1994.
3. B. Dempsey, "Retransmission-based Error Control for Continuous Media Traffic in Packet Switched Networks", Ph.D. Thesis, University of Virginia, 1995.
4. R. Marasli, P. Amer, P. Conrad, "An Analytic Study of Partially Ordered Transport Services", Computer Networks and ISDN Systems, 1998.
5. R. Marasli, P. Amer, P. Conrad, "Retransmission-based Partially Reliable Transport Service: An Analytic Model", IEEE INFOCOM '96, CA, March 1996.
6. R. Marasli, P. Amer, P. Conrad, "Partially Reliable Transport Service", ISCC '97, Alexandria, Egypt, July 1997.


7. V. Paxson, et al., "Known TCP Implementation Problems", RFC 2525, March 1999.
8. L. Peterson, B. Davie, "Computer Networks: A Systems Approach", Morgan Kaufmann, 1996.
9. J. Postel, "Transmission Control Protocol", RFC 793, September 1981.
10. Wei Songbin and Vassilios Tsaoussidis, "An Application-oriented Transport Protocol for Multimedia Communications over IP", IEEE Conference on Computational Intelligence and Multimedia Applications, ICCIMA '99, New Delhi, India, September 1999.
11. V. Tsaoussidis, H. Badr, R. Verma, "Wave and Wait Protocol: An Energy-saving Transport Protocol for Mobile IP-Devices", 7th IEEE International Conference on Network Protocols, Toronto, Canada, October 1999.
12. The x-kernel: www.cs.arizona.edu/xkernel
13. 3COM, Technical Documents: "Palm OS 3.0 Documentation", www.3com.com, April 1998.

[Figure 1: Probing: State Transition Diagram. States shown: ESTAB, WAVE_RCV_ON, WAIT_N-S_ACK, PR1_sent, PR1_rcvd, PR2_sent, PR2_rcvd. Transitions are labeled event/action: Active open/DATA, SEND_T/PROBE1, PROBE1/PR1_ACK, PR1_ACK/PROBE2, PROBE2/PR2_ACK, PR2_ACK/PROBE3, PROBE3/N-S_ACK, N-S_ACK/-, with PROBE_T timeouts retransmitting PROBE1, PROBE2 and PROBE3.]

Test #  Prtcl  DROP Rate  DROP Phase  Time Ran (sec)  Orig. Data  Total Bytes  Time Overhead (sec)  Energy Expend. (bytes)  Effective Throughput (bytes/sec)
1.1     TCP    0          0           69              5M          5319120      ---                  76240                   75983
1.2     WWP    0          0           40.5            5M          5268495      ---                  25615                   129453
2.1     TCP    5%         10 sec      117.4           5M          5395782      48.4                 152902                  44685
2.2     WWP    5%         10 sec      71              5M          5290519      30.5                 47639                   73843
3.1     TCP    10%        10 sec      132.9           5M          5409384      63.9                 166504                  39449
3.2     WWP    10%        10 sec      81.9            5M          5295859      41.4                 52979                   64015
4.1     TCP    20%        10 sec      136.63          5M          5410822      67.63                167942                  38372
4.2     WWP    20%        10 sec      84.08           5M          5319556      43.58                76676                   62355
5.1     TCP    33%        10 sec      258.89          5M          5392682      189.89               149802                  20251
5.2     WWP    33%        10 sec      85.9            5M          5324601      45.4                 81721                   61034
6.1     TCP    50%        10 sec      259.38          5M          5388348      190.38               145468                  20213
6.2     WWP    50%        10 sec      86.7            5M          5275294      46.2                 32414                   60471

Table 1: 5-MByte transfers; VDELDROP On/Off phase duration of 10 seconds.

Test #  Prtcl  DROP Rate  DROP Phase  Time Ran (sec)  Total Bytes  Time Overhead (sec)  Energy Expend. (bytes)  Effective Throughput (bytes/sec)
1.1     TCP    0          0           69              5319120      ---                  76240                   75983
1.2     WWP    0          0           40.5            5268495      ---                  25615                   129453
2.1     TCP    10%        4 sec       135.67          5409384      66.67                166504                  38644
2.2     WWP    10%        4 sec       81.3            5306219      40.8                 63339                   64488
3.1     TCP    33%        4 sec       368.95          5502300      299.05               259420                  14210
3.2     WWP    33%        4 sec       87.56           5344200      47.06                101320                  59877
4.1     TCP    50%        4 sec       385.79          5514338      316.79               271458                  13589
4.2     WWP    50%        4 sec       93.9            5378366      53.4                 135486                  55834

Table 2: 5-MByte transfers; VDELDROP On/Off phase duration of 4 seconds (Orig. Data is 5 MBytes throughout, as in Table 1).

T#   Prtcl  DROP Rate  DROP Phase  Time Ran (sec)  Energy Savings  Time Savings (sec)  Energy Expend. Ratio (TCP/WWP)  Throughput Ratio (TCP/WWP)  Time Ratio (TCP/WWP)
1.1  TCP    0          0           69              50KB            28.5                2.97/1                          58.6%                       1.70/1
1.2  WWP    0          0           40.5
2.1  TCP    10%        4-10 sec    132-135         103-113KB       19.6-22.5           2.62-3.14/1                     59-60%                      1.62-1.64/1
2.2  WWP    10%        4-10 sec    81.3-81.9
3.1  TCP    33%        4-10 sec    258-368         68-158KB        173-281             1.83-2.56/1                     23-32%                      3.0-4.2/1
3.2  WWP    33%        4-10 sec    85.9-87.6
4.1  TCP    50%        4-10 sec    259-385         113-135KB       172-291             2.0-4.48/1                      24-33%                      2.98-4.1/1
4.2  WWP    50%        4-10 sec    86.7-93.7

Table 3: Relative performance of TCP and WWP, derived from Tables 1 and 2; savings and ratios are given once per TCP/WWP test pair.