Congestion Avoidance Hybrid Wireless Mesh

Congestion Avoidance Hybrid Wireless Mesh Protocol (CA-HWMP) for IEEE 802.11s

By Kishwer Abdul Khaliq (MS103017)

A thesis submitted to the Department of Computer Sciences in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN COMPUTER SCIENCES

Faculty of Computing
Mohammad Ali Jinnah University, Islamabad
August, 2013

Copyright ©2013 by MAJU Student All rights reserved. Reproduction in whole or in part in any form requires the prior written permission of Kishwer Abdul Khaliq or M.A.Jinnah authorities.

Dedicated to My Loving Parents and Friends

CERTIFICATE OF APPROVAL

It is certified that the research work titled "Congestion Avoidance Hybrid Wireless Mesh Protocol (CA-HWMP) for IEEE 802.11s", carried out by Kishwer Abdul Khaliq, Reg. No. MS103017, under the supervision of Dr. Amir Qayyum at Mohammad Ali Jinnah University, Islamabad Campus, is fully adequate, in scope and in quality, as a thesis for the degree of MS of Computer Sciences.

Supervisor: −−−−−−−−−−−−−−−−−−−
Dr. Amir Qayyum
Professor/Dean, Department of Electronic Engineering, Faculty of Engineering
Mohammad Ali Jinnah University, Islamabad

Internal Examiner: −−−−−−−−−−−−−−−−−−−−−
Dr. Muhammad Azhar Iqbal
Assistant Professor, Faculty of Computing
Mohammad Ali Jinnah University, Islamabad

External Examiner: −−−−−−−−−−−−−−−−−−−−−
Prof. Dr. Muhammad Sher
Dean of Science Faculty, Department of Computer Science & Software Engineering
International Islamic University, Islamabad

Dean: −−−−−−−−−−−−−−−−−−−−−
Dr. Muhammad Abdul Qadir
Professor, Faculty of Computing
Mohammad Ali Jinnah University, Islamabad

Acknowledgments

All praise to Almighty Allah, Who is the light when everything is dark, and the One Who guides us through the depths of darkness. Just as little raindrops in a cloud can turn ordinary sunlight into the breathtaking colors of a rainbow, I have worked through this thesis step by step, though at first it seemed as daunting as an odyssey. It was a very good experience: I enjoyed the research throughout, and it enhanced my learning immensely. I am greatly thankful to my supervisor, Dr. Amir Qayyum, for being a guiding ray of light throughout my research and during the completion of my thesis. I am also thankful to the members of CoReNeT for providing feedback in our weekly meetings. My special thanks to Mr. Ehsan Elahi, Mr. Muhammad Sajjad Akbar, and Mr. Asim Rasheed for their kind advice and help.

Declaration It is declared that this is an original piece of my own work, except where otherwise acknowledged in text and references. This work has not been submitted in any form for another degree or diploma at any university or other institution for tertiary education and shall not be submitted by me in future for obtaining any degree from this or any other University or Institution.

Kishwer Abdul Khaliq MS103017

August, 2013

Abstract

Wireless technology is utilized for numerous applications owing to its easy deployment and mobility support. Nowadays, wireless technologies are commonly used in a variety of electronic equipment, including military and general-purpose applications. The Wireless Mesh Network (WMN) is an evolving technology that provides undisrupted data connectivity to mobile users. It provides high bandwidth through efficient resource sharing, especially in areas deprived of wired connectivity. In addition to multi-hop wireless connectivity, these networks also offer self-configuring, self-healing, and self-organizing capabilities. IEEE 802.11s is a MAC standard proposed as an enhancement for WMN. This standard includes a mandatory routing protocol at the MAC layer, known as the Hybrid Wireless Mesh Protocol (HWMP). However, HWMP responds weakly to congestion, especially for interactive applications. IEEE 802.11s has also proposed sharing a Congestion Control Notification Frame (CCNF) on reaching a congestion threshold; however, the CCNF is not used by HWMP. A few researchers have proposed congestion control techniques for IEEE 802.11s, but no scheme for congestion avoidance in IEEE 802.11s is found in the literature. In this thesis, we propose a congestion avoidance technique named the Congestion Avoidance Hybrid Wireless Mesh Protocol (CA-HWMP). The proposed technique offers localized re-routing based on a congestion threshold, with minimal overhead. The re-routing decision is taken by each node using the CCNF received from the next-hop neighbor. The proposed algorithm does not add any overhead, as the CCNF is already part of the 802.11s design. Based on simulations, a 60% congestion threshold is selected for re-routing to achieve optimal results, instead of the standard-defined rate used for congestion notification only. This thesis also compares the proposed technique against HWMP under NS-3.

From this comparison, we conclude that using CA-HWMP as a replacement for HWMP significantly improves the performance of IEEE 802.11s.


Table of Contents

1 INTRODUCTION
  1.1 IEEE 802.11s: Wireless Mesh Network
  1.2 Problem Statement
  1.3 Research Objective
  1.4 Research Contribution
  1.5 Research Methodology
  1.6 Thesis Organization
2 LITERATURE REVIEW
  2.1 IEEE 802.11s Functional Components
  2.2 802.11s Path Selection
    2.2.1 Hybrid Wireless Mesh Protocol (HWMP)
  2.3 Congestion Control Mechanisms in WMN
3 PROPOSED CONGESTION AVOIDANCE TECHNIQUE
  3.1 Proposed Congestion Avoidance Hybrid Wireless Mesh Protocol (CA-HWMP)
    3.1.1 Proposed Algorithm
  3.2 Advantages and Limitation of CA-HWMP
4 SIMULATION AND PERFORMANCE EVALUATION
  4.1 CA-HWMP Patch for NS3
  4.2 Simulation Parameters
  4.3 Evaluation Parameters
    4.3.1 Throughput
    4.3.2 Average End-to-End Delay
    4.3.3 Packet Delivery Fraction (PDF)
  4.4 Simulation Scenarios and Parameters
    4.4.1 Effect of Application Data Rate on Throughput
    4.4.2 Effect of Application Data Rate on Packet Delivery Fraction (PDF)
    4.4.3 Effect of Application Data Rate on End-to-End Delay
5 Conclusions and Future Work
  5.1 Conclusions
  5.2 Future Work
Bibliography

List of Figures

1.1 Infrastructure/backbone WMN
1.2 Protocol Stack of IEEE 802.11s
2.1 802.11s Functional Component Architecture
2.2 HWMP Mechanism
2.3 Reactive Mode (Path discovery from source A to destination D)
2.4 Congestion Control Scenario for TCC
2.5 Congestion Control Scenario for LSCC
3.1 Proposed CA-HWMP Protocol Mechanism
4.1 m × n Mesh Topology
4.2 Throughput Comparison of HWMP and CA-HWMP at 100 Kbps
4.3 Throughput Comparison of HWMP and CA-HWMP at 150 Kbps
4.4 Throughput Comparison of HWMP and CA-HWMP at 200 Kbps
4.5 Throughput Comparison of HWMP and CA-HWMP at 250 Kbps
4.6 Throughput Comparison of HWMP and CA-HWMP at 300 Kbps
4.7 Throughput Comparison of HWMP and CA-HWMP at 350 Kbps
4.8 PDF Comparison of HWMP and CA-HWMP at 100 Kbps
4.9 PDF Comparison of HWMP and CA-HWMP at 150 Kbps
4.10 PDF Comparison of HWMP and CA-HWMP at 200 Kbps
4.11 PDF Comparison of HWMP and CA-HWMP at 250 Kbps
4.12 PDF Comparison of HWMP and CA-HWMP at 300 Kbps
4.13 PDF Comparison of HWMP and CA-HWMP at 350 Kbps
4.14 Delay Comparison of HWMP and CA-HWMP at 100 Kbps
4.15 Delay Comparison of HWMP and CA-HWMP at 150 Kbps
4.16 Delay Comparison of HWMP and CA-HWMP at 200 Kbps
4.17 Delay Comparison of HWMP and CA-HWMP at 250 Kbps
4.18 Delay Comparison of HWMP and CA-HWMP at 300 Kbps
4.19 Delay Comparison of HWMP and CA-HWMP at 350 Kbps

List of Tables

4.1 General Simulation Parameters
4.2 PDF when Data Rate is 100 Kbps
4.3 PDF when Data Rate is 150 Kbps
4.4 PDF when Data Rate is 200 Kbps
4.5 PDF when Data Rate is 250 Kbps
4.6 PDF when Data Rate is 300 Kbps
4.7 PDF when Data Rate is 350 Kbps

List of Acronyms

AODV      Adhoc On-Demand Distance Vector
AODV-ST   AODV-Spanning Tree
CCNF      Congestion Control Notification Frame
CA-HWMP   Congestion Avoidance Hybrid Wireless Mesh Protocol
DSS       Distributed System Service
DSN       Destination Sequence Number
DS        Distributed System
DO        Destination Only
ESS       Extended Service Set
HWMP      Hybrid Wireless Mesh Protocol
IBSS      Independent Basic Service Set
MANET     Mobile Adhoc Wireless Network
MP        Mesh Point
MAC       Medium Access Control
MAP       Mesh Access Point
MPP       Mesh Portal
MSDU      MAC Service Data Unit
NS        Network Simulator
NIC       Network Interface Card
PREQ      Path Request
PREP      Path Reply
PERR      Path Error
PSCC      Path Selective Congestion Control
PDF       Packet Delivery Fraction
QoS       Quality of Service
RANN      Root Announcement
RF        Reply and Forward
STA       Station
LSCC      Link Selective Congestion Control
TCP       Transmission Control Protocol
TTL       Time-to-Live
TCC       Total Congestion Control
UDP       User Datagram Protocol
WMN       Wireless Mesh Network
WCP       Wireless Control Protocol
WCPCap    WCP with Capacity Estimation


Chapter 1
INTRODUCTION

Wireless technology is used for a number of applications such as broadband home networking, community and neighborhood networks, automation systems (e.g., building automation), and enterprise networking, mainly due to its key advantages of easy deployment and mobility support. Nowadays, wireless technologies are used in many strategic and low-cost e-applications (e.g., e-health, e-environment, e-commerce, e-government). IEEE 802.11 variants are widely available and very common in WLAN scenarios. The Mobile Adhoc Network (MANET) is another wireless networking domain that uses current wireless technologies for multi-hop scenarios. Much ongoing research on wireless technology focuses on military applications and some specialized civilian applications, but users are more interested in general-purpose applications where high bandwidth and Internet connectivity are their top priorities. Therefore, a new multi-hop wireless network type has emerged in the market that combines the advantages of ad hoc multi-hop networking and broadband access, called the Wireless Mesh Network (WMN).

WMN is a promising technology due to its low deployment cost and wireless broadband connectivity. It is also deployed in rural areas to provide broadband services because of this reduced deployment cost. Many applications are efficiently supported by WMN, such as broadband wireless access, industrial applications, health care, transportation systems, hospitality, warehouses, and temporary venues [1] [2] [3]. It is also used to deploy a wide variety of applications such as e-applications [4], public safety and crisis management applications [5], building automation control [6], and emergency and safety applications [7], due to its self-healing, self-configuring, and self-organizing capabilities.

1.1 IEEE 802.11s: Wireless Mesh Network

WMN is a multi-hop wireless network in which nodes may act as relay stations and can communicate directly without involving a base station. WMN solutions [8] provided by different organizations typically have three types of nodes: mesh clients, mesh routers, and gateways. WMN is an important technology for next-generation wireless networking owing to some of its advantages over other wireless networks. A WMN can be an infrastructure/backbone WMN, a client WMN, or a hybrid WMN. An infrastructure/backbone WMN provides backbone access to conventional clients and also provides integration with existing networks, as shown in Figure 1.1 [9].

Fig. 1.1: Infrastructure/backbone WMN

An IEEE 802.11s two-tier [10] mesh network consists of a backhaul tier and an access tier. Communication between mesh nodes forms the wireless mesh backbone/backhaul, while communication between mesh nodes and clients is called the access tier [11]. Mesh routers forward traffic on behalf of neighbor nodes, as in a multi-hop ad hoc network.

In the IEEE 802.11 standard [12], nodes in ad hoc mode can connect to each other without any central entity or coordinator such as an Access Point (AP). However, for many interesting applications, ad hoc mode alone is not sufficient, e.g., when Internet connectivity and client nodes must be supported. Therefore, both modes, infrastructure and ad hoc, are integrated in a new type of wireless network, named the Wireless Mesh Network. IEEE developed the IEEE 802.11s standard for multi-hop WMN, whose final draft was released in September 2011. IEEE 802.11s inherently depends on one of the IEEE 802.11 variants (a, b, g, or n) for multi-hop WMN. WMN combines the advantages of IBSS and ESS. Three types of nodes are involved. A Mesh Point (MP) acts as a relay station and does not have AP functionality; MPs connect to one another over wireless links, so the internal wireless mesh LAN is not an ESS. A Mesh Access Point (MAP) has AP functionality in addition to MP functionality; station nodes associate themselves with a MAP. The third type is the Mesh Portal (MPP), which also has MP functionality and is the point where MSDUs enter or exit the WMN to/from other networks.

The IEEE 802.11s MAC is an extension of the existing IEEE 802.11 MAC with additional functionality: the routing protocol is handled at the MAC layer, whereas layer-3 routing protocols are needed at the MPP for path selection while communicating with other networks [8]. The protocol stack of IEEE 802.11s is shown in Figure 1.2.

Fig. 1.2: Protocol Stack of IEEE 802.11s

Since the path selection mechanism is handled at the MAC layer, congestion may also be handled at the MAC layer. Congestion is an important issue that occurs when the network traffic load is such that the number of packets sent exceeds the network capacity, e.g., the available bandwidth or the buffers at the routers [13]. In a wireless congestion scenario, packet loss and queue overflow are caused by wireless communication issues, e.g., the higher error rate of the wireless channel and limited wireless bandwidth. In a wireless network, only one node sends data at a time because of the shared nature of the wireless channel. Growing packet delay and queue length lead to congestion. IEEE 802.11s does not specify any congestion control mechanism. However, various congestion control techniques have been proposed in the literature to address this problem, each with its own pros and cons.
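The congestion condition described above, packets arriving faster than a node can forward them, can be sketched with a toy fluid model. This is our own illustration, not part of the thesis or the 802.11s standard; the rates and duration are arbitrary:

```python
def queue_growth(arrival_rate_pps, service_rate_pps, seconds):
    """Fluid approximation of a node's backlog: when packets arrive faster
    than the node can forward them, the queue (and hence delay) grows."""
    backlog = 0.0
    for _ in range(seconds):
        backlog = max(0.0, backlog + arrival_rate_pps - service_rate_pps)
    return backlog

# Offered load below capacity: the queue stays empty.
print(queue_growth(80, 100, 10))    # 0.0
# Offered load above capacity: the backlog grows steadily (congestion).
print(queue_growth(120, 100, 10))   # 200.0
```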

1.2 Problem Statement

In WMN, due to the increasing use of multimedia and interactive applications on the Internet, MPs close to the MPP carry more traffic, and hence there is a greater chance of congestion at the MPs near the MPP. When congestion occurs, packets are dropped from the queue regardless of the number of hops they have already traversed. In a WMN, multiple paths may exist from source to destination, and the shortest path is selected to send the data. When the data of multiple nodes is forwarded through the same intermediate nodes because of shortest-path selection, congestion may occur at those intermediate nodes. To address this problem, we propose a congestion avoidance mechanism and compare it against the default WMN routing protocol.
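To see why shortest-path forwarding concentrates load near the gateway, consider a toy sketch (our own illustration, not from the thesis): every node in a 4 × 4 grid mesh sends one flow to an MPP placed at a corner, and we count how many flows each intermediate MP relays along a hop-count shortest path. The grid size and gateway position are arbitrary assumptions.

```python
from collections import deque

def grid_neighbors(x, y, m, n):
    """4-connected neighbors inside an m x n grid."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ax, ay = x + dx, y + dy
        if 0 <= ax < m and 0 <= ay < n:
            yield (ax, ay)

def shortest_path(src, dst, m, n):
    """BFS shortest path in the grid (hop-count metric)."""
    prev, seen = {}, {src}
    q = deque([src])
    while q:
        cur = q.popleft()
        if cur == dst:
            break
        for nb in grid_neighbors(*cur, m, n):
            if nb not in seen:
                seen.add(nb)
                prev[nb] = cur
                q.append(nb)
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

m = n = 4
gateway = (0, 0)                      # hypothetical MPP position
relay_load = {}
for x in range(m):
    for y in range(n):
        if (x, y) == gateway:
            continue
        # every interior hop on the path relays this flow
        for hop in shortest_path((x, y), gateway, m, n)[1:-1]:
            relay_load[hop] = relay_load.get(hop, 0) + 1

# The two MPs adjacent to the MPP are the last relay of almost every flow.
near_gateway = relay_load.get((1, 0), 0) + relay_load.get((0, 1), 0)
print(near_gateway, "of", m * n - 1, "flows relayed by the MPs beside the MPP")
```

Every multi-hop flow must pass through one of the two MPs adjacent to the MPP, which is exactly the congestion hotspot the problem statement describes.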

1.3 Research Objective

In a multi-hop network, an increase in intermediate hops results in a decrease in throughput, as discussed in [9] [14]. In WMN, routing is handled at the MAC layer, and an MP forwards a packet to another MP through intermediate MPs. An intermediate node buffers the traffic and processes packets using the Drop Tail queue management scheme: if the queue is already full, a newly arrived packet is dropped, even though it may have already traversed a large number of hops. In WMN, congestion control issues are also addressed at the MAC layer, since the path selection mechanism is handled at layer 2. MPs forward their own data and received data to other mesh points, and all traffic is aggregated at the portal node. Under high traffic load, mesh points on the outer edges of the network suffer low throughput and high packet loss in the absence of any congestion control mechanism. The standard draft outlines an optional congestion control mechanism known as the hop-by-hop congestion control mechanism [15]. In this perspective, our research goal is to address the problems in the existing congestion control mechanism and devise a solution that avoids congestion instead of controlling it after it has actually occurred in the network. We evaluated the default mandatory routing protocol of IEEE 802.11s, HWMP, by introducing a high application data rate into the network. We also evaluated our proposed technique and compared it with the standard WMN protocol.
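The Drop Tail behavior mentioned above can be sketched in a few lines; this is our own minimal illustration of the policy, not the ns-3 implementation:

```python
from collections import deque

class DropTailQueue:
    """Minimal drop-tail buffer: a full queue drops every new arrival,
    regardless of how many hops the packet has already traversed."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buf) >= self.capacity:
            self.dropped += 1          # tail drop: the arriving packet is lost
            return False
        self.buf.append(packet)
        return True

    def dequeue(self):
        return self.buf.popleft() if self.buf else None

# Offer 10 packets to a 6-slot buffer with no draining in between:
q = DropTailQueue(capacity=6)
for seq in range(10):
    q.enqueue({"seq": seq, "hops_traversed": seq % 4})
print(len(q.buf), q.dropped)   # 6 packets buffered, 4 dropped
```

Note that the drop decision ignores `hops_traversed`, which is exactly the waste the thesis objects to: a packet that has already consumed resources over many hops is as likely to be discarded as a fresh one.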

1.4 Research Contribution

WMN is an emerging technology with active research across different domains. As the routing protocol is handled at the MAC layer, congestion is also handled at the MAC layer. Congestion is one of the active research areas in WMN. A few techniques have been proposed for congestion control; however, they have their own limitations. Our research contribution is the identification of the limitations of existing congestion control mechanisms and the design of a new congestion avoidance technique that shows better performance.

1.5 Research Methodology

We have used Network Simulator 3 (NS-3) as the simulation tool. NS-3 is a good simulation tool, based on the object-oriented language C++. It is an open-source network simulator that provides support for an IEEE 802.11s mesh module and offers the flexibility to implement our proposed technique. In our research, we need to compare the default routing protocol of IEEE 802.11s, i.e., HWMP, with the proposed routing protocol. Our comparison covers the following differences between the default IEEE 802.11s routing protocol and the proposed congestion avoidance routing protocol:

- Effect of application data rate on throughput
- Effect of application data rate on packet delivery fraction
- Effect of application data rate on delay
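The three comparison metrics can be computed from per-packet traces, as in the following sketch. This is our own illustration; the record format, function name, and sample values are assumptions, not taken from the thesis or NS-3:

```python
def evaluate(sent, received, duration_s):
    """Compute the three comparison metrics from per-packet records.
    sent: {seq: send_time_s}; received: {seq: (recv_time_s, size_bytes)}."""
    bits = sum(size * 8 for _, size in received.values())
    throughput_kbps = bits / duration_s / 1000.0
    pdf = len(received) / len(sent)                    # packet delivery fraction
    delays = [t_rx - sent[seq] for seq, (t_rx, _) in received.items()]
    avg_delay_s = sum(delays) / len(delays)            # end-to-end delay
    return throughput_kbps, pdf, avg_delay_s

sent = {0: 0.00, 1: 0.10, 2: 0.20, 3: 0.30}
received = {0: (0.05, 500), 1: (0.18, 500), 3: (0.42, 500)}  # packet 2 lost
tput, pdf, delay = evaluate(sent, received, duration_s=1.0)
print(round(tput, 1), pdf, round(delay, 3))   # 12.0 0.75 0.083
```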

1.6 Thesis Organization

The rest of the thesis is organized as follows. Chapter 2 presents the literature review of IEEE 802.11s WMN and its functional components; it discusses the recommended IEEE 802.11s mandatory protocol HWMP in detail, the IEEE 802.11s standard's description of the congestion control mechanism, and other congestion control mechanisms suggested by researchers, with their pros and cons. Chapter 3 discusses our proposed routing protocol for congestion avoidance, its algorithm, and its limitations. Chapter 4 covers the simulation setup, simulation parameters, and evaluation parameters, along with results and a comprehensive analysis. Chapter 5 provides the conclusion and recommendations for future work.


Chapter 2
LITERATURE REVIEW

This chapter gives an overview of the IEEE 802.11s functional components and its mandatory routing protocol at the MAC layer. It also surveys previously proposed congestion control mechanisms for WMN at the MAC layer and the limitations of these protocols.

2.1 IEEE 802.11s Functional Components

The functional architecture of IEEE 802.11s [16] is represented in Figure 2.1. It includes the physical layer of one of the IEEE 802.11 variants a, b, g, or n, and enhancements in the lower MAC for the mesh (IEEE 802.11e/n). At layer 2, it includes beaconing and synchronization, path selection, frame aggregation, and multi-channel, multi-radio operation. Our work deals with congestion control. The 802.11s MAC also includes functions for QoS (priority control), congestion control and admission control, a function for achieving spatial frequency reuse, and a function for preventing performance degradation due to exposed- and hidden-node problems [16]. While working on congestion control, researchers have considered queue mechanisms, link capacity, and routing protocols, and have proposed techniques to resolve this problem. The following sections discuss the mandatory routing protocol and the congestion control mechanism in detail, along with some previously proposed techniques.


Fig. 2.1: 802.11s Functional Component Architecture

2.2 802.11s Path Selection

In a WMN, all mesh nodes use the same path metric and protocol for path selection. Path selection in IEEE 802.11s is performed at layer 2, and IEEE 802.11s is the first draft that proposed routing at layer 2. In Draft 1.0 of IEEE 802.11s, two path selection mechanisms [15] were proposed: a mandatory protocol named HWMP and an optional one named Radio-Aware Optimized Link State Routing (RA-OLSR). The authors of [17] discuss a detailed comparison of both protocols. In later drafts [12], the optional path selection mechanism was removed. RA-OLSR [18] was adapted to work at layer 2 instead of layer 3.

2.2.1 Hybrid Wireless Mesh Protocol (HWMP)

HWMP is a hybrid protocol that aims to merge the advantages of both reactive and proactive approaches: it combines proactive tree-based routing with an on-demand distributed path selection protocol. The protocol is inspired by the Ad hoc On-Demand Distance Vector (AODV) protocol and its extension AODV-ST. It is the default routing protocol [19] and must be implemented by all MPs. In a WMN, the network infrastructure has minimal mobility and most of the traffic flows to/from the Internet, but some devices such as laptops and handsets can also be MPs and require mobility support. HWMP is therefore used to obtain the combined advantages of reactive and proactive routing. In HWMP, the reactive mode serves mesh nodes that experience a changing environment and hence have no fixed topology, while the proactive tree-based mode is used for fixed network topologies. The protocol supports a mandatory routing metric, the airtime link cost, and extensions may add other metrics such as QoS parameters, traffic load, and power consumption.

Initially, a root node is configured in the mesh network. When a root node is configured, proactive routing is applied and a distance vector tree rooted at that node is built. This tree is used by the other nodes to avoid unnecessary routing, path discovery, and recovery overhead. The extension of AODV made for IEEE 802.11s uses the basic features of AODV routing. For path discovery, four information elements are specified, and each node in the path contributes to the metric calculation, which is performed by exchanging these routing elements in management frames. The elements are Path Request (PREQ), Path Reply (PREP), Path Error (PERR), and Root Announcement (RANN). All of these information elements except PERR carry a Destination Sequence Number (DSN), a time-to-live (TTL), and a metric field. The mechanism of HWMP is shown in Figure 2.2 [20].

Fig. 2.2: HWMP Mechanism

In the on-demand routing mode, PREQ elements are broadcast by the source MP to establish a path to the destination MP. Independent of the reactive and proactive mode, the PREQ is the broadcast message generated by the source mesh STA towards the destination mesh STA to discover a path. When an intermediate MP receives a PREQ, it updates or creates the path information to the source depending on the DSN and the link metric. If the DSN is greater than or equal to the stored one but the link metric is better, the MP updates the path with the better metric, as shown in Figure 2.3. Otherwise, if the node is not the destination, it forwards the PREQ element further. The PREP element is sent from the destination MP back to the source MP; independent of the mode, it is the destination node's reply to the source node. The PERR element is used to report an error during path discovery, e.g., that a path is no longer available.

Fig. 2.3: Reactive Mode (Path discovery from source A to destination D)

Two flags [8] are used to handle the different situations in forwarding PREQ elements during path discovery: Target Only (TO) and Reply-and-Forward (RF). If the TO flag is set to 1, which is its default value, an intermediate node just forwards the PREQ towards the destination; on receiving the PREQ, the destination MP sends a unicast PREP back to the source MP. If both flags are set to 0, an intermediate node with an existing route replies to the source MP and does not forward the PREQ, so the protocol uses the already existing route from the intermediate node to the destination. If the TO flag is set to 0 and the RF flag is set to 1, the intermediate node sends a unicast PREP to the source, sets the TO flag to 1, and forwards the PREQ towards the destination MP. In proactive tree-based routing, only two information elements, PREQ and RANN, are used.

In the proactive PREQ mechanism, the root node broadcasts the PREQ periodically. On receiving the PREQ, an MP updates its path to the root, records the metric and hop count to the root, updates the PREQ, and forwards it. The proactive PREQ element carries a proactive PREP bit: if it is set to 1, the receiving MPs send a gratuitous PREP to the root; if it is set to 0, the receiving MPs send a gratuitous PREP only when data needs to be exchanged between the MP and the root over a bidirectional route. In the proactive RANN mechanism, the root node floods the RANN element into the network periodically. When an MP that needs to create or update its route to the root receives a RANN element, it sends a unicast PREQ to the root, and the root sends a PREP back to the source MP. The unicast PREQ establishes the reverse path from the root node to the originating MP, while the unicast PREP creates the forward route from the originating MP to the root node.

The problem with this protocol lies in the tree-based routing. The mechanism may not be optimal: when two non-root nodes could exchange data over a low-cost mesh path, the basic mechanism forces them to send frames through the root node over a comparatively high-cost path.

As our target is intra-mesh congestion control using the IEEE 802.11s mandatory path selection protocol, we studied this protocol to find a way to resolve the congestion control problem. Its four information elements and flags are the most important components that can be used to share extra information.
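The DSN/metric comparison that an intermediate MP applies when a PREQ arrives can be sketched as follows. This is our illustrative reading of the rule, with AODV-style sequence-number semantics; the function name, table layout, and sample values are our assumptions, not the ns-3 implementation:

```python
def should_update_path(entry, preq_dsn, preq_metric):
    """Decide whether an intermediate MP replaces its stored path to the
    PREQ originator. entry is None or a dict with 'dsn' and 'metric'
    (airtime cost: lower is better)."""
    if entry is None:
        return True                       # no stored path yet: create one
    if preq_dsn > entry["dsn"]:
        return True                       # fresher sequence number wins
    if preq_dsn == entry["dsn"] and preq_metric < entry["metric"]:
        return True                       # same freshness, better metric
    return False

table = {}
# (originator, DSN, path metric) as carried by successive PREQs:
for src, dsn, metric in [("A", 5, 40), ("A", 5, 25), ("A", 4, 1), ("A", 6, 90)]:
    if should_update_path(table.get(src), dsn, metric):
        table[src] = {"dsn": dsn, "metric": metric}
print(table["A"])   # stale DSN 4 is ignored; DSN 6 replaces the DSN-5 path
```

The trace shows the two halves of the rule: within the same DSN, only a better (lower) metric replaces the stored path, while a higher DSN always wins, even with a worse metric, because freshness takes priority over cost.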

2.3 Congestion Control Mechanisms in WMN

Congestion is an important issue in all types of networks. It occurs when the traffic load in the network is such that the number of packets sent is greater than the network capacity, which includes network resources, available bandwidth, and the buffers of the network [21]. Depending on the network type, congestion is grouped into two categories: congestion in wired networks and congestion in wireless networks [13]. In the wireless case, packet loss and queue overflow are caused by wireless factors, e.g., a higher error rate due to the wireless channel and lower bandwidth than a wired line. In a wireless network, only one node sends data at a time due to the shared nature of the wireless channel. For these reasons, growing packet delay and queue length lead to congestion.

In a WMN, traffic is aggregated at the portal nodes. Under high traffic load, MPs on the outer edges of the network suffer low throughput and high packet loss in the absence of any congestion control mechanism [13]. The standard draft outlines an optional congestion control mechanism known as the hop-by-hop congestion control mechanism [12]. In this mechanism, each MP in the WMN monitors its level of congestion by observing the incoming and outgoing traffic. When the traffic load reaches a specific threshold, i.e., the incoming data rate exceeds the outgoing data rate, the congested node notifies its immediate neighbors, which then limit their rates. As mentioned in the draft, the hop-by-hop congestion control mechanism basically consists of three steps:

Local Congestion Monitoring: Each MP in the WMN monitors its congestion level. For monitoring, a mesh node may use the queue size as a metric to identify congestion, or it may keep the queue small by regulating the incoming and outgoing data rates.

Congestion Control Signaling: When the transit queue reaches a specific level, a Congestion Control Notification Frame (CCNF) is broadcast to the immediate neighbors. On receiving the CCNF, all immediate neighbors limit their data rates based on service differentiation criteria. The CCNF also contains an expiry time for the notification. A mesh node may also send a congestion control request [19] to selected nodes to limit their data rates according to the channel rate.

Local Rate Control: Upon receiving the congestion notification frame, the nodes limit their traffic rates. The MAP also accounts for the congestion rate, as STAs are unaware of the congestion control mechanism.

In [22], a distributed adaptive mechanism with hop-by-hop feedback of congestion information is proposed. The proposed technique targets WMNs with a multi-channel MAC, where a single node has two NICs at the same time. Both NICs operate
independently. They first derive end-to-end algorithm, then further develop hopby-hop congestion control algorithm which control the source rate directly. In this algorithm the novel technique is the adjustment of transmission rate at preceding nodes after receiving congestion control indication. By assuming, each intermediate node has the total knowledge for each flow. The congestion controller on each intermediate node calculates sum of all congestion states in the path and maximum transmission rate for each flow. It is the controller responsibility to monitor data arrival rate and data transmission. It chooses the smaller one for maximum possible rate for actual transmission rate. The limitation of the proposed technique is that it uses a feedback mechanism which increases control messages in the network. The combination of two algorithms for congestion control increases processing cost on each node too and every node must require two NICs. In [23] co-ordinated congestion control algorithm is proposed to provide end-toend max min fairness for each flow. This algorithm is designed for both inter-flows and intra-flows and work with multi-hop wireless link. In this paper centralized traffic engineering approach is used, where max-min fair share of individual wireless link is computed continuously. Each link assigned a bandwidth share. Flows that share each link use their allocated share in a fair way. A gateway node is connected to the wired internet, therefore mostly traffic is passed through gateways. According to the author a gateway can act as central coordinator to perform traffic engineering. Similar fair share congestion control problem is discussed in [24]. These algorithms are fit in unfair channel sharing scenarios. Unfair channel sharing can also lead to congestion. 
However, if we consider these fair-share algorithms in scenarios where a network is congested due to a high arrival rate, we instead require a technique that reduces the arrival rate at a specific node, rather than applying a bandwidth-share algorithm to that node. Ultimately, a node can receive


and forward data only up to its maximum capacity. WCP [25] is a source-based congestion control algorithm in which the source node maintains a rate for every flow, representing the sending rate of that specific flow. WCP uses Additive Increase Multiplicative Decrease (AIMD). The algorithm is proposed for multi-hop WMNs using the IEEE 802.11 MAC. A variant, called WCPCap, estimates the capacity of the neighborhood and then shares it among contending nodes. Still, if a node receives data from multiple nodes and forwards it to other nodes, the problem remains the same. In a multi-hop WMN there is the possibility of an alternate path, over which a node could forward data at the same rate. Moreover, this algorithm is proposed for the IEEE 802.11 MAC, where no routing is handled at the MAC layer for multi-hop WMNs, so its scalability is questionable. As each node maintains a rate for each flow, and every node in the WMN is a router, additional delay is introduced; in the same scenario, battery consumption is another weak point. Different algorithms using congestion notification were devised to address different intra-mesh congestion scenarios. Total Congestion Control (TCC) and Link Selective Congestion Control (LSCC) [21] resolved some issues, but in some scenarios these algorithms do not work well. In TCC, all traffic toward the congested node is blocked upon receiving a Congestion Control Notification Frame (CCNF), which is broadcast by a node whose queue is full due to congestion. In LSCC, on receiving a CCNF, a node limits the traffic on a specific link by blocking the data packets destined for a specific node. The CCNF contains its own expiry time. In the LSCC scenario, the CCNF also identifies the congested link, and the nodes that receive the CCNF block traffic for that specific link. In Path Selective Congestion Control (PSCC) [26], on the receipt


of a CCNF, nodes block data frames only for a specific destination. In PSCC, the congested node broadcasts a CCNF to limit the traffic for a specific destination, providing this information by adding the destination address of a specific flow to the CCNF. Announcing a specific destination requires a modification of the standard CCNF. On receiving the modified CCNF, a node only stops sending data for that specific destination, but it continues to receive data destined for it. The scenario becomes more complicated when the CCNF is further broadcast to immediate neighbors in a continuous chain. These algorithms resolve congestion problems in some scenarios of multi-hop WMNs using the IEEE 802.11s MAC. Consider the congestion scenario of IEEE 802.11s shown in Figure 2.4, where the link between mesh node C and mesh node D is congested. When the queue size at node C reaches the specified maximum value, node C broadcasts a Congestion Control Notification Frame (CCNF) to its immediate neighbors to limit their traffic toward node C. Node E and node B are the neighbors of node C; on receiving the CCNF, both neighboring nodes in this scenario stop sending data destined for node C. The CCNF contains the expiry time of the congestion notification, an estimate of how long the congestion will last. If we use the TCC algorithm in this scenario, the immediate neighboring nodes stop sending data entirely; they only receive data from other nodes and queue all traffic. If the CCNF expiry time is reached, the nodes resume forwarding data; otherwise, when their own queue levels also reach the threshold value, they too broadcast CCNFs to their neighbors. This process continues, and in the end the whole network is congested. If instead we use LSCC in the current scenario, the immediate neighboring nodes will not send data for node C until the CCNF expires. During this time, node


Fig. 2.4: Congestion Control Scenario for TCC

Fig. 2.5: Congestion Control Scenario for LSCC

B will queue the packets destined for node C while the rest of the traffic continues to be forwarded. Even node C continues to send local and transit traffic to its neighboring nodes, i.e. node E and node B. When the packets buffered in the queue of node B reach the specified threshold value, node B also broadcasts a Congestion Control Notification Frame (CCNF) to its immediate neighbors; in the given scenario, node A, node F and node C are the immediate neighbors of node B. Node A and node F receive the notification frame and block their traffic for node B, while


node C is unable to receive the notification, since traffic for node C was already blocked. Hence node C, as shown in Figure 2.5, continues to send its local traffic and already-queued transit traffic to node B, which results in packet loss. The situation becomes even more complicated if node A also becomes congested and broadcasts a CCNF. In a mesh network, due to the use of multimedia and interactive Internet applications, MPs near the MPP carry greater traffic, so the chances of congestion are greater at the MPs near the MPP. When congestion occurs, packets are dropped from the queue regardless of the number of hops they have already traversed. To address this problem, we propose a technique which avoids congestion, rather than controlling it, in order to reduce packet loss and improve network throughput.
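The hop-by-hop CCNF signaling that TCC, LSCC and PSCC all build on can be sketched in a few lines. This is a minimal illustration of the three steps (monitoring, signaling, rate control), not an implementation of the IEEE 802.11s draft: the `MeshPoint` class, the 0.8 queue-occupancy trigger and the 5-unit expiry duration are our own assumptions.

```python
QUEUE_THRESHOLD = 0.8   # assumed fraction of queue capacity that triggers a CCNF


class MeshPoint:
    def __init__(self, name, queue_capacity=100):
        self.name = name
        self.queue = []
        self.capacity = queue_capacity
        self.neighbors = []          # directly reachable MPs
        self.rate_limit_expiry = 0   # time until which this MP limits its rate

    def enqueue(self, packet, now):
        # Step 1: local congestion monitoring via queue occupancy.
        self.queue.append(packet)
        if len(self.queue) >= QUEUE_THRESHOLD * self.capacity:
            self.send_ccnf(now)

    def send_ccnf(self, now, duration=5):
        # Step 2: signaling -- the CCNF goes to immediate neighbors only,
        # carrying an expiry time for the notification.
        for nb in self.neighbors:
            nb.receive_ccnf(expiry=now + duration)

    def receive_ccnf(self, expiry):
        # Step 3: local rate control until the notification expires.
        self.rate_limit_expiry = max(self.rate_limit_expiry, expiry)

    def may_transmit(self, now):
        return now >= self.rate_limit_expiry
```

Note that the sketch also exposes the chain-reaction weakness discussed above: a rate-limited neighbor keeps enqueueing incoming traffic and will itself cross the threshold and broadcast its own CCNF.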


Chapter 3 PROPOSED CONGESTION AVOIDANCE TECHNIQUE This chapter provides a detailed description of the proposed technique for congestion avoidance. It explains how the proposed technique improves on the existing protocol, and describes the pseudocode of the proposed technique.

3.1 Proposed Congestion Avoidance Hybrid Wireless Mesh Protocol (CA-HWMP) Congestion control techniques and algorithms are applied after congestion has been introduced into the network. Our proposed technique focuses on congestion avoidance rather than congestion control. Since we are working on a WMN where path selection is performed at the MAC layer, we modify its mandatory protocol for congestion avoidance. Our proposed routing protocol, Congestion Avoidance Hybrid Wireless Mesh Protocol (CA-HWMP), is a modification of the current mandatory routing protocol HWMP for IEEE 802.11s that monitors the queue size at every node. The basic information elements used in the proposed mechanism are the same as in HWMP, i.e. PREQ, PREP, PERR and RANN (Root Announcement). In the proposed routing protocol, when the queue size at node B reaches the specified value relative to the maximum virtual buffer size, node B broadcasts a CCNF frame to its neighbors. All the neighbor nodes that send data through

node B will send a PREQ to find a new path to the destination, excluding the existing paths through node B. Re-routing reduces the traffic load at a specific link or node. The remaining traffic is forwarded to the destination using the newly calculated path. This mechanism not only reduces the congestion problem but also balances the load in the multi-hop WMN. Consider the scenario given in Figure 3.1, in which nodes A, F and C are in

Fig. 3.1: Proposed CA-HWMP protocol Mechanism

the neighborhood of node B, while nodes G and H are in the neighborhood of node A. Node G sends data to node C; the optimal path selected by the routing protocol HWMP is G → A → B → C. The link between node B and node C is congested. The proposed routing protocol monitors the queue size at every mesh node, so when the queue size at node B reaches the specified threshold, the routing protocol at node B broadcasts a CCNF frame to its neighbor list. The neighboring nodes that receive the CCNF send PREQs to calculate a new path to the desired destination. All the


immediate neighbors of node A receive the PREQ and check their queue status. If a receiving node has already reached the specified queue value, it ignores the PREQ; otherwise it forwards or replies according to the flag settings in the PREQ frame. In the given scenario, nodes G, H, C and F receive the PREQ; node B, being congested, ignores the PREQ if it receives the frame at all. The receiving mesh nodes in turn forward the PREQ frame to their own neighbors. This procedure continues until the destination node is reached and sends a PREP. The new route built from source node G to the destination in the given scenario is G → A → F → I → C. Data packets that would have been queued in the absence of the congestion-avoidance protocol are now forwarded to the destination using this newly established path. This mechanism reduces the packet loss that took place due to queue overflow, and allows data transmission on an alternate route instead of waiting for the congestion notification to expire before restarting transmission on the existing path.

3.1.1 Proposed Algorithm The proposed technique CA-HWMP is basically a modification of the default routing protocol of IEEE 802.11s, i.e. HWMP. When a node has data to send to another node, it broadcasts a PREQ to its immediate neighbors. The receiving nodes forward the PREQ according to the basic rules of the default protocol and, in addition, check their queue level: if it is below the defined value, they forward the PREQ to other nodes. Finally, when the PREQ source node receives a PREP, the path is established. The modified algorithm of the proposed routing protocol for IEEE 802.11s is given below;


(Variables)
1 : SourceNodeData : Boolean variable for source node data status; 1 for true and 0 for false
2 : QueueLevel : current queue occupancy relative to the maximum queue size
3 : PREQ : Path Request
4 : PREP : Path Reply
5 : Path : one-hop path
6 : TO : Target Only flag
7 : RF : Reply and Forward flag
8 : SequenceNum : PREQ message sequence number
9 : ownSequenceNum : sequence number saved at the intermediate or destination node
(Main Algorithm)
11 : if (SourceNodeData == true OR QueueLevel >= 65%)
12 :     then the source node broadcasts PREQ
13 : upon receiving PREQ, if (QueueLevel >= 55%)
14 :     then ignore the PREQ message
15 : else if (SequenceNum > ownSequenceNum)
16 :     then update Path
17 : if (new Path created or modified)
18 :     then forward PREQ
(Flags)
19 : TO = 1 : only the destination sends PREP
20 : TO = 0 and RF = 0 : an intermediate node with a Path sends a unicast reply to the source and does not forward the PREQ
21 : TO = 0 and RF = 1 : the first intermediate node with a Path sends a reply to the source, sets TO = 1 and forwards the PREQ
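The PREQ-handling rules of the listing can be expressed as a small executable sketch. The function names, return values and the use of queue occupancy as a fraction are our own simplification, not part of the CA-HWMP patch; the 65% and 55% thresholds are taken from lines 11 and 13 of the algorithm.

```python
# Illustrative sketch of the CA-HWMP PREQ rules; helper names are assumptions.

SOURCE_QUEUE_THRESHOLD = 0.65   # line 11: source re-discovers a route at 65%
RELAY_QUEUE_THRESHOLD = 0.55    # line 13: a relay ignores PREQs at 55%


def should_broadcast_preq(has_data, queue_level):
    # Lines 11-12: a node starts path discovery when it has data to send,
    # or when its queue occupancy has reached the congestion threshold.
    return has_data or queue_level >= SOURCE_QUEUE_THRESHOLD


def handle_preq(queue_level, seq_num, own_seq_num):
    # Lines 13-18: a congested relay drops the PREQ; otherwise a fresher
    # sequence number updates the path and the PREQ is forwarded.
    if queue_level >= RELAY_QUEUE_THRESHOLD:
        return "ignore"
    if seq_num > own_seq_num:
        return "update-and-forward"
    return "discard-stale"
```

The key design point is in `handle_preq`: because congested relays silently drop the PREQ, the route that is eventually confirmed by a PREP can only run through nodes whose queues are below the threshold.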


3.2 Advantages and Limitations of CA-HWMP There are some limitations of our proposed technique, CA-HWMP. The technique works well in scenarios where alternate routes are available for re-routing traffic, and in our topologies there is roughly an 80% chance that an alternate route is available. Nonetheless, in the absence of an alternate path, the proposed technique does not work well: it loses the advantage of alternate routes, and a node simply receives packets from other nodes and queues them until the CCNF expiry time is reached, after which it resumes transmission toward the congested node. A hybrid technique that caters for both scenarios could be devised to solve this problem. Secondly, our protocol may have scalability problems. As is common practice, route calculation requires a few message exchanges between the nodes in the network; with more nodes in the network, this routing overhead increases. In a wireless network, channel contention also grows with the number of nodes, resulting in increased wait times.


Chapter 4 SIMULATION AND PERFORMANCE EVALUATION This chapter describes the simulation setup used for the comparative analysis of the two routing protocols of IEEE 802.11s, i.e. HWMP and CA-HWMP. For the simulation we selected NS-3, an open-source network simulator based on the object-oriented language C++ together with the scripting language Python. We used NS-3.14, which has built-in support for an IEEE 802.11s module. For simulation tracing we used the flow monitor module, because of a bug in NS-3.14 in generating trace files for IEEE 802.11s.

4.1 CA-HWMP Patch for NS-3 NS-3 provides a module implementation and functional support for the performance evaluation of IEEE 802.11s. As the simulator is open source, it offers the flexibility to implement new protocols. Taking advantage of this, and to evaluate our proposed protocol CA-HWMP, we successfully patched it into the mesh module of NS-3 using C++. The CA-HWMP algorithm discussed in Chapter 3 was used to write the patch.


4.2 Simulation Parameters Table 4.1 describes the general simulation parameters for the IEEE 802.11s scenarios. The simulations were performed on the Linux distribution Fedora Core 13 using NS-3.14, which has built-in support for IEEE 802.11s and provides the flexibility to modify its modules. The application used in our scenarios is the On-Off application (a CBR application), which transmits data at a constant bit rate; its data rate varies from 100Kbps to 350Kbps over the UDP transport-layer protocol. For the topology we place nodes in grids, increasing the number of nodes in both dimensions, i.e. m × n, where m represents the number of nodes on the X-axis and n the number of nodes on the Y-axis, with a separation of 170m between nodes.

Table 4.1: General Simulation Parameters
Operating System: Linux Distribution Fedora Core 13
NS-3 version: NS-3.14
WiFi Standard: IEEE 802.11s
Mobility Model: Constant Position Mobility Model
Number of Interfaces: 1
RTS/CTS: Disabled
Trace Module: Flow Monitor
Traffic: Constant-bit-rate (CBR) flows
Flow Data Rates: 100Kbps, 150Kbps, 200Kbps, 250Kbps, 300Kbps, 350Kbps
Packet Size: 1024 bytes
Transport Layer Protocol: UDP
Routing Protocols at MAC: HWMP, CA-HWMP
Number of Nodes in Grid: 4, 9, 16, 25, 36, 49, 64
Transmission Range: 170m
Simulation Time: 240sec


4.3 Evaluation Parameters For the analysis we use three metrics: throughput, Packet Delivery Fraction (PDF) and average end-to-end delay. These three parameters are the most affected by congestion.

4.3.1 Throughput The number of bits successfully transmitted from source to destination is the throughput. We calculate it at the application layer by applying appropriate filters to the trace file. The throughput formula is given in Equation 4.1.

Throughput(kbps) = (NrecvBytes × 8) / (Nstime × 1000)    (4.1)

NrecvBytes = total number of bytes received
Nstime = total simulation time

4.3.2 Average End-to-End Delay Delay is the difference between the packet receiving time and the packet transmitting time. The average end-to-end delay is calculated for all scenarios at the receiving end using Equations 4.2 and 4.3.

AverageEndToEndDelay = Tdelay / TPktR    (4.2)

Tdelay = ∑ (Trecv − Tsent), summed over the received packets i = 1 … n    (4.3)

Tdelay = total delay
TPktR = total number of packets received
Trecv = packet receiving time
Tsent = packet sending time
n = total number of received packets

4.3.3 Packet Delivery Fraction (PDF) To observe the network behavior closely, the packet delivery fraction is calculated, since we are dealing with a congestion avoidance mechanism. In the case of congestion, the PDF drops, and applying a congestion avoidance or congestion control mechanism should yield a greater PDF value. The PDF is calculated from the total number of transmitted packets and the total number of received packets in the network, as the percentage of packets delivered, using the formula given in Equation 4.4.

PDF = (NRec / NTr) × 100    (4.4)

NRec = total number of received packets
NTr = total number of transmitted packets
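Equations 4.1 to 4.4 translate directly into a few lines of post-processing code. The function names and arguments here are illustrative, not our actual flow-monitor scripts.

```python
def throughput_kbps(recv_bytes, sim_time_s):
    # Eq. 4.1: Throughput(kbps) = (NrecvBytes * 8) / (Nstime * 1000)
    return (recv_bytes * 8) / (sim_time_s * 1000)


def average_delay(recv_times, sent_times):
    # Eqs. 4.2-4.3: total of per-packet delays (Trecv - Tsent),
    # divided by the number of received packets.
    delays = [r - s for r, s in zip(recv_times, sent_times)]
    return sum(delays) / len(delays)


def pdf_percent(received, transmitted):
    # Eq. 4.4: PDF = (NRec / NTr) * 100
    return (received / transmitted) * 100
```

For example, 30000 bytes received over the 240-second simulation of Table 4.1 corresponds to a throughput of 1 kbps.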

4.4 Simulation Scenarios and Parameters For the comparative analysis, we selected the following simulation scenarios and parameters:
• Effect of Application Data Rate on Throughput
• Effect of Application Data Rate on Packet Delivery Fraction (PDF)
• Effect of Application Data Rate on End-to-End Delay

We considered multiple mesh simulation scenarios that vary the availability of alternate routes. We chose the grid topology


Fig. 4.1: m × n Mesh Topology

(m × n), in which each node in the topology also acts as a relay station, i.e. an MP. In the grid topology, m indicates the number of nodes on the X-axis and n the number of nodes on the Y-axis, as shown in Figure 4.1. Initially we use a 2 × 2 grid and then increase the values of m and n additively.
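The m × n grid placement with the 170m separation of Table 4.1 can be generated as follows; `grid_positions` is an illustrative helper, not part of our NS-3 scripts.

```python
SEPARATION_M = 170   # node separation from Table 4.1, in metres


def grid_positions(m, n, sep=SEPARATION_M):
    # Nodes are numbered row by row; node (i, j) sits at (j*sep, i*sep),
    # m columns along the X-axis and n rows along the Y-axis.
    return [(j * sep, i * sep) for i in range(n) for j in range(m)]
```

A 2 × 2 call yields the 4-node grid of the first scenario, and 8 × 8 yields the largest 64-node grid.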

4.4.1 Effect of Application Data Rate on Throughput In this scenario the mesh grid topology is considered, with 50% of the nodes generating traffic in the network. The source and destination nodes are chosen at run time to make the scenario more realistic: as in a real network, only nodes that have data to send try to access the channel, and it is not necessary that all nodes send data all the time. We restrict the traffic flows to 50% of the nodes so that we can observe the benefits of the alternate paths available in the network. To observe the effect of the application data rate on network throughput, we run CBR traffic varying from 100Kbps to 350Kbps while the node grid

varies from 4 to 64 nodes; the maximum device data rate is 350Kbps. We calculate the average throughput of the network. Figures 4.2 to 4.7 show the effect on network throughput at different node densities and data rates. In the graphs, the variation of nodes in the grid is shown on the X-axis, where the mesh grid varies from 2 × 2 to 8 × 8, while the variation in throughput is shown on the Y-axis. Consider Figure 4.2, which shows the network behavior at a 100Kbps application data rate while moving from a sparse to a dense network. With the topology of Figure 4.1 and a 4-node mesh grid, all nodes are within direct reach of each other and can send data directly. When we increase the grid size from 2 × 2 to 3 × 3, both protocols again behave the same. The reason is that in a 9-node grid (3 nodes on the X-axis and 3 on the Y-axis), when node 1 sends data to node 9, the intermediate node receives data from the source node and forwards it to the destination; on the best path, at most one hop is involved between source and destination. Since the device data rate is also greater than the application data rate, an intermediate node can easily forward data on behalf of up to 3 neighbor nodes, but when more than 3 nodes send data through an intermediate node, it queues the excess packets. This condition later leads to congestion. Although the chances of congestion at intermediate nodes are small, the throughput observed with CA-HWMP is slightly better than with HWMP due to the use of alternate paths. Increasing the number of nodes from 9 to 16 gives more alternate path options. Here the device sending rate is greater than the application data rate; nevertheless, intermediate nodes can become congested as they receive data from multiple nodes to forward.
However, at 100Kbps we observed very little improvement, as the data rate is low and the queued data mostly remained below the threshold value. When we increase the number of nodes in the grid from


25, 36, 49 and 64, more nodes send data in the network, which also increases network throughput. The throughput of the network increases while moving from sparse to dense mode, as more nodes are available to send and share data with one another. The slope of the HWMP curve shows that increasing the number of nodes in the grid increases throughput, due to the increased probability of sending nodes in the network. The graph lines in Figure 4.2 show that CA-HWMP performs better than HWMP, as CA-HWMP chooses the second-best path in case of congestion at intermediate nodes. Now consider another scenario for observing the effect of the application data rate on

Fig. 4.2: Throughput Comparison of HWMP and CA-HWMP at 100Kbps

throughput, where we change the application data rate from 100Kbps to 150Kbps. The number of nodes in the grid varies from 4 to 64, as shown in Figure 4.3. With 4 nodes in the network, all nodes are within direct reach and can communicate with each other directly. When the number of nodes increases from 4 to 9, there is still little opportunity for alternate paths, therefore


the throughput observed using both routing protocols is the same. When the number of nodes in the grid increases to 16 under HWMP, more intermediate nodes are available for forwarding data; this phenomenon depends on the source and destination nodes in the network. This time, as the application data rate is greater than in the previous scenario, an intermediate node has the margin to forward the data of two nodes at the same time. If an intermediate node receives data from more than 2 nodes, it queues the packets to be forwarded, and when the queue becomes full it drops packets from the queue. As the number of nodes in the grid increases, this throughput degradation grows under HWMP. With 36 nodes in the grid, since the source and destination nodes are chosen at run time, the number of hops increases, and multiple flows through intermediate nodes result in packets being dropped from the queue. This packet drop ratio increases as more nodes enter the network to communicate, and with 49 nodes in the grid the throughput degradation is stronger still. As we move from a sparse to a dense network, newly entering nodes generate more data to send and share with other nodes. However, there are also negative effects: moving from a sparse to a dense network brings a greater exchange of control messages to manage the network, and greater contention for channel access, which increases the chances of data collisions and re-transmissions. Intermediate nodes, which forward data on behalf of their neighbors, become congested due to the greater data arrival rate and lower forwarding rate, and in this scenario packets are dropped from the queue. When we use CA-HWMP instead of HWMP, as the graph lines in Figure 4.3 show, we observe greater throughput compared to HWMP. This is because CA-HWMP takes advantage of alternate paths when an intermediate node on the already-selected path becomes congested.
In the case of 64 nodes in a grid this gain is greater, as more alternate paths are available, while in the case of 4 and 9 nodes the


gain is zero for two reasons: first, no alternate path is available; and second, there is little chance of packet loss, as almost all nodes can communicate directly, or at most one hop is involved in forwarding data to the destination. Consider Figure 4.4, where the graph shows the effect on network throughput

Fig. 4.3: Throughput Comparison of HWMP and CA-HWMP at 150Kbps

when the data rate increases to 200Kbps and the number of nodes varies from 4 to 64. In this scenario, throughput degradation is clearly visible when we use the default routing protocol, HWMP. For 4- and 9-node grids there is no such effect, but as the number of nodes varies from 16 to 49, throughput degrades due to the involvement of intermediate nodes. At a 200Kbps data rate, since a node can transmit at most 350Kbps, an intermediate node receiving the data of 2 nodes at the same time forwards data up to its maximum sending rate and queues the remainder; when the queue becomes full, it drops packets, which degrades the throughput of the network. The results show that with a greater number of nodes and with the


increase in intermediate hops, throughput degrades due to the greater exchange of control messages, collisions, and packets dropped from the queue. Both protocols show this degradation; however, CA-HWMP shows a 13-18% gain in throughput, depending on the traffic generated in the network. In the next step, we increase the data rate to 250Kbps. The effect of the data rate on

Fig. 4.4: Throughput Comparison of HWMP and CA-HWMP at 200Kbps

throughput as we increase the number of nodes through 4, 9, 16, 25, 36, 49 and 64 is shown in Figure 4.5. With 4 and 9 nodes in the network, we observed similar behavior from both routing protocols. In the 9-node grid, however, a single node sits at the junction of all the remaining nodes. In this scenario, the application sending rate is 250Kbps and the node's maximum sending rate is 350Kbps; therefore, when the junction node receives data from multiple nodes, it initially queues the data and then eventually drops it from the queue, degrading the throughput of the network. As there is little availability of alternate paths and the fewest nodes in the grid,


CA-HWMP also performs poorly. Increasing the number of nodes in the grid from 9 to 16 allows more data to be disseminated in the network; however, there is also a chance of congestion at intermediate nodes, which degrades throughput under HWMP. This throughput degradation increases as more nodes enter the network: we observed an almost constant change in throughput as we increased the number of nodes in the grid, even though more nodes were available to send data. When we use CA-HWMP in the same scenario, with a data rate of 250Kbps and the number of nodes varying from 16 to 64, we observe better performance compared to the default routing protocol. This improvement is due to the availability of alternate paths: the protocol re-routes traffic onto an alternate path when the already-chosen path is blocked by congestion. The CA-HWMP curve shows that when we increase the number of nodes in the grid from 49 to 64, the improvement is almost the same as that achieved with the 49-node grid, showing that the benefit of alternate paths has its limits. Although increasing the number of nodes in the grid increases the availability of alternate paths, it also increases the number of control messages, channel contentions and collisions. Consider Figure 4.6, where the graphs show the behavior of the network throughput when the application data rate is 300Kbps. HWMP performs well with the 4-node grid. When we increase the number of nodes from 4 to 9, throughput degrades; the main reason is again the single node at the junction of all nodes. With a 300Kbps application data rate and a 350Kbps device forwarding rate, when multiple nodes forward data through this intermediate node, congestion occurs and throughput degrades. As few alternate paths are available, there is no advantage in using CA-HWMP, and both protocols perform almost alike.
When we increase the number of nodes from 9 to 16, we gain throughput using CA-HWMP. By increasing the number


Fig. 4.5: Throughput Comparison of HWMP and CA-HWMP at 250Kbps

of nodes in the network, more nodes are available to share data and there are more chances of alternate paths being available. This gain in throughput increases as we increase the number of nodes from 16 to 25, 36 and 49, and the performance difference between the two protocols is most significant when the number of nodes increases from 36 to 49. CA-HWMP takes advantage of alternate paths to improve throughput: with all other factors constant, HWMP performance degrades due to the greater traffic load at intermediate nodes, while CA-HWMP performs better thanks to the availability of alternate paths, performing best with the 49-node grid. The CA-HWMP curve shows that when we increase the number of nodes in the grid from 49 to 64, the throughput gain shrinks compared to the 49-node grid. There are limits here, as we can get the benefits of alternate paths only up to a certain number of nodes: although increasing the number of nodes in the grid increases the availability of alternate paths, it also adds control messages to manage the network, channel contentions and re-transmissions.

To observe the effect of the application data rate on throughput, we set the application

Fig. 4.6: Throughput Comparison of HWMP and CA-HWMP at 300Kbps

data rate equal to the sending data rate of the device, restricting the application to send data at 350Kbps where the device sending rate is also 350Kbps. In this scenario, when the number of nodes in the grid varies from 4 to 9, both routing protocols perform the same, as noticed in the previous scenarios. When the number of nodes varies from 9 to 49, CA-HWMP performs better than HWMP, and this gain increases with the number of nodes in the grid; CA-HWMP gets the benefit of alternate paths and re-routes traffic onto the second optimal path when congestion occurs at intermediate nodes. When we increase the number of nodes from 49 to 64, throughput degrades under HWMP because, with the greater number of nodes, the number of hops between source and destination increases; intermediate nodes that forward multiple nodes' data drop packets from their queues, as the arrival data rate exceeds the forwarding data rate. CA-HWMP still performs


better, but its performance is also affected by other factors such as the greater exchange of control messages, routing protocol messages and channel contention. Therefore, the CA-HWMP curve in Figure 4.7 shows a decline in throughput when we increase the number of nodes in the grid from 49 to 64; in the 49-node grid, CA-HWMP's performance is most significant. To conclude the effect of the application data rate on network throughput, it

Fig. 4.7: Throughput Comparison of HWMP and CA-HWMP at 350Kbps

is observed that increasing the application data rate also increases the network throughput. This increase in throughput is limited to a specific data rate and number of nodes in the network: the maximum throughput was observed at the 300Kbps data rate with 49 nodes in the grid. All graphs show the trend that CA-HWMP performs better when the number of nodes in the network is between 25 and 64, with the maximum gain at a 300Kbps data rate in the 49-node grid. This shows the limitation of the proposed routing protocol for congestion avoidance: in the presence of alternate paths its performance is


better, but beyond a certain number of nodes its performance degrades.

4.4.2 Effect of Application Data Rate on Packet Delivery Fraction (PDF) To examine the results more closely, we calculated the packet delivery fraction to compare the network behavior at increasing data rates against the expected behavior. The packet delivery fraction gives the percentage of data received at the receiving nodes. Figures 4.8 to 4.13 show the PDF for HWMP and CA-HWMP; the values on the X-axis indicate the number of nodes in the grid, while the values on the Y-axis indicate PDF. Tables 4.2 to 4.7 give the PDF values for HWMP and CA-HWMP as the application data rate varies from 100Kbps to 350Kbps and the number of nodes varies from 4 to 64. This scenario is important because, when we move from a sparse to a dense network, more nodes contend for channel access, which increases the chances of collisions and packet loss; through the PDF we can monitor the actual gain of the proposed technique. As the number of nodes in the grid increases, the PDF decreases due to the larger number of intermediate hops, increased contention for channel access, and the increase in control messages. On top of these factors, as the number of flows in the network increases, the PDF drops at the intermediate nodes (MPs) due to packets dropped from queues, since no congestion mechanism is available in HWMP. Consider a scenario where the application data rate is 100Kbps and the number of nodes in the grid varies from 4 to 64. Table 4.2 shows the PDF values at a 100Kbps application rate while moving from a sparse to a dense network. There is only a minor difference in the PDF values of HWMP and CA-HWMP when the number of nodes in the grid is 4 or 9. In the case of 4 nodes, all nodes are in


Table 4.2: PDF when Data Rate is 100Kbps

No. of Nodes    HWMP     CA-HWMP
     4          98.43     98.50
     9          98.57     98.32
    16          82.59     94.65
    25          75.11     89.20
    36          68.03     83.32
    49          58.28     77.21
    64          52.57     65.41

the direct access of each other, while in the case of 9 nodes there is only one node at the junction and, because of the low application data rate, its queue level remains below the congestion notification threshold. The node data sending rate here is also more than three times the application data rate, so the node at the junction can forward data on behalf of three neighbor nodes. In the case of the 4 × 4 grid, the PDF drops because of the increase in the number of nodes, hops, channel contentions, collisions and re-transmissions. This degradation grows as we keep increasing the number of nodes while keeping the data rate constant. With the HWMP routing protocol, the best path from source to destination usually passes through the nodes at the junctions of edge nodes. Most traffic passes through these nodes, creating a bottleneck at intermediate nodes; when congestion occurs, packets are dropped from the queue. Now consider the same scenario, but with CA-HWMP instead of HWMP. When congestion occurs at an intermediate node due to traffic from multiple nodes, CA-HWMP re-routes traffic onto the second optimum path to balance the load, which improves the packet delivery fraction. Figure 4.8 shows this behavior of both protocols when the data rate is 100Kbps and the device sending rate is 350Kbps, with the number of nodes in the grid varying from 4 to 64. Both graph lines show that when the number of


nodes increases in the network, PDF decreases. However, when we use CA-HWMP instead of HWMP, we observe a gain in packet delivery fraction.

Fig. 4.8: PDF Comparison of HWMP and CA-HWMP at 100Kbps

Now consider a scenario whose results are shown in Figure 4.9 and Table 4.3. The topology is the same as in the previous scenario. Keeping all other parameters constant and increasing the data rate from 100Kbps to 150Kbps, there is no significant difference between the two protocols for the 4- and 9-node grids. This is due to little or no availability of alternate paths. But when the number of nodes increases to 16, with 50% of nodes acting as flows, more alternate paths are available compared to the 4- and 9-node grids. Using HWMP, PDF decreases due to congestion at intermediate nodes, while using CA-HWMP, PDF increases compared to HWMP because traffic is re-routed onto the alternate path. Keeping the same data rates, when we increase the number of nodes to 25, more nodes are available to disseminate data in a


network, and comparatively more paths are available to the same destination. Therefore, the behavior of CA-HWMP is better compared to HWMP. Statistics show that 15% of packets are dropped from the queue due to congestion and 12% are lost due to other wireless reasons; when we use CA-HWMP instead of HWMP, the PDF increases by 14%. When we increase the number of nodes to 36, PDF decreases compared to the 25-node network, and this behavior becomes more pronounced as we keep increasing the number of nodes in the grid. With 64 nodes in the network, PDF decreases by 50%. The reason behind this degradation is that packets are dropped from the queues of intermediate nodes because the receiving rate exceeds the forwarding rate. The queue is not the only cause of packet drops; packets are also lost for other wireless reasons. CA-HWMP improves the PDF in all scenarios where the number of nodes varies from 16 to 64, depending upon the availability of alternate paths.

Table 4.3: PDF when Data Rate is 150Kbps

No. of Nodes    HWMP       CA-HWMP
     4          98.39075    99.95
     9          98.476      97.70
    16          80.49825    91.83
    25          73.07125    87.03
    36          64.1695     80.24
    49          57.27275    72.85
    64          51.2876     62.02

When we use CA-HWMP instead of HWMP, there is a gain in the PDF. This gain applies only to those packets that would otherwise drop due to congestion at intermediate nodes. Overall, PDF still decreases as the number of nodes increases, since this increase leads to channel contention, the exchange of more control messages to manage the network, and more hops between the source and destination.
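For reference, the PDF figures tabulated in this section follow the usual definition: the percentage of sent application packets that reach their destinations. A minimal sketch of the computation (the counter names are illustrative, not taken from the simulation scripts):

```python
def packet_delivery_fraction(packets_received, packets_sent):
    """PDF (%) = received data packets / sent data packets * 100."""
    if packets_sent == 0:
        return 0.0
    return 100.0 * packets_received / packets_sent

# e.g. 9843 of 10000 packets delivered
print(packet_delivery_fraction(9843, 10000))  # -> 98.43
```

Drops at intermediate MP queues, collisions and re-transmission failures all reduce the numerator, which is why PDF falls as the grid grows denser.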

Fig. 4.9: PDF Comparison of HWMP and CA-HWMP at 150Kbps

Now consider another scenario, shown in Figure 4.10 and Table 4.4, where we increase the application data rate from 150Kbps to 200Kbps while the network grid topology varies from 4 to 64 nodes, keeping all other parameters constant. For the grid topologies of 4 and 9 nodes, the PDF of both routing protocols is almost the same. When we increase the number of nodes from 9 to 16 and use HWMP, the PDF decreases to 76%. The reason is that as the number of nodes grows, the number of hops between source and destination also grows (depending upon the communicating nodes). Intermediate nodes that forward data on behalf of other nodes get congested when they receive data from multiple nodes and drop packets from their queues. However, congestion is not the only reason for the PDF decline. When we increase the number of nodes in the network, there are more nodes available


to share data, so there are more chances of data collisions in the network. An increase in the number of nodes also increases the proportion of control messages needed to manage the network. Together, these problems cause the decline of PDF in the network. Statistics show that in the 16-node grid almost 15-17% of packets are lost due to congestion, while the remaining losses are due to other wireless reasons. When the number of nodes in the grid varies from 25 to 64, the decline in PDF grows for the same reasons discussed in the above scenario.

Table 4.4: PDF when Data Rate is 200Kbps

No. of Nodes    HWMP       CA-HWMP
     4          98.86435    98.80
     9          98.6105     98.91
    16          78.212      93.51
    25          71.38985    89.02
    36          63.25845    82.91
    49          53.1844     73.021
    64          47.60315    60.04

Now consider CA-HWMP in the same scenario: we observe a significant improvement in PDF. Although PDF still decreases with an increasing number of nodes, this decline is smaller than with the HWMP routing protocol. The main reason is that when an intermediate node's queue level reaches its threshold value, the protocol re-routes traffic onto alternate paths instead of waiting. When the number of nodes varies from 16 to 64, the PDF gain increases depending upon the availability of alternate paths. The graph lines in Figure 4.10 show that CA-HWMP performs better when the number of nodes varies from 16 to 64, performing best in the 49-node network. Its performance declines again at 64 nodes due to other wireless reasons such as collisions from channel contention, more network control messages, and re-transmissions.
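The re-routing decision described here can be sketched as follows. This is an illustrative model only (the queue limit, threshold ratio and method names are assumptions; the actual work modifies HWMP inside ns-3):

```python
class MeshPoint:
    """Illustrative sketch of CA-HWMP's congestion reaction at a mesh point."""

    def __init__(self, queue_limit=50, threshold_ratio=0.8):
        self.queue = []
        self.queue_limit = queue_limit
        # CCNF is broadcast when queue occupancy crosses this threshold
        self.threshold = int(queue_limit * threshold_ratio)
        self.congested = False

    def enqueue(self, packet):
        if len(self.queue) >= self.queue_limit:
            return False              # queue full: packet dropped (HWMP behavior)
        self.queue.append(packet)
        if len(self.queue) >= self.threshold and not self.congested:
            self.congested = True
            self.broadcast_ccnf()     # notify one-hop neighbors
        return True

    def broadcast_ccnf(self):
        # Neighbors receiving the CCNF re-route traffic for this node
        # onto their second-best (alternate) path, if one exists.
        pass


def next_hop(paths, congested_nodes):
    """Pick the best path that avoids congested intermediate nodes.

    paths are assumed sorted best-first (e.g. by the airtime metric)."""
    for path in paths:
        if not any(hop in congested_nodes for hop in path):
            return path
    return paths[0]                   # no alternate: fall back to the best path
```

The key difference from plain HWMP is the threshold check in enqueue: congestion is signaled before the queue overflows, so neighbors can divert traffic while packets are still being delivered.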

Fig. 4.10: PDF Comparison of HWMP and CA-HWMP at 200Kbps

Now consider the same scenario where the number of nodes varies from 4 to 64, with an application data rate of 250Kbps and 50% of nodes acting as flows. With 4 nodes in the network, both routing protocols behave similarly because all nodes can communicate directly. With 9 nodes, there is only one node at the junction through which most nodes forward data, as it lies on the best path. Here the application data rate is 250Kbps and the node data sending rate is 350Kbps, so the intermediate node can easily forward the data of one or two nodes as a relay. When multiple nodes send data through the intermediate node, it gets congested and most packets are lost from its queue. CA-HWMP performs similarly to HWMP in this scenario due to little or no availability of alternate paths. If we increase the number of nodes from 9 to 16, the PDF decreases to 76%, and this decrease grows with the number of nodes in the network.
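The relay-capacity argument here is simple arithmetic: with a device sending rate of 350Kbps, a junction node can carry only one extra 250Kbps flow, whereas at 100Kbps it could carry three. A small illustrative check (ours, not from the thesis tooling):

```python
def relay_capacity(device_rate_kbps, app_rate_kbps):
    """Number of full-rate flows a relay can forward without its queue growing."""
    return device_rate_kbps // app_rate_kbps

print(relay_capacity(350, 100))  # -> 3: a junction can relay three 100Kbps flows
print(relay_capacity(350, 250))  # -> 1: only one 250Kbps flow fits
```

Any traffic beyond this capacity accumulates in the relay's queue, which is exactly the congestion condition CA-HWMP reacts to.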


However, when we use CA-HWMP in the same scenario, it performs better than HWMP due to the greater availability of alternate paths as the number of nodes increases, as shown in Table 4.5 and Figure 4.11.

Table 4.5: PDF when Data Rate is 250Kbps

No. of Nodes    HWMP       CA-HWMP
     4          97.89525    98.75
     9          91.8861     90.82
    16          76.99835    92.431
    25          68.94635    85.976
    36          59.44285    76.342
    49          46.8878     66.912
    64          40.7997     56.432

The CA-HWMP graph line shows that the PDF gain increases with the number of nodes. With 64 nodes in the network, this gain decreases: as the number of nodes grows, node interference, contention, collisions and control messages intensify, and these factors together degrade throughput. The CA-HWMP and HWMP graph lines show that CA-HWMP performs better; its PDF gain increases with the number of nodes, but beyond a certain point it decreases again as nodes are added to the mesh grid. Now consider a scenario where the application data rate is 300Kbps and the number of nodes varies from 4 to 64. With 4 nodes, all nodes communicate directly with each other without the involvement of any intermediate nodes. With HWMP, when we increase the number of nodes from 4 to 9, the PDF decreases due to the involvement of intermediate hops. With CA-HWMP in the same scenario, there is again little or no availability of alternate paths, so CA-HWMP does not perform better than HWMP. But when we add more nodes to the network, i.e. 16, 25, 36,

Fig. 4.11: PDF Comparison of HWMP and CA-HWMP at 250Kbps

49 and 64 nodes, PDF decreases for both routing protocols; however, CA-HWMP performs better than HWMP. PDF decreases with the number of nodes due to increased node interference, contention and greater chances of collision. This decrease is also due to congestion at intermediate nodes, as every node in a mesh network also acts as a relay. Intermediate nodes receive data from multiple nodes and forward it onward; when a node's sending data rate is less than its receiving data rate, it queues the excess packets until the queue becomes full and packets are finally dropped. CA-HWMP performs better because when the queue at an intermediate node reaches the threshold value, it re-routes traffic to an alternate path. Therefore, we observe that the PDF gain of CA-HWMP increases with the number of nodes. In Figure 4.12, the X-axis represents the number of nodes in the grid while the Y-axis represents PDF. The graph lines show that increasing the number of nodes decreases PDF for both routing protocols. In case we

Table 4.6: PDF when Data Rate is 300Kbps

No. of Nodes    HWMP       CA-HWMP
     4          98.41365    98.8858
     9          92.3085     91.003
    16          70.6014     85.34
    25          63.20095    82.389
    36          52.4368     74.231
    49          38.00085    60.34
    64          34.55055    48.012

use HWMP, this decrease in PDF is greater; with CA-HWMP it is smaller.

Fig. 4.12: PDF Comparison of HWMP and CA-HWMP at 300Kbps

Now consider the same scenario where the number of nodes varies from 4 to 64, but with the application data rate increased from 300Kbps to 350Kbps. For the 4- and 9-node grids, both protocols behave the same. But when we increase the number of nodes from 9 to 16, the PDF decreases due to the greater data rate.

Intermediate nodes receive data from multiple nodes; in this situation an intermediate node receives more data than its maximum sending limit, so the queue becomes full and packets are eventually dropped due to the high arrival rate. If we keep increasing the number of nodes, the packet delivery fraction decreases. As discussed in the previous scenarios, PDF decreases for several reasons: for example, congestion at intermediate nodes when multiple nodes share the same path, and the greater number of nodes also increases contention and hence collisions. All these factors together lead to packet loss in the network.

Table 4.7: PDF when Data Rate is 350Kbps

No. of Nodes    HWMP       CA-HWMP
     4          95.0552     95.23
     9          89.9315     91.02
    16          68.5411     80.432
    25          59.42485    75.432
    36          45.7396     65.387
    49          33.41485    55.329
    64          27.1037     38.32

In Figure 4.13, the X-axis represents the number of nodes and the Y-axis represents PDF (%). The graph lines of both protocols show that PDF decreases as the number of nodes increases, and that this decrease is greater for HWMP and smaller for CA-HWMP.

Fig. 4.13: PDF Comparison of HWMP and CA-HWMP at 350Kbps

To conclude, the PDF values depend on the application data rate and the maximum sending limit of a node. If the application generates more packets than the device's maximum sending limit, the excess packets are dropped at the application layer. When the application generates packets at a rate equal to or less than the device sending rate, no packets are lost at the application layer, and there is a good chance that all packets are received at the destination when only a single hop is involved between source and destination. When an intermediate node receives data from multiple nodes at the same data rate to forward onward, packets drop from its queue due to the excessive arrival rate. In our scenario, source and destination nodes are chosen randomly, so if intermediate MPs need to forward data for multiple nodes and the receiving rate exceeds the sending rate, the MPs queue all excess packets. When the queue level reaches the maximum queue size, packets are dropped, which decreases the PDF of the network. The PDF values decrease as we increase the application data rate and the number of nodes. For the same scenario, CA-HWMP behaves differently from HWMP: when the queue of an intermediate MP reaches the defined threshold value, it re-routes traffic onto the second best possible

path. Therefore, PDF improves in CA-HWMP. Figures 4.8 to 4.13 show the difference between the two protocols. With HWMP, as the data rate increases from 100Kbps through 150Kbps, 200Kbps, 250Kbps and 300Kbps to 350Kbps, the MPs that forward data on behalf of other mesh nodes drop packets from their queues when they become congested. When we use CA-HWMP instead of HWMP, we get a significant PDF gain due to re-routing.

4.4.3 Effect of Application Data Rate on End-to-End Delay

End-to-end delay is also an important parameter. To check the effect of the application data rate on delay while moving from a sparse to a dense network, we calculated the delay (s) for scenarios where the data rate varies from 100Kbps to 350Kbps and the number of nodes varies from 4 to 64, with all other parameters as defined in Table 4.1. The network topology is a grid of m × n dimensions as given in Figure 4.1, with m nodes along the X-axis and n nodes along the Y-axis. In our scenarios m and n each take the values 2, 3, 4, 5, 6, 7 and 8, so the grid varies from 2 × 2 to 8 × 8. Consider a scenario where the application data rate is 100Kbps and the number of nodes varies from 4 to 64. With 4 nodes, all nodes are within direct access of each other, so the observed delay is very small. When we increase the number of nodes, delay increases for both protocols due to the increase in intermediate hops, channel contention, collisions and re-transmissions, as shown in Figure 4.14. Keeping all other parameters constant, when we increase the data rate from 100Kbps to 150Kbps, delay increases as the number of nodes grows from 4 to 64. Figure 4.15 shows the delay for HWMP and CA-HWMP, where both bars indicate that delay increases with the number of nodes; however,

Fig. 4.14: Delay Comparison of HWMP and CA-HWMP at 100Kbps

there is an almost negligible difference in delay between the two routing protocols. Initially, when there are fewer nodes in the network, the observed delay is small too, but when the number of nodes increases to 49 and 64, the observed delay is greater than at the 100Kbps application data rate. The reason is the increase in the number of hops. Intermediate nodes act as relays and forward data on behalf of neighboring nodes. When they receive data from multiple nodes and their sending rate is less than their receiving rate, they queue packets and forward them once they get channel access. Packets remaining in the queue increase delay, and the growing number of nodes also increases the number of hops, adding further delay in the network. After observing the delay at a 150Kbps application data rate, we increase the data rate from 150Kbps to 200Kbps. By increasing the number of nodes in the


Fig. 4.15: Delay Comparison of HWMP and CA-HWMP at 150Kbps

network, delay increases due to greater contention and more hops. Figure 4.16 shows that the observed delay of both protocols is the same, or negligibly greater in the case of CA-HWMP. With 49 and 64 nodes in the network, the packet loss ratio increases due to contention, collisions, re-transmissions and the increase in hops, so the delay observed in HWMP is lower. CA-HWMP performs better and increases the packet delivery fraction, and since delay is calculated over received data packets, a greater delay is observed in CA-HWMP; however, this delay is negligible. Now consider a scenario where the application data rate is 250Kbps and the number of nodes varies from 4 to 64. Keeping all other parameters constant, the observed delay is greater than in all previous scenarios. This delay grows with the number of nodes: more nodes share data, which increases channel contention, and the number of hops increases, where

Fig. 4.16: Delay Comparison of HWMP and CA-HWMP at 200Kbps

channel contention on each hop adds delay. Figure 4.17 charts how delay increases as we increase the number of nodes. As discussed in the PDF section, the packet delivery fraction of CA-HWMP is greater than that of HWMP, and delay is calculated over received packets. With 49 and 64 nodes in the network, we observe greater delay for CA-HWMP and less for HWMP. With HWMP, delay is added at the intermediate nodes and increases with the number of hops between source and destination. With CA-HWMP, when an intermediate node gets congested, traffic is re-routed and forwarded on an alternate path; CA-HWMP chooses the second optimum path, which may add delay, but this increase is very small. Now consider a scenario where the number of nodes varies from 4 to 64 and the application data rate is 300Kbps. When the number of nodes increases, more delay is introduced in the network. The main reason for this increase in delay


Fig. 4.17: Delay Comparison of HWMP and CA-HWMP at 250Kbps

is the growth in contention, hops, collisions and re-transmissions as the number of nodes increases. In Figure 4.18, the bars of both protocols indicate insignificant differences in delay. However, when the grid has 49 or 64 nodes, delay increases for both protocols, and the delay of CA-HWMP is greater than that of HWMP. Delay is computed over the packets received at the destination, and in HWMP fewer packets are received than in CA-HWMP; therefore, less delay is observed in HWMP. This difference in delay is small enough to be ignored. Figure 4.19 shows the effect of the application data rate on delay while varying the number of nodes in the grid. In the given chart, the bars of both protocols show an insignificant delay difference between the two compared protocols. With CA-HWMP, more packets are received than with HWMP. End-to-end delay is computed over the received


Fig. 4.18: Delay Comparison of HWMP and CA-HWMP at 300Kbps

packets at the destination as the ratio of total delay to total packets received, where each packet's delay is the difference between its receiving time and its sending time. In our proposed technique, we observed greater PDF values with negligible extra delay.

Fig. 4.19: Delay Comparison of HWMP and CA-HWMP at 350Kbps

From Figures 4.14 to 4.19 we conclude that delay increases with the number of nodes in the grid. When new nodes enter the network, throughput also increases because more nodes are available to disseminate data, but the increase in the number of nodes in the mesh grid also increases channel contention, and the longer wait for channel access adds delay. We fixed all other parameters in both protocol scenarios and observed that when we increase the number of nodes in the grid, delay also increases due to the delay in channel access: packets remain in the queue until the node gets a chance to transmit. The same holds when we increase the application data rate: delay also increases because processing time at every node increases, and with the disproportionate arrival rate, packets either remain queued for transmission or are dropped from the queue. Transmission attempts also increase the delay. In the given scenarios, the delay difference between the two protocols is negligible.
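The delay computation just described (per-packet receiving time minus sending time, averaged over delivered packets only) can be sketched as follows; the record format is an assumption, not the trace format of the simulations:

```python
def average_end_to_end_delay(records):
    """records: (send_time_s, recv_time_s) pairs for *received* packets only.

    Dropped packets contribute nothing, which is why the protocol with the
    higher PDF (CA-HWMP) can show a slightly larger average delay."""
    if not records:
        return 0.0
    total_delay = sum(recv - send for send, recv in records)
    return total_delay / len(records)

# Averages 20 ms and 40 ms one-way delays (~0.03 s)
print(average_end_to_end_delay([(0.000, 0.020), (0.100, 0.140)]))
```

Because only delivered packets enter the average, comparing delay across protocols with different PDF values should always be read alongside the PDF results.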


Chapter 5

Conclusions and Future Work

This chapter concludes the research work and discusses recommendations for future work.

5.1 Conclusions

The use of mobile wireless technology for Internet access is increasing at a rapid pace. Moreover, Internet traffic patterns show that users are increasingly interested in interactive applications; these applications, such as gaming and video chatting, require high bandwidth. Mobile wireless technology is used in numerous applications due to its low cost and easy deployment: sensor networks, security and safety applications, as well as military applications are generally deployed using mobile wireless networks. Among the evolving technologies, WMN is a wireless technology that provides high bandwidth and caters to mobile users.

WMN has the capabilities of self-configuring, self-healing and self-organizing. WMN is a suitable candidate for network provisioning in areas where connectivity through wired media is comparatively difficult or a lengthy process. As a result, there are many scenarios that are efficiently served by WMN and cannot be fully supported by other wireless technologies. IEEE 802.11s is the first MAC standard proposed for WMN. It enhances the IEEE 802.11 MAC for mesh networking and uses the PHY layer of one of the IEEE 802.11 variants a, b, g, or n, with enhancements for QoS,


path selection, security, mesh configuration, routing and management. The standard includes a routing protocol at the MAC layer: HWMP, the mandatory protocol proposed by IEEE 802.11s, which offers the advantages of both reactive and proactive approaches. In wireless scenarios, packet loss is generally caused by wireless communication issues such as congestion or contention. In IEEE 802.11 based wireless networks, only one node can transmit data at a time due to the shared channel; this restriction adds significant delay as the number of hops grows. The increase in packet delivery delay and queue length leads to congestion among wireless nodes. In a WMN, under heavy traffic load, traffic aggregates at the portal nodes; consequently, in the absence of any congestion control mechanism, nodes at the outer edges of the network suffer low throughput and increased packet loss. To address this problem, researchers have proposed algorithms such as Total Congestion Control (TCC), Link Selective Congestion Control (LSCC) and Path Selective Congestion Control (PSCC), but each technique has its own pros and cons. In TCC, on receiving a CCNF, the STA stops sending data to all neighboring nodes, even though it could still send data to non-congested neighbors; this approach wastes available bandwidth and adds delay. In LSCC, on receiving a CCNF, the STA blocks sending only on the specific link but can still receive data; however, when the STA itself gets congested, it cannot broadcast a CCNF on the blocked link, so packets drop due to queue overflow. The third is PSCC, which blocks a specific path on receiving a CCNF. To announce the specific destination, this algorithm requires a modification of the standard CCNF; on receiving the modified CCNF, a node blocks sending only for that specific destination, but it continuously receives data for that specific client.
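The three congestion control reactions (total, link-selective, and path-selective blocking) can be contrasted in a small sketch. The function, field names and scheme encodings here are illustrative, not the frame formats of the actual proposals:

```python
def links_blocked(scheme, neighbors, ccnf):
    """Which outgoing transmissions a STA blocks on receiving a CCNF.

    ccnf carries the congested neighbor ('link') and, for PSCC's
    modified frame, the affected destination ('dest')."""
    if scheme == "TCC":
        # Total: stop sending to every neighbor, wasting spare bandwidth
        return set(neighbors)
    if scheme == "LSCC":
        # Link-selective: block only the congested link
        return {ccnf["link"]}
    if scheme == "PSCC":
        # Path-selective: block only traffic for the announced destination
        return {(ccnf["link"], ccnf["dest"])}
    raise ValueError(scheme)

neighbors = ["B", "C", "D"]
ccnf = {"link": "B", "dest": "Z"}
print(links_blocked("TCC", neighbors, ccnf))   # all three links blocked
print(links_blocked("LSCC", neighbors, ccnf))  # only the link to B blocked
```

TCC over-blocks, LSCC blocks a single link, and PSCC needs a modified CCNF carrying the destination; CA-HWMP instead keeps sending but moves traffic onto an alternate path.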


The scenario becomes more complicated when the CCNF frame is further broadcast to immediate neighbors in a continuous chain. To handle congestion at the MAC layer, we proposed a congestion avoidance technique named Congestion Avoidance Hybrid Wireless Mesh Protocol (CA-HWMP). In this protocol, when a node's queue level reaches a specified threshold value, it broadcasts a CCNF to its immediate neighbors. The neighboring nodes then re-route all traffic destined for the congested node onto alternate paths. For comparison, we evaluated our proposed approach against IEEE 802.11s WMN with its mandatory routing protocol, HWMP. For performance evaluation we used NS3, which is based on the object-oriented language C++ with a Python scripting interface. We evaluated our proposed protocol through PDF and average end-to-end delay, and observed the effect on different node grids by gradually varying the environment from sparse to dense. From the comparison we conclude that CA-HWMP performs better than HWMP in terms of throughput and PDF. However, CA-HWMP incurs negligibly higher delay than HWMP; this increased delay is caused by the selection of an alternate path, which may not be the optimal one. Overall, it offers almost the same delay under congestion as the default protocol.

5.2 Future Work

During the evaluation, we found some limitations of our proposed technique and of the default routing protocol. Scalability is an issue for both: although CA-HWMP performs better, both protocols perform well only with a limited number of mesh nodes. HWMP does not address congestion at all, but the proposed technique also has limitations, including the wait imposed on a blocked node in case there is


no alternate path for re-routing. In such cases, a blocked node waits for the CCNF expiry time before resuming transmission on the already established path that was blocked due to congestion. As an extension of this work, it is recommended to address the congestion problem in the absence of an alternate path; a hybrid technique could aim to solve this. A comparison of our proposed routing protocol with existing congestion control protocols can also be considered as future work.

