BICSI News | advancing information transport systems


Volume 28, Number 6 | November/December 2007
Cover Story

Navigating the Migration to Ultra Broadband and 802.11n

The data-carrying capabilities of each element in the network define the user experience. By Joe Bardwell


You are about to come to another turn in the road of integrated data communication with the expansion of two emerging technologies: ultra broadband in the wired world and IEEE 802.11n in the wireless world. What are these technologies and how will they impact you over the next few years? Having a solid perspective on these new wrinkles in technology will better equip you to evaluate your options when it comes to data infrastructure design, integration and deployment.

What you do not want to happen is an extension of the following unfortunate scenario. Users of a network system complain that when they watch online video or engage in video conferencing the network gets really slow. You decide that the T1 line serving the site does not have sufficient capacity and a newly available 100 Mb/s ultra broadband optical fiber line should be ordered. To accommodate this, you upgrade the infrastructure from the wiring closets to the main wiring room with category 6 cable to support Gigabit Ethernet. You recommend that the switches be migrated to support gigabit connectivity and that the Wi-Fi network be migrated to support 100 Mb/s 802.11n. Two months later, the upgrades are complete but there is no change in user performance. The problem is ultimately traced back to the way a Windows XP workstation handles transmission control protocol (TCP) traffic across the Internet (which, as you will see, can limit a San Francisco to New York connection to less than 6 Mb/s).

Yes, new high-speed options are coming that will motivate network managers to upgrade their infrastructure, equipment and applications. Your challenge will be to effectively evaluate the options to make the best recommendations for the design of the network infrastructure as these new technologies are deployed.

General Model of Infrastructure Design

To develop this perspective, let's first establish a general model of a contemporary infrastructure design.

Internet Connection: The Internet is connected to a customer's premises through the services of an Internet service provider (ISP).

DMZ: After the Internet connection enters the customer's premises, there can be devices that are directly accessible without restriction. These may include a company's Web server, for example.

Firewall: A firewall device is connected to the DMZ and is the point of contact between the DMZ and the customer's private network.

Core Router: A router is connected to the firewall and creates multiple subnets into the customer's site.

Infrastructure Cabling: All of the devices are interconnected through the building infrastructure cabling system. This includes pulled cable (both copper and optical fiber) as well as the blocks and patch panel systems used at termination points.

Ethernet Switch: Ethernet switches are connected to the core router and create a hierarchical distribution system of cable drops.

Wi-Fi Access Points: Some (or all) of the cable drops provide connectivity to Wi-Fi access points (APs) that effectively extend the Ethernet connection by converting from copper media to a wireless medium (RF radio signal transmission).

User Devices: Wired devices present themselves with one or more Internet protocol (IP) addresses that act as communications origin and end points.

Wireless Users: Wireless devices present themselves with one or more IP addresses in exactly the same way that wired users' devices do. To the core network, there is no difference between a wireless and a wired device.


Figure 1: General Model of Contemporary Infrastructure Design

Wireless Controller: The APs are managed by a central wireless LAN controller. Figure 1 shows the controller connected to the core router, which implies that the vendor's equipment can manage APs through a router. If a vendor is limited to Layer 2 management directly through a switch, then the controller would be attached to the switch to which the APs are attached.

This is a very general model. An Internet connection can go through a DSL or cable modem and be connected directly to a router that incorporates built-in firewall policies. The DMZ effectively goes away and the "router" is both router and firewall. Perhaps the firewall is nothing more than a simple access control list, or perhaps there is no firewall functionality. Nonetheless, if the pieces of the infrastructure are present, they will be related to each other in essentially the way they are being described. We will use this model for purposes of discussion and leave it to the reader to extrapolate and interpolate the facts relative to any particular real-world network.

When you evaluate a network system, you can break it down into these constituent parts to make sense out of what is often a very complex infrastructure. Each part of the model provides an entire market for manufacturers and resellers and embraces new and evolving technology. The challenge is to make sure that expectations in one part of the communications infrastructure (mobile television, for example) are supported by capabilities in other parts of the infrastructure (gigabit switches to aggregate APs and sufficient capacity across the Internet connection, for example).

No Element of the Infrastructure Stands Alone

No Element of the Infrastructure Can Provide Capabilities that Exceed the Least Capable Element of the Infrastructure

To design, troubleshoot, optimize or enhance a communications system, you must find the least capable elements and be sure that your proposed change will be properly supported. If you are proposing a change to the least capable element itself, make sure that new capabilities are not bounded and limited by some other element of the infrastructure. Let's examine each of the elements of the infrastructure.
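To make this principle concrete, here is a minimal Python sketch that walks a chain of infrastructure elements and reports the least capable one. The element names and the Mb/s figures are illustrative assumptions, not measurements from any particular network.

```python
# Illustrative only: end-to-end throughput along a path through the
# infrastructure is bounded by its least capable element.
# The element names and Mb/s figures below are hypothetical examples.

path_capacity_mbps = {
    "broadband connection (T1)": 1.5,
    "firewall": 100,
    "core router": 1000,
    "infrastructure cabling (category 5e)": 1000,
    "Ethernet switch port": 100,
    "802.11g AP (effective bit rate)": 22,
    "user device offered load": 40,
}

bottleneck = min(path_capacity_mbps, key=path_capacity_mbps.get)
print(f"Least capable element: {bottleneck} "
      f"({path_capacity_mbps[bottleneck]} Mb/s)")
print(f"Best-case end-to-end throughput: {min(path_capacity_mbps.values())} Mb/s")
```

In this hypothetical chain, upgrading the Wi-Fi or the switches changes nothing for Internet-bound traffic until the 1.5 Mb/s T1 is replaced.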

Broadband Connection from the Internet

The wired connection between an ISP and the customer premises equipment (CPE) device is the broadband connection. This connection is the ultimate bottleneck for any user device that is trying to communicate through the core router to the Internet, and it is the ultimate bottleneck for aggregate link capacity when multiple users attempt to simultaneously communicate to the Internet.

The "broad" in broadband arose because early CPE connectivity had very limited bandwidth capacity. A 56 Kb/s data line was considered high speed in 1985. With the advent of digital subscriber line (DSL) and data transfer over cable television networks, and with the ability of even a simple dial-up modem to provide a 56 Kb/s connection, the term broadband was applied to the higher bit-rate connection capabilities. Today, a high-speed DSL or cable broadband service may provide close to 1 Mb/s of upload speed and perhaps 3 or 4 Mb/s of download speed. A business may purchase 45 Mb/s T3 service for truly high-speed broadband access. In the next few years, we are going to see today's broadband, even the fastest broadband connections, being viewed as the old, slow services of an earlier time. This is where ultra broadband enters the picture.

Defining the Ultra Broadband Environment

Ultra broadband is the term used to describe wired connections from service providers to end users in businesses and homes with a capacity of 100 Mb/s and greater. While discussions regarding this type of high-capacity service have been going on for a number of years, we are just now beginning to see real, practical deployment of ultra broadband service from the major carriers. First, let's define some terminology that you will hear in relation to the ultra broadband movement:

Passive Optical Network (PON): At the provider's central office, an optical line terminal (OLT) device forms the core of a point-to-multipoint fiber network feeding multiple optical network terminal (ONT) devices that may be in street-side pods or installed as customer premises equipment for large buildings. There are several acronyms in this family, each referring to the end point for the fiber connection:

– Fiber to the premises (FTTP): A general term referring to the fiber connection between a PON street-side pod and any customer premises.
– Fiber to the home (FTTH): In this case, the fiber run is from the street-side pod to a private residence.
– Fiber to the node (FTTN): The node refers to street-side pods or other intermediate points of distribution in a PON.
– Fiber to the building (FTTB): Another general term used synonymously with FTTP.



Cable Modem Termination System (CMTS): Cable television utilizes coaxial cable to carry television channels, each in a different frequency range. Unused frequency ranges, and sometimes the separation space between used frequency ranges, can be utilized to carry a data signal. Two ranges are required, one for signal transmission and the other for reception. A CMTS device is the headend (main) transmitter/receiver for a cable data system, and a cable modem is the device at the user's end. A very expensive, very high-end CMTS unit is the headend for a community's local cable company. An enterprise-class CMTS can be installed in a multi-tenant building to provide Ethernet connectivity to the residents or to create a backhaul system for Wi-Fi APs. A low-end CMTS, in the sub-$2,000 range, can push Ethernet into an existing cable TV system for a 50-node network.

Data-Over-Cable Service Interface Specification (DOCSIS): An international standard for cable modems that provides device addressing, control and management for data transfer over cable television networks (both public and private).

Ultra Mobile Broadband (UMB): 100 Mb/s data service to mobile devices utilizing technologies such as 802.16m WiMAX, cellular GSM/EDGE, high-speed packet access (HSPA) and evolution-data optimized (EV-DO). This technology may see initial rollout in the 2010 to 2012 time frame. UMB technologies often are referred to as fourth generation (4G) wireless.

Street-Side of the Network Infrastructure

Some carriers are actively upgrading their system infrastructures as they deploy higher speed offerings to their customers. In a particular community, one may find that fiber, copper and coaxial cable under the street, to the curb and to the premises coexist in a vendor's infrastructure. Figure 2 is a general model of the street-side infrastructure showing how these various methods of interconnectivity may be organized. There is tremendous competition between vendors to provide ultra broadband services to the premises. Not all services are available in all areas or from all vendors, so you will need to do some homework before you upgrade to a faster Internet connection.

Figure 2: General Model of the Street-Side Infrastructure

Firewall, Servers and Core Router in the DMZ

In the military, a demilitarized zone (DMZ) is a boundary that creates a physical separation where the isolation of two areas must be guaranteed. Passing through the DMZ requires the authentication and validation of appropriate credentials, and inappropriate packages or payloads are denied access. The term "DMZ" is applied to the network environment to define a perimeter network that sits between the external network (Internet) and the private, protected enterprise network. Publicly available services (like Web servers) may be placed in the DMZ. Protection between the DMZ and the private network is accomplished through a firewall device. The firewall inspects traffic flowing through it to confirm that it meets the requirements of rules and policies that grant or deny admittance.

When a wireless network is implemented, it is common to expect that both the trusted user community and the visitor or guest community (both connected through the same IEEE 802.11 APs) will need access through the DMZ to the Internet. This poses a configuration challenge. Because the APs are physically on the trusted side of the DMZ, there must be a mechanism by which untrusted visitors and guests, associated through these APs, are denied access to the trusted network. The challenges are increased when considering that visitors may consume bandwidth resources across a high-speed connection. How will the visitor network be bandwidth-limited and the trusted network be given greater capacity when all the high-speed wireless APs are physically connected to the same Ethernet switched infrastructure?

This is where a wireless LAN controller comes into play. All the major enterprise-class wireless equipment vendors provide capabilities that address the requirements of a nonhomogeneous user community (trusted users, guests, administrators and others). Different vendors have different levels of sophistication and different areas of focus with regard to control and security. As ultra broadband brings greater capacity to the edge of the DMZ, it will become increasingly important to carefully assess the configuration and integration between firewall, core router and wireless LAN switch controllers to make sure the needs of each user community are being managed and met.

Infrastructure Cabling

Professional cable installers should know how to crimp 568-A and 568-B terminations. They should know not to drag category 5e or category 6 cable over a sharp edge of a steel I-beam, pull a kink tight or exceed 11.34 kg (25 lb) of pulling force. They should know not to leave untwisted conductors exposed between the vinyl jacket of a cable and the RJ-45 crimp connector (the result of cutting the conductors too long and failing to push the jacket up inside the connector during the crimp). Professional installers should know these things, but some still violate all of the aforementioned rules.

As speeds increase at the broadband edge, they are going to motivate (and necessitate) speed increases in the infrastructure. The category 5 cable that supported 100 Mb/s operation is going to give way to category 5e (supporting Gigabit Ethernet) and category 6 (supporting 10 Gigabit Ethernet at less than 100 m [328 ft]). The criticality of the infrastructure cabling system increases as wired data transfer rates increase. One bad fiber splice on a backbone cable can become the bottleneck in what would otherwise be a properly operating network. The same holds true for connectors and patch panels.

The bottom line is to be sure that the infrastructure is certified to meet the applicable cabling standard and that new cable installations (both copper and fiber) meet the anticipated evolution of the network. Remember that switches, routers and Wi-Fi APs usually become obsolete sooner than the cabling infrastructure. Make sure it is going to stand up to the challenge.

Layer 2 Switched Ethernet Infrastructure

Now we are on the physically trusted side of the DMZ. There are 802.11 APs and end user devices connected to this wired infrastructure through a hierarchy of Layer 2 Ethernet switches. Without any augmentation to Layer 2 switch core technology, the entire wired infrastructure becomes a single broadcast domain. A broadcast domain is closely analogous to an IP subnet, and often a single broadcast domain is defined with a single subnet number. The problem is that anyone who is a member of a particular subnet can directly contact anyone else who is also a member of the same subnet. Moreover, all stations that are connected to the same broadcast domain can be configured in such a way that they appear on any other station's subnet.

To address this exposure, the implementation of virtual LANs (VLANs) comes into play. A VLAN is a logical segregation of traffic based on the configuration of the Ethernet switches that comprise the network. Ports on the switch are configured as belonging to VLAN 1, VLAN 2 and so forth. The network is now logically divided into separate broadcast domains, and the switch hardware segregates traffic between users.

The 802.11 APs must be able to present themselves to the wireless community with more than one identity. These multiple network names are called service set identifiers (SSIDs). An AP must be able to broadcast multiple SSIDs (one for guests, one or more for trusted users, one for network administrators, one for voice over IP [VoIP] and so forth). Each SSID is then mapped (in the AP) to a different VLAN, and the data packets coming out of the AP onto the Ethernet network are segregated into their respective VLANs. The Layer 2 switches then physically control the VLANs so that the guest VLAN is dumped into the DMZ through the firewall. Guest VLAN members have no way to access the trusted network on the trusted side of the DMZ (even though the APs to which they are associated exist as part of the infrastructure that is physically on the trusted side of the DMZ).

When you think in terms of capacity, remember that configuration options in Layer 2 Ethernet switches relative to VLAN management may or may not allow bandwidth control by VLAN. The APs themselves may or may not have bandwidth limitation through a wireless LAN switch controller. Ultimately, the highest capacity, most secure network systems must have three things:

– VLAN support in the Layer 2 Ethernet switched infrastructure.
– Multiple SSID support with VLAN mapping in the 802.11 APs.
– Bandwidth-limiting capabilities in the wireless LAN switch controller.
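The segregation just described is configured in the APs, switches and wireless LAN controller rather than written as application code, but a small Python sketch can make the logic concrete. The SSID names, VLAN numbers, zones and bandwidth caps below are assumptions chosen for illustration only.

```python
# Conceptual sketch of SSID-to-VLAN mapping and per-VLAN forwarding policy.
# SSIDs, VLAN IDs, zones and bandwidth caps are hypothetical examples.

ssid_to_vlan = {
    "CorpNet": 10,     # trusted users
    "VoiceNet": 20,    # VoIP handsets
    "GuestNet": 30,    # visitors and guests
}

# Which zones each VLAN may reach, plus an optional per-VLAN bandwidth cap
# of the kind a wireless LAN controller might enforce.
vlan_policy = {
    10: {"zones": {"trusted", "dmz", "internet"}, "cap_mbps": None},
    20: {"zones": {"trusted"}, "cap_mbps": 2},
    30: {"zones": {"dmz", "internet"}, "cap_mbps": 5},
}

def may_forward(ssid: str, destination_zone: str) -> bool:
    """Return True if traffic arriving on this SSID may reach the zone."""
    vlan = ssid_to_vlan[ssid]
    return destination_zone in vlan_policy[vlan]["zones"]

# A guest associated to GuestNet reaches the Internet through the DMZ...
assert may_forward("GuestNet", "internet")
# ...but is never forwarded onto the trusted network, even though the AP
# itself sits physically on the trusted side of the DMZ.
assert not may_forward("GuestNet", "trusted")
```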

Wi-Fi APs and Wireless LAN Controllers

We have already discussed the issues related to multiple SSID and VLAN support. Let's look at what is happening with the new, high-speed wireless technologies. There is a lot of confusion when talking about the speed of a wireless connection. Unless you are consistently applying a definition of speed, there is no way to compare and contrast capacity and performance options.



In Through the Looking Glass, Lewis Carroll writes: 'When I use a word,' Humpty Dumpty said, in a rather scornful tone, 'it means just what I choose it to mean, neither more nor less.' 'The question is,' said Alice, 'whether you can make words mean so many different things.' 'The question is,' said Humpty Dumpty, 'which is to be master – that's all.'

When someone talks about 802.11b offering 1, 2, 5.5 or 11 Mb/s, they are talking about modulation rate, not data rate. The same is true when someone says that 802.11g offers speeds up to 54 Mb/s. Again, that is a 54 Mb/s modulation rate. Modulation rate refers to how fast certain aspects of the electromagnetic signal are jiggled around to represent ones and zeros. This does not refer to the rate at which the ones and zeros themselves are being transmitted. The actual bit transfer rate for an 802.11b/g transmission is roughly half the modulation rate. When all the wiggling and jiggling of the electromagnetic signal is done, and all the management and control overhead is taken into consideration, an 11 Mb/s 802.11b connection provides roughly 5 to 6 Mb/s of actual bit rate. A 54 Mb/s 802.11g connection provides roughly 20 to 25 Mb/s of bit rate. When speed is defined for 802.11n, the specified speed (100 Mb/s) refers not to the modulation but to the actual bit rate. (Confusing, isn't it?) There is actually a reasonable explanation for the change in terminology related to the way that 802.11n can transmit bit streams, but that is outside the focus of this discussion.

We have been referring to bit rate (as opposed to data rate) for a good reason. Just because you have a 20 Mb/s stream of ones and zeros does not mean that you can transmit a 20 Mb/s stream of MPEG video or carry 20 Mb/s of VoIP phone calls. The reason is that the actual data (video, voice, e-mail or data file) must be carried as payload information inside multiple packets on the network. These packets have a variety of formats that vary based on their purpose. The most common are the TCP packet and the user datagram protocol (UDP) packet.

TCP packets have physical address information (media access control [MAC] header), logical address information (IP header), application program and sequence number information (TCP header) and the data payload. TCP packet transmission guarantees delivery of data through an acknowledgment and retransmission mechanism that requires an acknowledgment (ACK) packet to confirm receipt of the data packets. This is how files are transferred across a network, since all the information in a file must be preserved without corruption or loss. When you include MAC+IP+TCP, you introduce roughly a 7 percent overhead on top of the payload.

UDP packets have the same MAC header and IP header, but the UDP header simply contains application program information with no sequence numbering information. They are slightly smaller than TCP packets and do not guarantee data delivery. This is a typical type of data packet that might be used for streaming video (where the occasional loss of a packet will not cause the received data stream to be unusable). When you include MAC+IP+UDP, you introduce roughly a 4 percent overhead on top of the payload.

So, a 54 Mb/s 802.11g connection provides roughly a 20 Mb/s bit rate, which translates into roughly 18 Mb/s for a TCP connection and 19 Mb/s for a UDP connection. If you were considering implementing wireless IP video cameras for video surveillance and you wanted a 1024 x 768 color image (with 8-bit color depth) at 15 frames per second using a typical 5:1 compression ratio, then you would have a data payload as follows: (1024 x 768 x 8 x 15) / 5 = 18.87 Mb/s. Using a UDP transport protocol with a 4 percent overhead means you are going to need roughly 19.6 Mb/s of packet throughput, which essentially consumes the usable capacity of a 54 Mb/s 802.11g connection. If you were using 802.11n, you might expect to push five such connections simultaneously over the air (because you actually get the 100 Mb/s channel capacity).
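The surveillance camera arithmetic above can be checked with a few lines of Python. This simply restates the article's own figures (1024 x 768 pixels, 8-bit color, 15 frames per second, 5:1 compression, roughly 4 percent UDP overhead); the function name is an arbitrary choice for the example.

```python
# Reproduce the video surveillance bandwidth estimate from the text.

def required_throughput_mbps(width, height, bits_per_pixel,
                             frames_per_sec, compression_ratio,
                             protocol_overhead):
    """Video payload bit rate plus packet header overhead, in Mb/s."""
    payload_bps = (width * height * bits_per_pixel * frames_per_sec) / compression_ratio
    return payload_bps * (1 + protocol_overhead) / 1_000_000

payload_mbps = (1024 * 768 * 8 * 15) / 5 / 1_000_000     # ~18.87 Mb/s of video
on_air_mbps = required_throughput_mbps(1024, 768, 8, 15, 5, 0.04)

print(f"Video payload:          {payload_mbps:.2f} Mb/s")
print(f"With ~4% UDP overhead:  {on_air_mbps:.2f} Mb/s")
# One such camera consumes essentially all of the ~20 Mb/s usable bit rate
# of a 54 Mb/s 802.11g link; a 100 Mb/s 802.11n link could carry about five.
```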

User Device Issues

The fastest network infrastructure in the world cannot increase the maximum data transmission rate of an individual user's device. Your notebook computer is going to spin its disk drive at the same speed. The Ethernet or wireless hardware and protocol drivers are going to construct and transmit packets at the same rate without regard for the speed of the network. Of course, a slow network could limit the user's device, but once the maximum user device transmission speed is reached, nothing can be done with the network itself to improve it. Increasing the bandwidth capacity of a network beyond that of an individual user's transmission capability allows more users to simultaneously transmit but does not allow any single user to transmit faster.

The term "offered load" is used in this context. The offered load is the bit rate that a user device would transmit if it could transmit. How much data would a transmitter transmit if a transmitter could transmit data? Suppose user devices were able to pull data out of their internal memory and construct data packets at a rate of 40 Mb/s. Unfortunately, 20 such devices are connected through 100 Mb/s wired Ethernet and a 100 Mb/s 802.11n network to an Ethernet switch (in the same VLAN). The network has an aggregate capacity of 100 Mb/s, so each of the 20 users experiences 5 Mb/s of usable transmission bandwidth. The aggregate offered load is 800 Mb/s (20 users times each user's 40 Mb/s offered load), and network utilization approaches 100 percent.

You now add a second Ethernet switch and connect both switches back to the core router through a gigabit link. There are now 10 users on each switch, but they still have not reached a point where the capacity of the network exceeds their offered load.

They each now are able to push 10 Mb/s, but they continue to have the data backlog and hardware/software capability of pushing 40 Mb/s. This is where bandwidth limiting comes into play. If you are going to expand the number of users in a network (perhaps by adding a wireless network overlay on the existing Ethernet network) and you are thinking about capacity, think in terms of offered load. Going from a 100 Mb/s wired connection to a gigabit connection does not necessarily give you 10 times more usable bandwidth. It may simply open the floodgates to allow a greater percentage of existing offered loads to pass through without giving you any additional capacity to add new devices.

There is another aspect to user device data transfer that limits the offered load. Remember that connection-oriented TCP conversations require ACK packets before they can continue to transmit a stream of data. The maximum amount of data that can be outstanding without having yet received an acknowledgment is called the Receive Window Size. In Windows XP (as an example), the Receive Window Size is set at 64 kB. This means a transmitter is allowed to dump up to 64 kB of payload data to the receiver, but then it must wait for an ACK before proceeding. As a result, the round-trip packet delay associated with the transmission and reception of an ACK packet is introduced at least once every 64 kB of payload data. Going from San Francisco to New York, this could be a 100 ms delay (or more). The result is that a Windows XP system has an intrinsic limit of roughly 5 to 6 Mb/s of throughput for the stipulated cross-country link. There is little to be done about it because, in this example, Windows XP does not have an easily reconfigured Receive Window Size parameter.

There is, of course, an ultimate capacity or limitation of the end device hardware itself. An older desktop PC or a handheld device (perhaps an inventory scanner) may not be able to push 10 Mb/s onto the network. Moreover, an 802.11b-only wireless device (which includes many portable handheld scanners and sensor devices) is absolutely not going to be able to average much better than 2 Mb/s to 5 Mb/s even on a good day. So, there is a low-end limitation that should be ascertained. On the high end, it makes no sense to outfit a desktop PC with a Gigabit Ethernet interface if the backhaul between switches and routers remains at 100 Mb/s. Gigabit and 10 Gigabit Ethernet buildout has to happen from the core router outward. It should be obvious that this type of capacity is going to benefit users inside the DMZ and not those reaching out to the Internet.
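Two of the limits just described can be put into numbers. The short sketch below uses the article's example figures (a 64 kB receive window, a roughly 100 ms cross-country round trip, and 20 devices each offering 40 Mb/s on a 100 Mb/s network); it is illustrative arithmetic, not a measurement tool.

```python
# 1) TCP receive-window ceiling for a single long-haul connection:
#    throughput is at most window size divided by round-trip time.
window_bytes = 64 * 1024      # Windows XP default receive window (~64 kB)
round_trip_s = 0.100          # San Francisco to New York, roughly 100 ms

ceiling_mbps = (window_bytes * 8) / round_trip_s / 1_000_000
print(f"Per-connection ceiling: {ceiling_mbps:.1f} Mb/s")    # ~5.2 Mb/s

# 2) Offered load versus network capacity (the 20-device example above).
devices = 20
offered_per_device_mbps = 40
capacity_mbps = 100

aggregate_offered_mbps = devices * offered_per_device_mbps   # 800 Mb/s
share_per_device_mbps = capacity_mbps / devices              # 5 Mb/s
print(f"Aggregate offered load:  {aggregate_offered_mbps} Mb/s")
print(f"Usable share per device: {share_per_device_mbps} Mb/s")
# Splitting the users across two 100 Mb/s switches doubles each share to
# about 10 Mb/s -- still far below the 40 Mb/s each device would offer.
```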

Evolving Ultra Broadband Marketplace

The screeching sound of a dial-up modem making a connection was the trumpet call that ushered in the original Internet age in the 1980s. Today it is more like the sound of taps being played at the funeral of a fallen hero. Consumers, both residential and commercial, have effectively moved completely into the megabit world of DSL, cable modems and bonded T1 connectivity. We are now on the doorstep of the next era in infrastructure, where interactive digital television, video surveillance and conferencing, "follow-me" wireless VoIP and other applications are raising the bar of user expectations. We are moving into the world of tens and hundreds of megabits of ultra broadband connectivity. Today's "really cool" 45 Mb/s DS3 connection to the corporate enterprise is going to look more and more like the status quo, while the new "really cool" pipes are going to be carrying gigabit and 10 gigabit service into the building. It's only a matter of time. Keep your eyes on the horizon so you are not taken off guard by the new technology.

It is unclear how many homes and businesses are going to have ultra broadband support over the next 12 months, but it is clear that the carriers are all moving to deploy the capacity. It is unclear when 802.11n and 802.16 WiMAX (and very high-speed cellular technologies) are going to become predominant standards, but it is clear that the manufacturers are all moving to bring new, very high-speed wireless gear to the market.

Conclusions

To make sense of a network's integrated wired and wireless capacity, it is necessary to break the network down into functional groups and evaluate the data-carrying capabilities of each group. The critical capabilities will be defined by the type of application that is being run across the network and the performance, security and management requirements for the user community. Increasing the capacity of one part of a network infrastructure does not automatically guarantee that end user devices will experience improved performance; the infrastructure is always limited by its least capable element. Probe the technical support representatives of your equipment manufacturers, distributors and integrators to make sure that they are not just selling you specifications in a vacuum. Expanding system capacity by integrating new wired and wireless technologies is a finely choreographed dance of multiple vendors' technologies and solutions, and it demands careful attention to assure engineering success.

Joe Bardwell

Joe Bardwell is chief scientist with Connect802 Corporation, a systems integrator and wireless network design consulting firm based in California. Joe can be reached at +1 925.552.0802 or at [email protected].