A Scalable Network Architecture for Distributed Virtual Environments with Dynamic QoS over IPv6

Mejdi Eraslan, Nicolas D. Georganas, José R. Gallardo, Dimitrios Makrakis *
School of Information Technology and Engineering, University of Ottawa
161 Louis Pasteur, Ottawa, Ontario, Canada K1N 6N5
{eraslan, georgana, dimitris, gallardo}@site.uottawa.ca

Abstract

Virtual environments (VEs) are interactive computer simulations that immerse users in an alternate reality. Distributed VE (DVE) applications simulate the experience of real-time interaction among multiple users in a shared three-dimensional (3-D) virtual world. In this paper, we identify the network service requirements for highly interactive DVEs and show the limitations of IPv4 in satisfying them. We propose a transport system for DVE applications, which incorporates a scalable network architecture and dynamic QoS adaptation over IPv6. The network communication architecture combines the advantages of anycasting, unicasting and multicasting, while the dynamic QoS adaptation assists DVE applications in adapting to fluctuations in the network. The network-specific issues are hidden from the application by means of a networking module and a well-defined Application Program Interface (API). The implementation of a DVE application, called Virtual Environment Supporting Multi-user Interaction over IPv6 (VESIR-6), using this architecture, is explained.

1 Introduction

DVE is an evolving field of applications that are distributed in nature, where different parties interact within the virtual world concurrently. DVE applications have evolved from distributed interactive simulation experiments for military purposes. Improved computer graphics and the platform independence of the Virtual Reality Modeling Language (VRML) [1] and Java [2] standards opened the doors for a new kind of multi-user application, including online training, teleconferencing, telemedicine, and electronic commerce in a 3-D virtual environment. The Internet could potentially be deployed as the communication and message distribution mechanism for these applications. Internet-based VEs can bring a large number of participants together in a simulated 3-D space and let users explore virtual worlds and interact with each other. Standards for texture and 3-D information exchange between the viewer and application have already been published, and software tools to browse this information on different platforms are publicly available. VRML and Java allow rendering of 3-D objects within web browsers.

A large-scale DVE design proves challenging and requires innovations at various layers of the system architecture. The most significant among these are changes required at the network layer, which defines the services exported by a network. The network communication architecture of a DVE is important in the sense that it is concerned with the ability of the system to share objects and pass messages between those objects. If not carefully designed, the network could easily become a bottleneck and corrupt the synchronization among users. To achieve large-scale DVE applications, the available communication bandwidth should be used as efficiently as possible. The design of the network communication infrastructure is important for minimizing latency, optimizing network resources, and supporting large-scale virtual environments.

Participants in a DVE rely on the network to exchange information. For example, as a user moves within the VE, s/he must transmit updates over the network so that other users will visualize him/her in the correct location. Similarly, if a user picks up an object, other users need to be notified that the object is now being carried. The network is used to transfer this information and synchronize the shared state of the environment. Several techniques have been developed to reduce the number of packets sent over the network; the most important are compression, dead reckoning, dividing the world into smaller regions, and packet filtering based on level-of-detail. However, not much research has been done on managing the information flow over the network and utilizing it efficiently. The network capacity is a limited resource; therefore it must be carefully allocated to the various types of flows that DVE users exchange.

In this paper, we propose a network communication architecture suitable for large-scale DVE applications. Our goals are: i) to keep the delays that packets experience in the network within the limits of human

* This work was supported in part by a CANARIE Inc. research contract.

perception, ii) to reduce the bandwidth requirements of DVE users, and iii) to integrate anycast and multicast communication with QoS support over IPv6 to provide a reliable means of collaboration among the users within the DVE. We also aim at minimizing the complexity of networking issues by providing the application developer with an abstraction of the network interface in terms of a well-defined Application Program Interface (API). Finally, we show an example DVE application, called Virtual Environment Supporting Multi-user Interaction over IPv6 (VESIR-6), that uses the architecture we propose.

2 The motivation for using IPv6

There are several design issues to be considered in developing a large-scale distributed VE over the Internet. The most important is to optimize the use of scarce network resources. Although local area networks (LANs) are fast enough to support distributed VEs on a large scale, large-scale multi-user VE applications over the Internet need far more bandwidth than a 56 kbps dial-up modem connection provides. The current Internet protocol (IPv4) does not provide guaranteed performance levels or QoS support, and native multicasting, which is needed to distribute update messages efficiently to the participants within a multi-user VE, is optional [3]. Moving the Internet forward to a point at which it can truly support multi-user VE applications will require the deployment of a network protocol that provides multicasting, security in terms of authentication and privacy, and QoS services. IPv6 was introduced to achieve these goals. The IPv6 protocol has already been implemented on a great number of platforms, and most router vendors have dual-stack code that can understand both IPv4 and IPv6 packets. 6BONE, the testbed created to assist in the evolution and deployment of IPv6, has so far connected more than 400 research institutes and companies in 41 countries over native IPv6 [4]. We will first investigate the network service requirements for multi-user VEs and then see how the next-generation Internet protocol, IPv6, addresses these requirements.

2.1 Network service requirements

DVE systems are interactive applications, meaning that they must process real-time data input from users. The system, in order to be effective, must present each user with the illusion that the entire environment is located on the local machine and that his/her actions have a direct and immediate impact on the environment. We can summarize the requirements of DVEs from the network layer as follows:

• Low latency: Rapid network response times are necessary in DVEs in order to provide an acceptable degree of the illusion of reality to the user. The latency between the output of a packet at the application level and the input of that packet at the application level of another user should be low enough to satisfy human perception. Hence, distributed multi-user VEs often require a latency of no more than 100 msec [5].



• Reliability: Reliability means that the system can logically assume that data sent are always received correctly, thus obviating the need to periodically re-send the information. Packet loss is a crucial issue in some cases, such as a bank credit transfer at a virtual mall in the VE, and should be eliminated to provide a reliable means of communication. The reliability of the best-effort datagram delivery service supporting the distributed VE should be such that 98% of all datagrams are delivered to all intended destination sites, with missing datagrams randomly distributed [6].

• Multicast: DVEs involve many-to-many interactions and go beyond the client-server paradigm. Distributed VEs require a capability for multicasting and managing multicast group membership at the network level. This multicasting capability must be scalable to hundreds of sites and, potentially, to tens of thousands of users. Multicast group members must be added or removed dynamically, in less than a second, at rates of hundreds of membership changes per second.

• Service Differentiation: The network protocol must be able to associate packets with particular service classes and provide them with the services specified by those classes. It is desirable in distributed multi-user VE applications to treat different packets differently in the network. For example, state updates related to new users joining the virtual world are more important and urgent than event updates and can be assigned to a higher-priority traffic class. Different levels of service should be provided to individual traffic flows and to some objects if necessary.

• Secure Networking: Users in the distributed VE must be able to assume that their messages are safe from tampering, diversion, and exposure. Per-packet encryption and authentication mechanisms are needed for secure electronic commerce transactions or classified military simulations.
Multi-user VE applications are characterized by the need to distribute a real-time application over a shared wide-area network in a scalable manner, such that users are able to exchange state data and event updates with sufficient reliability and timeliness to sustain a 3-D virtual world containing a large number of mobile (moving) objects. They require bulk transfer of virtual world data, low-latency highly-reliable delivery of control commands and state updates, and low-latency semi-reliable delivery of event updates. These requirements drive the need for real-time multicast with established quality of service in a shared network.

2.2 Exploring IPv6 for DVEs

The current protocol used in the Internet, IPv4, falls far short of satisfying the network service requirements of distributed multi-user VEs. We are motivated to use the next-generation Internet protocol, IPv6, as the major drawbacks of IPv4 are addressed with its advent. Multicast communication, QoS support, authentication extensions for network security, and privacy capabilities are added to the Internet thanks to IPv6 [7]. It should be noted that IPv6 is not fully developed, and the QoS and anycasting services provided by IPv6 are still work in progress [8]. We will now give a brief review of the new functionalities in IPv6 that could be useful in designing multi-user VEs.

2.2.1 Multicast

Multicasting provides a very flexible and powerful mechanism to distribute messages to a large number of users with minimal network load and distribution effort. Any practical network needs multicasting to implement the required distribution of data to all participating users in the distributed VE. One of the limiting factors of present IPv4 multicast is its optional nature, particularly with respect to the routers. Islands of multicast-capable networks are connected over the Internet using tunneling and build the Multicast Backbone (MBONE) [9]. The use of tunnels, while enabling the initial deployment of multicast in the Internet, appears to limit its potential. IPv6 provides multicast capabilities that do not rely on a certain type of media being used and, in addition, requires that all routers implement those multicast capabilities. IPv4 provides only 2^28 different multicast addresses, and multicast packets travel over the network as long as their time-to-live (TTL) value has not reached zero. IPv6 extends IP multicasting capabilities by defining a much larger multicast address space and a scope identifier that is used to limit the degree to which multicast routing information is propagated throughout an enterprise [10]. The availability of a large addressing space reserved for multicast addresses in IPv6 will improve the support of real-time multimedia applications. DVEs would benefit from the availability of a ubiquitous IPv6 multicast service.
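The scope identifier mentioned above occupies the low-order nibble of the second byte of an IPv6 multicast address (the ff00::/8 prefix). As an illustrative sketch, not part of the paper's implementation, Python's standard `ipaddress` module can extract it:

```python
import ipaddress

# Scope values assigned to the low nibble of the second address byte
SCOPES = {
    0x1: "interface-local",
    0x2: "link-local",
    0x5: "site-local",
    0x8: "organization-local",
    0xE: "global",
}

def multicast_scope(addr: str) -> str:
    """Return the scope name encoded in an IPv6 multicast address."""
    ip = ipaddress.IPv6Address(addr)
    if not ip.is_multicast:
        raise ValueError(f"{addr} is not an IPv6 multicast address")
    # Second byte of the address: high nibble = flags, low nibble = scope
    scope_nibble = ip.packed[1] & 0x0F
    return SCOPES.get(scope_nibble, f"unassigned ({scope_nibble:#x})")

print(multicast_scope("ff02::1"))   # link-local ("all nodes" group)
print(multicast_scope("ff05::42"))  # site-local
```

A router uses this scope nibble to decide how far to propagate group membership and routing information, which is exactly the containment property that the enterprise-scoping discussion above relies on.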

2.2.2 QoS Support

The next-generation Internet protocol is designed to offer per-packet differentiated QoS without adding a new signaling protocol. The IPv6 specification states that every IPv6 header contains an 8-bit Traffic Class field and a 20-bit Flow Label. The Traffic Class field identifies and distinguishes between different classes or priorities of IPv6 packets. This field is currently used by Differentiated Services to enable scalable service discrimination in the Internet without the need for per-flow state and signaling at every hop. Differentiated Services maps the Traffic Class field to a particular forwarding treatment, or per-hop behavior (PHB), at each node along the packet's path [11]. While simple, the Traffic Class field can provide an adequate QoS indication. The aggregation of packets to form a flow reduces the processing burden on the routers and eases the implementation, as compared to flow-based reservation mechanisms, such as RSVP, on top of IPv6. The IPv6 specification introduces the concept of traffic flows. The Flow Label is used by the source to label sequences of packets for which it requests special handling by IPv6 routers, such as quality of service above best-effort or real-time service. The Flow Label is a random integer that is used as a hash key [12]. In conjunction with the source address, it forms a globally unique flow identifier that distinguishes traffic flows in case of collisions. An application that wishes special handling for some packets can mark those packets or datagrams with the same flow label and set the Traffic Class field to the required service level. When an IPv6 node receives a packet, it examines the Flow Label field in the IPv6 header and knows what the packet needs in terms of QoS. The QoS support over IPv6 provides a reliable means of collaboration among the objects within the VE.
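The layout just described, a 4-bit version, an 8-bit traffic class, and a 20-bit flow label occupying the first 32 bits of the IPv6 header, can be sketched with plain bit operations. This is an illustrative fragment, not code from the paper's system; the 0xB8 traffic class in the example corresponds to DSCP 46 (Expedited Forwarding) in the upper six bits:

```python
def pack_vtf(traffic_class: int, flow_label: int) -> bytes:
    """Pack the first 32 bits of an IPv6 header:
    version (4 bits) | traffic class (8 bits) | flow label (20 bits)."""
    assert 0 <= traffic_class < (1 << 8)
    assert 0 <= flow_label < (1 << 20)
    word = (6 << 28) | (traffic_class << 20) | flow_label
    return word.to_bytes(4, "big")

def unpack_vtf(first_word: bytes):
    """Recover (version, traffic_class, flow_label) as a router would."""
    word = int.from_bytes(first_word, "big")
    return word >> 28, (word >> 20) & 0xFF, word & 0xFFFFF

# A sender marks a flow; a router classifies it without per-flow signaling:
hdr = pack_vtf(traffic_class=0xB8, flow_label=0xABCDE)
print(unpack_vtf(hdr))  # (6, 184, 703710)
```

A DVE networking module would set a high-priority traffic class on join/state updates and a lower one on routine event updates, exactly the differentiation argued for in Section 2.1.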

2.2.3 Network-level Security

The current Internet has a number of security problems and lacks effective privacy and authentication mechanisms below the application layer. The security architecture of IPv6 provides a secure network infrastructure that protects data on a per-packet basis. While a detailed treatment is beyond the scope of this paper, these features could be exploited in designing a multi-user DVE and in constructing a secure communication infrastructure for large-scale DVEs over the Internet.

2.2.4 Dynamic Load Balancing

DVEs that deploy a central repository, for example a database that keeps a consistent view of the environment and helps latecomers join rapidly, need a means of load balancing, since the central repository usually becomes a bottleneck. The most efficient load balancing, with the minimum latency, can be achieved at the network layer. This capability is provided in IPv6 by means of "anycasting". A set of servers can share the same anycast address, and service requests addressed to this anycast address are relayed to the least-loaded server. The use of IPv6 anycast addresses for a set of servers minimizes the number of hops, as well as the latency, to locate and access the server resources. Anycasting provides a versatile and cost-effective model for enabling application robustness and dynamic load balancing. Current IPv6 implementations do not yet support anycasting, and this area remains a focus of research.

2.2.5 Fast Packet Forwarding

Several techniques have been adopted in IPv6 in order to minimize the latency packets might experience along their paths. The first innovation was the flow label field, which, as explained before, can be used as a cache key and improve the routing table lookup process significantly. The second was the header format simplification. When a packet travels across a network, each node performs a time-consuming job in processing the packet's header. Unlike IPv4, packet options in IPv6 are specified in extension headers, and not all nodes are required to process those extension headers (for example, the destination options extension header need only be processed at the destination host). Thirdly, the IPv6 protocol makes the Address Resolution Protocol (ARP) obsolete. IPv6 hosts keep neighbor information in their link-local database. Hence, whatever the underlying link protocol is (Ethernet, Token Ring, FDDI, etc.), IPv6 works independently of the medium. This eliminates the need for protocol conversion (for example, when a router passes a packet from an Ethernet interface to a Token Ring interface). In addition, IPv6-compliant software has another trick of its own to eliminate a latency-inducing transmission phenomenon: packet fragmentation. Fragmentation is a major source of high latency under IPv4. Under IPv6, software in the originating computer checks the path to determine the maximum payload size that would not need to be fragmented by any node along the path. Then, if necessary, the originating host does its own division of the data. This ensures that the packet travels along its path without any intervention by intermediate nodes, which would otherwise increase latency.
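The fragmentation-avoidance behavior described above reduces, once path MTU discovery has found the smallest MTU along the route, to a simple payload-size calculation at the sender. A minimal sketch, ignoring extension headers and assuming UDP transport as is typical for DVE update traffic:

```python
IPV6_HEADER = 40  # fixed IPv6 header size in bytes
UDP_HEADER = 8

def max_unfragmented_payload(path_mtu: int) -> int:
    """Largest UDP payload that fits in one packet on this path."""
    if path_mtu < 1280:
        raise ValueError("IPv6 requires a minimum link MTU of 1280 bytes")
    return path_mtu - IPV6_HEADER - UDP_HEADER

def packets_needed(data_len: int, path_mtu: int) -> int:
    """How many packets the source must send for data_len payload bytes."""
    chunk = max_unfragmented_payload(path_mtu)
    return -(-data_len // chunk)  # ceiling division

print(max_unfragmented_payload(1500))  # 1452 on a typical Ethernet path
print(packets_needed(10_000, 1500))    # 7
```

Because the originating host splits the data itself, no intermediate router ever has to stop and fragment the packet, which is the latency saving the paragraph above describes.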

3 Analysis of Network Architectures used in DVEs

The information flow and collaboration among users in DVEs are handled by the underlying communication infrastructure. At the initialization phase, i.e., when a user is trying to connect to the system, the virtual world together with its contents should be sent to the new user. Once the user joins the DVE, event updates are generated when a user creates or removes an object, transfers its ownership, leaves the system, or moves within the DVE. The network communication architecture defines how data will flow among the participants in the DVE. In this section, we analyze the available communication architectures that DVEs use, along with their respective advantages and disadvantages.

3.1 Client-server approach

In a client-server system, there is a central database that holds the virtual world contents and keeps track of object movements and behaviors. When a new client joins the session, the server adds it to its database. Control and update messages are sent to this server, which then forwards them to the other members. Figure 1 shows a typical scenario of this architecture.


Servers can reduce message traffic by applying filtering mechanisms and compressing multiple packets into single messages, eliminating redundant message flow. This architecture, however, becomes a bottleneck in both networking and processing as the number of members and the virtual world content increase. The inclusion of a server in each transaction means that a message may not be able to take the shortest path between hosts from source to destination. In addition, messages must spend time within the server itself, which adds to the total transmission time. The server is also a single point of failure. Using a centralized server for message passing within the virtual world is obviously limited to tens of participants because of input and output contention. If all N users in the DVE update their locations and report the changes to the server, then a total of N*(N-1) messages per update round would be sent by the server. Hence, the complexity of the system is O(N^2). Among DVE applications, the RING [13] and VNET [14] systems deploy the client-server architecture.
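The O(N^2) scaling argument can be made concrete with a toy calculation. The function below is illustrative only: it counts the messages the network carries when every user sends one position update per round, for the server-relayed case and, for contrast, the multicast case:

```python
def messages_per_round(n_users: int, architecture: str) -> int:
    """Total messages on the wire when every user sends one update."""
    if architecture == "client-server":
        # Each of the N clients sends 1 message to the server, and the
        # server forwards each update to the other N-1 clients:
        # N inbound + N*(N-1) outbound = N^2 total.
        return n_users + n_users * (n_users - 1)
    if architecture == "multicast":
        # Each user sends a single multicast packet; the network
        # replicates it, so the senders contribute only N messages.
        return n_users
    raise ValueError(f"unknown architecture: {architecture}")

for n in (10, 100, 1000):
    print(n, messages_per_round(n, "client-server"),
          messages_per_round(n, "multicast"))
```

At 1000 users the server-relayed design moves a million messages per update round while multicast moves a thousand, which is why the server saturates at tens of participants.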

3.2 Peer-to-Peer Communication

Peer-to-peer communication architectures use IP multicast to improve the efficiency of the system by avoiding unnecessary duplication of packets. With this method, packets are multicast to all members at once, and the latency is reduced by almost half compared with the client-server approach, where messages must first travel to the server, which redistributes them to the other members. In peer-to-peer communication, users first join a multicast group to become a member of the system, and then all data are sent to this multicast address. This architecture is used by DIS [15] and by NPSNET [16], which uses DIS as its communication protocol. Figure 2 shows how easily collaboration can be established using multicast communication.

[Figure 1 diagram: a new client (1) JOINs the server, which (2) sends the user database to the new client, (3) updates its user database, and (4) informs the other clients; (5) subsequent event interactions flow via the server.]

Figure 1: Networking VEs using a unicast client-server architecture.

[Figure 2 diagram: a new member (1) JOINs the multicast group and (2) discovers objects via event interaction with the other members.]

Figure 2: Collaboration using multicast communication.

This architecture scales to a large number of users, since the complexity of the system is O(N), meaning that it utilizes the available bandwidth efficiently. However, the issue of "latecomers" should be considered when using this architecture: since there is no central repository, a new user might not be able to learn of the participants who have already joined the DVE. Having all users generate periodic "keep-alive" messages, as done in DIS, solves this issue.

3.3 Peer-to-Peer Communication with a central repository

Pure multicast is not enough to make distributed multi-user VEs consistent, for two reasons. First, members disconnected due to network failures would still appear in the virtual world, and other users would take them for observers who perform no action but watch other users and objects. Second, when a member joins the multicast group, it might not learn of some members and objects in the system until they send a "keep-alive" message or perform some action that generates an event update. Periodic keep-alive messages consume a significant portion of the bandwidth and are therefore not preferable. The DIVE [17], SPLINE [18], and MASSIVE-2 [19] architectures use a central repository in order to keep track of users in the DVE, provide them with a consistent view of the environment, and help them join the system or recover from a temporary disconnection quickly. These systems use a central repository while taking advantage of the efficiency provided by network-layer multicast.

[Figure 3 diagram: a new member (1) JOINs a multicast group; the group manager (2) adds the user to the group and (3) sends the initialization data and member database; (4) event interactions then proceed within the group.]

Figure 3: Peer-to-peer communication architecture with a central repository.

Multicast is rapidly emerging as the recommended way to build large-scale DVEs. It provides desirable network efficiency while also allowing the DVE to partition different types of data by using multiple multicast addresses. Using a well-known multicast address, DVE participants can announce their presence and learn about the presence of other participants.

4 Proposed Network Architecture over IPv6

The proposed network architecture consists of a cluster of group managers, multicast groups, and users. Group managers are members of an anycast group and have identical IPv6 anycast addresses. The users are divided into multiple groups and may be subscribed to several groups simultaneously. Data distribution and the connection paths among the group managers stay hidden from the user. This architecture, shown in Figure 4, has advantages in data sharing, reliability, performance, and system growth. The group manager keeps the member database, and the users can use the query service provided by the group manager to get information about other participants. The group manager also generates a unique identifier for each object in the system, so that each object can uniquely identify itself to the shared virtual environment. The group manager is mainly responsible for maintaining a consistent state for the virtual world. It does this by implementing a kind of database system in which objects can be added, removed, or modified.


4.1 The Advantage of Anycasting

The use of an anycasting service in IP was first proposed in [20] and, like multicasting, has been offered as a standard service of IPv6. The motivation for using anycasting in our architecture is that it considerably simplifies the task of finding an appropriate group manager in the system. It allows the user to choose the closest group manager, reducing the network latency and achieving rapid join times. Another advantage of anycasting is load balancing in the case of very large-scale DVEs, such as the ones used in military simulations. Anycasting is more efficient than multicasting, since anycast addresses use the same address and routing architecture as unicast addresses, whereas multicast packets have their own address and routing protocols, and multicasting cannot be supported in the absence of a router. Due to the limitations of the current IPv6 implementations and the infancy of the anycasting feature [12], our implementation consists of only one group manager and is a subset of the proposed architecture shown in Figure 4. In the future, however, it will be easy to extend it to exploit the advantages of anycasting.

4.2 Spatial Partitioning

When modeling large virtual environments that accommodate many users, it is often helpful to split the virtual space into a number of smaller worlds. In large-scale DVE applications, a particular object is typically interested in only a subset of all other objects in the virtual world. For example, assume that you are at a virtual mall. While you are travelling along the pathway in one section of the mall, you cannot see, and therefore would not be interested in, whatever is happening in the other sections. When you are buying shoes, for example, you only need to receive the interactions within the shoe store or those that occur within your field of vision. The virtual world can similarly be broken up into smaller regions, and region-to-region visibility can provide a basis for filtering updates in the multi-user VE. Without some filtering mechanism, each simulation must process the data sent by all other objects; in large-scale DVEs this processing alone can overwhelm the capabilities of the users. With such filtering, the network load can be bounded, because not every participant has to receive the position updates of all the others. Another reason to break up large worlds is to accelerate rendering: it is clearly not practical to download an entire world every time you enter the VE.

The group manager recognizes the importance of spatial partitioning and provides explicit support for regions. It stores objects hierarchically; each object has a parent member, which is the user who owns the object, and each member is located under one group, which is the spatial partition in which the user resides in the virtual world. The spatial extents of each group are represented by a set of coordinates. Those coordinates correspond to the borders of the partition and are different for each VE. For example, in a virtual mall, those coordinates would delimit the individual shops within the mall. Each partition is associated with a different multicast address, and the group manager is a member of all those multicast channels. When a user moves from one partition to another, the networking module detects the coordinate changes and transfers the user to the corresponding group. The group manager, in turn, sends information about the partition to the user. Further improvements in scalability could be achieved using a network of group managers that cooperate by taking responsibility for different regions within the simulated environment.
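The partition-to-multicast-group mapping described above can be sketched as a lookup over axis-aligned region bounds, with a crossing check that tells the networking module which group to leave and which to join. All region names, coordinates, and multicast addresses below are hypothetical, for illustration only:

```python
# Hypothetical partition table for a virtual mall: each 2-D region
# (x0, y0, x1, y1) is bound to its own site-local multicast group.
PARTITIONS = {
    "shoe-store": ((0, 0, 20, 15), "ff05::1:1"),
    "food-court": ((20, 0, 50, 15), "ff05::1:2"),
    "atrium":     ((0, 15, 50, 40), "ff05::1:3"),
}

def partition_of(x: float, y: float):
    """Return (partition name, multicast group) for a position, or None."""
    for name, ((x0, y0, x1, y1), group) in PARTITIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name, group
    return None

def on_move(old_pos, new_pos):
    """Detect a region crossing; the networking module would then drop
    the old group's membership and join the new group."""
    old, new = partition_of(*old_pos), partition_of(*new_pos)
    if old != new:
        return {"leave": old and old[1], "join": new and new[1]}
    return None  # still inside the same partition, nothing to do

print(on_move((5, 5), (25, 5)))  # {'leave': 'ff05::1:1', 'join': 'ff05::1:2'}
```

Because membership changes only at region boundaries, each user receives updates only from the partition (and hence the multicast channel) it currently occupies.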

Object Consistency Since DVEs are highly interactive and dynamic, objects and their attributes have to be modified quite frequently. Objects can be created or destroyed dynamically. Each user can control one or multiple objects. The user who creates the object is the default owner of that particular object, although

Group Managers with identical anycast addresses Group Manager

Group Manager

Anycast Group Multicast Group

Users forward traffic to any group manager Group Manager QUERY Group Addresses

Multicast Group

JOIN

Figure 4: The proposed network communication architecture. 6

Primitive

Table I: Networking Module API. Method Join

Connection management Group management Ownership management QoS management Object management

Leave AddMembership DropMembership RequestOwnership GrantOwnership SetPriority GetPriority EventUpdate TextMessage GesturalMessage

others have the chance to ask for the ownership of the object later. The system architecture must support concurrency control for managing conflicting user actions. If a user tries to open a door while another is trying to close it, what happens? We solve this issue by associating each object with only one owner. This ownership mechanism is necessary to restrict more than one user modifying an object simultaneously, to prevent contention, and to maintain consistency in the virtual world. The group manager prevents multiple users from simultaneously modifying the same data. The ownership mechanism ensures that only the owner user has the ability to update object attributes and delete object instances. Users must explicitly request and obtain the ownership of the specific object before updating it. When acquiring the ownership of an object, the user sends a REQUEST_OWNERSHIP message to the group. The owner of the object will then either grant the ownership and reply with a GRANT_OWNERSHIP message, or ignore the request. If there is no reply after a certain timeout, the requester knows the ownership was not granted. This mechanism requires fewer messages to be transferred over the network. 4.4

Networking module Networking considerations define the basic part of the real-time collaboration among diverse users in a distributed VE application. Our goal is to create a transport system such that users would be unaware of the distribution, i.e. the distribution would be transparent. We, therefore, separated the network part from the system, and build another flexible component, called networking module, to handle low-level aspects of collaboration. The networking module of the application provides mechanisms for handling packet loss, receiving data out-of-order, consistency, synchronization, and connection management. It manages the connections among the participants, and handles the possibility that the data might get delayed in transit. The networking module Application Program Interface (API) provides the higher level modules with access to the

Table I: Networking module API primitives and their parameters. (The primitive-name column was lost in extraction; the parameters include localIPv6Address, groupManagerAddress, partitionAddress, port, username, avatarURL, objectId, requestingUserId, flowType, trafficClassValue, field, value, message, and gestureIndex.)

network. The networking module API primitives, as shown in Table I, are structured into the following categories:
• Connection management primitives, which allow users to establish a connection to the group manager, receive the group list, and leave the system.
• Group management primitives, which allow a user to join a multicast group, get group information, and gracefully leave a group.
• Ownership management primitives, which allow a user to request or transfer the ownership of an object in the DVE.
• QoS management primitives, which allow a user to learn or change the default network QoS values.
• Object management primitives, which support the operations performed or initiated by the objects in the DVE.
The networking module is mainly responsible for two tasks. First, it determines which group to join and detects when a different group must be joined due to movements within the virtual world. Second, it passes data between the application and the communication link over IPv6. The state transition diagram of the networking module is shown in Figure 5. The entire network communication is made transparent to the user by means of the networking module. When the user runs the DVE application, the networking module sends the necessary query messages to the group manager. The group manager responds with the list of available groups, and the networking module presents this list to the user. Once the user chooses a group to join, the networking module does the rest to log the user into the system. During navigation, the networking module forwards the updates generated by the local user to the network, and listens for remote updates transmitted by the other users connected to the system.
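The connect/join flow just described, and the states shown in Figure 5, can be sketched as a small transition table; all state and event names here are illustrative, not the module's actual identifiers:

```python
# Minimal sketch of the networking module's connection state machine
# (states and transitions follow Figure 5; names are illustrative).
TRANSITIONS = {
    ("IDLE", "send_query"): "QUERY",
    ("QUERY", "group_list_received"): "GROUP_LIST_RECEIVED",
    ("GROUP_LIST_RECEIVED", "send_join"): "JOIN",
    ("JOIN", "ack_with_user_id"): "CONNECTED",
    ("CONNECTED", "send_packet"): "CONNECTED",       # SEND
    ("CONNECTED", "receive_packet"): "CONNECTED",    # RECEIVE
    ("CONNECTED", "join_another_group"): "JOIN",     # ADD MEMBERSHIP
    ("CONNECTED", "leave_group"): "GROUP_LIST_RECEIVED",  # DROP MEMBERSHIP
    ("CONNECTED", "disconnect"): "IDLE",             # LEAVE: free resources
}

def step(state, event):
    """Advance the state machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "IDLE"
for event in ["send_query", "group_list_received", "send_join",
              "ack_with_user_id", "send_packet", "disconnect"]:
    state = step(state, event)
assert state == "IDLE"
```

A table-driven machine like this keeps the protocol logic in one place, which matches the module's goal of hiding network-specific issues behind a well-defined API.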


[Figure 5 summary: the networking module starts in IDLE, sends a query request to the group manager (QUERY), receives the list of available groups (GROUP LIST RECEIVED), sends a join message to the group (JOIN), and enters CONNECTED upon receiving an acknowledgement packet carrying a user ID. While CONNECTED it sends and receives network packets (SEND/RECEIVE), and may join another group (ADD MEMBERSHIP) or leave a group (DROP MEMBERSHIP, MEMBERSHIP DROPPED). On LEAVE, it sends a disconnection request to the group manager, frees the system resources, and returns to IDLE.]

Figure 5: State transition diagram of the networking module.

Reading from the network is a blocking operation, i.e., the application would block until there is data to read from the network. To eliminate this blocking behavior, the networking module creates a separate thread called the "Receiver", whose main task is to read data from the network. When the user is finished with the application, the receiver thread is terminated; the networking module informs the other members and the group manager of its disconnection from the system, and then frees the system resources.

5 Providing Dynamic QoS at the Network-level

DVE networks carry increasing volumes of bandwidth-intensive, real-time interactive traffic, which stretches network capability and resources. Traffic management in such networks is therefore an important issue; it involves managing the allocated resources during data transfer to ensure continuous maintenance of the required QoS levels. If not managed efficiently, the network can easily become a bottleneck. As more users participate in the distributed VE, the aggregate amount of information generated by the application increases. The network capacity, however, is a limited resource, so the system must be carefully designed to determine how to allocate it among the various types of information that the users in the VE must exchange. For example, when a user connects to the multi-user VE through a modem connection that provides minimal network resources, it is desirable to differentiate among the various traffic flows in order to ensure consistency within the VE. Our objective in this section is to develop and evaluate an end-to-end QoS framework to be used with our network architecture. Our proposed technique distinguishes between those messages that are causally significant and

those that are merely contingent; we maintain causal consistency only for the former. This architecture achieves a further optimization in state consistency by anticipating the effect of interactions, allowing control commands and ownership exchanges to be carried out in advance, as explained below. The basic building blocks of the dynamic QoS architecture we propose are traffic classification, packet marking, and output scheduling of the marked packets.

5.1 Traffic Classification

Traffic classification means that traffic is grouped or aggregated into a small number of classes, with each class receiving a particular QoS in the network. The aim of packet classification is to group packets based on predefined criteria so that the resulting groups can be subjected to specific packet treatments. Selecting the mission-critical packets is of particular importance, to ensure that they get the service they require even in times of congestion. DVEs usually generate two types of network packets: frequent event updates and less frequent control commands. Event updates occur when a user is moving within the virtual world; they are frequent, and if some are lost or arrive out of order, subsequent updates correct these errors. As a user moves, his or her system sends out a series of positional messages; if one of them is lost, the system can pick up subsequent messages and ignore the dropped one. For example, a series of positional events is multicast as an avatar moves; if one of these messages is dropped, the position is corrected by the next message and the other users simply see the avatar jump to the new position. To some extent, event updates can also compensate for excessive latency by extrapolating continuously changing values, as in dead reckoning of object movement based on initial position plus velocity and acceleration. The control commands, however, involve state changes and are of higher importance.

Figure 6: Proposed dynamic QoS architecture consists of traffic classification, packet marking, and output scheduling. [Control commands form flow 1 (no-loss priority) and event updates form flow 2 (low-delay priority); both pass from the networking module through traffic classification, packet marking, and output scheduling into the output queue toward the IPv6 multicast group.]

For these reasons, we separated the event-update and control-command traffic into different flows and assigned different service levels to the flows, as shown in Figure 6. Our architecture extends this classification to provide dynamic QoS to the application by splitting the Expedited Forwarding (EF) service class. We use a scheduler to decompose the event updates into different flows, which are assigned a set of priorities in order to guarantee a graceful degradation of service in case of network congestion. If we assume that the traffic is synchronous, with interval I between successively generated packets, we can bound the delay between two packets received by the remote end of the DVE application to N*I, where N is the number of EF classes, even under network conditions where only a fraction 1/N of the generated packets can be forwarded. For example, assume that event updates occur at 10 ms intervals and that we have two service classes, EF1 and EF2, the latter having higher priority. The generated packets are then assigned alternating service levels (EF1, EF2, EF1, EF2, ...). When the network is congested, the low-priority packets belonging to the EF1 class are dropped first; the event updates are still transmitted, but at 20 ms intervals, which is the rate at which EF2 packets are sent. The result is a dynamic QoS adaptation that incrementally reduces the service level in response to network congestion. This is better than a burst error in which all sequential packets are lost and the user might be disconnected from the system for a while. In our dynamic QoS adaptation scheme, the networking module takes remedial action to scale the flows when resource availability or user QoS requirements change.
Adapting the application to network conditions is introduced here as a technique to make data transmission more efficient and to improve resource management in the network. By means of this architecture, DVEs can be made to adapt to fluctuating network conditions with minimal perceptual distortion. With slight modifications, the architecture can provide expedited handling appropriate for a variety of real-time multimedia applications.
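The alternating class assignment and the N*I delay bound can be checked with a small simulation, using the illustrative values from the example above (I = 10 ms, N = 2):

```python
# Round-robin assignment of event updates to N EF classes (here N = 2),
# as in the example above: packets generated every I = 10 ms alternate
# between EF1 and EF2, with EF2 having the higher priority.
I_MS = 10          # packet generation interval, ms
N = 2              # number of EF classes
classes = [f"EF{(i % N) + 1}" for i in range(8)]   # EF1, EF2, EF1, EF2, ...

# Under congestion the network drops the lower-priority EF1 packets first;
# the surviving EF2 packets still arrive, but spaced N * I = 20 ms apart.
survivors = [i * I_MS for i, c in enumerate(classes) if c == "EF2"]
gaps = [b - a for a, b in zip(survivors, survivors[1:])]
assert all(gap == N * I_MS for gap in gaps)   # bounded degradation, no burst loss
```

The key property is that the worst-case gap between delivered updates is bounded by N*I rather than by an unbounded burst of consecutive losses.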

5.1.1 Packet Marking

Once we classify packets, the next step is to "mark" them with a unique identification to ensure that this classification is respected end to end. The simplest way to do this is to use the traffic class field in the header of an IPv6 datagram. To stay compliant with the standards, we used the DSCP [21] as the classification criterion of choice. The marking is achieved by setting the protocol, flow label, and DSCP fields of the packet. In our architecture, we used the Expedited Forwarding (EF) Per-Hop Behaviour (PHB) [22] and the Assured Forwarding (AF) PHB [23]. Control-command packets (flow 1) are assigned the AF service, so that the drop precedence for the packets in this flow is low. Event-update packets (flow 2) are assigned the EF service, so that the packets in this flow receive low latency. The corresponding DSCP codepoints used are 100010 for control commands and 101110 for event updates. The purpose of marking packets in this way is to ensure that downstream QoS features such as scheduling and queuing can give the marked packets the right treatment. Classification thus leverages the network's services and provides a means of differentiation among the multiple traffic flows within the same application.

5.1.2 Output Scheduling

Once traffic has been classified, the next step is to ensure that it receives the corresponding treatment in the nodes, which brings scheduling and queuing into focus. Packet-scheduling disciplines arbitrate access to link bandwidth and provide performance guarantees to applications; the packet scheduler determines the order in which each packet is served (transmitted). The simplest scheduling algorithm orders packets as a function of their priority, so that packets with higher priority are transmitted first. This method can cause an indefinite waiting period for lower-priority packets if the higher-priority traffic is very heavy. To avoid this problem, multiple queues are usually used, with a minimum service rate associated with each queue. In our testbed, we have implemented the Priority First In First Out (PFIFO) and Class Based Queuing (CBQ) queue management schemes.
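As a concrete illustration of the marking step: the 6-bit DSCP occupies the upper bits of the IPv6 traffic class octet (the low two bits are left for ECN), and on platforms that expose it the resulting value can be applied per-socket with setsockopt(IPPROTO_IPV6, IPV6_TCLASS, value). A minimal sketch of the octet computation, using the codepoints given in the text:

```python
# The 6-bit DSCP sits in the upper bits of the IPv6 traffic class octet;
# the remaining 2 bits are reserved for ECN. Codepoints from the text:
EF_DSCP = 0b101110    # Expedited Forwarding (event updates)          = 46
AF_DSCP = 0b100010    # Assured Forwarding, low drop (control cmds)   = 34

def traffic_class(dscp, ecn=0):
    """Build the IPv6 traffic class field value from DSCP and ECN bits."""
    return (dscp << 2) | (ecn & 0b11)

assert traffic_class(EF_DSCP) == 0xB8   # 184
assert traffic_class(AF_DSCP) == 0x88   # 136
```

Marking is cheap at the sender, which is exactly why DiffServ pushes the per-packet classification work to the edge and leaves only aggregate treatment to the core.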

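A strict-priority queue of the kind described above can be sketched as follows; this is illustrative only (a real PFIFO qdisc also bounds queue length, and CBQ adds per-class minimum service rates precisely to avoid the starvation problem just noted):

```python
import heapq

# Toy strict-priority scheduler sketch: packets are served highest priority
# first; within a priority class, FIFO order is preserved via an arrival
# counter (heapq is a min-heap, so the priority is negated).
class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._count = 0

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (-priority, self._count, packet))
        self._count += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue("event-update-1", priority=1)
sched.enqueue("control-cmd", priority=2)
sched.enqueue("event-update-2", priority=1)
assert sched.dequeue() == "control-cmd"       # higher priority served first
assert sched.dequeue() == "event-update-1"    # FIFO within the same class
```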

The packet scheduler is DSCP-aware; that is, it is able to detect higher-priority packets marked with precedence by the application and can schedule them faster, providing superior response time for this traffic. As the priority value increases, the algorithm allocates more bandwidth to that flow to make sure it gets served more quickly when congestion occurs.

5.2 Achieving Dynamic QoS End-to-End

Under the DiffServ model, ISPs provide Service Level Agreements (SLAs) to their customers [10]. An SLA basically specifies the service classes supported and the amount of traffic allowed in each class. Users mark the DS field of individual packets to indicate the desired service. At the ingress of the network, packets are classified, marked, scheduled, and possibly shaped. When a packet crosses from one domain into another inside the network, the DS field may be re-marked, as determined by the SLA between the two domains. Core routers inside the network implement classification as required by the DiffServ standard, while the sophisticated classification, marking, and scheduling operations are needed only at the boundaries of the networks. This allows core routers to forward packets very fast, while boundary routers that link slow customer connections spend more time on classification, policing, and shaping. Our architecture categorizes the traffic based on service requirements and introduces a way to meet them: it prioritizes control commands to ensure that mission-critical packets get low-drop service, while simultaneously servicing event-update packets to minimize latency. It optimizes networking efficiency by relaxing the reliability requirements of event updates and by exploiting the multicast and QoS capabilities of IPv6. Although our architecture does not provide more bandwidth, the DiffServ service levels are guaranteed to outperform best-effort traffic. Even though IP QoS is in its infancy, it is quite clear that it will be an absolute requirement for next-generation distributed applications. By implementing the network architecture on top of IPv6, we are guaranteed that all network nodes are DS-aware; the DS-capable IPv6 routers will treat the traffic generated by our architecture according to its service-level requirements.

6 VESIR-6: An example DVE application using the Proposed Architecture

The network architecture we propose is deployed in a DVE application called Virtual Environment Supporting Multi-user Interaction over IPv6 (VESIR-6) [24]. Users are able to run VESIR-6 in any web browser with a 3-D viewer plugin, which assures platform independence and easy integration with the Internet. They need a dual-stack machine, since the networking is done over native IPv6 communication. There are many IPv6 implementations for the Windows NT, Solaris, and Linux operating systems, and most of them are distributed freely [4]. By installing IPv6 on their computers, users do not sacrifice their IPv4 connectivity, since IPv6 implementations support the dual-stack approach.

6.1 VESIR-6 System Architecture

The VESIR-6 system consists of four independent modules: the user interface, virtual objects, interaction agents (VIAs), and the networking module. Each module constitutes a separate library with minimal dependencies. The system also needs an external information source that provides the complete and coherent state of the virtual-world content to the users, especially to new members; the group manager is used for this purpose. Unfortunately, not all the features envisioned in our proposed network architecture were implemented in VESIR-6. In particular, since IPv6 is still in its infancy, not all the features proposed in the standard are exploited in existing implementations. For example, anycast has not yet been deployed in IPv6 implementations, and research on how to manage anycast addresses is still ongoing. The group manager is therefore a member of a "well-known" multicast group, as shown in Figure 3 previously. This technical issue made it necessary to redefine the group manager somewhat, but the core features of the multi-user environment have been fully realized over IPv6. Users simply send their requests to this multicast group, and the group manager listens for user requests on this multicast channel. The overall system architecture of VESIR-6 is shown in Figure 7. Users are able to grab objects they own and move them within the DVE. Object management is a joint effort of the user interface and the VIAs; the main issues in object management are consistency and ownership. Consistency is achieved by having the VIAs separate object state updates from event updates and by assigning service levels based on their requirements. Ownership management is handled by the user interface, where users can transfer ownership of objects on request. Our user interface provides the user with the necessary means to create or delete an object, or to ask for its ownership.
In VESIR-6, each user has a local copy of the shared data, which we call the world model. These data embody the virtual-world contents, including the objects and users within it. Users can modify the content of the environment by importing new objects into the shared space, and interaction agents control the actions of the objects. To avoid modification conflicts, each object in the virtual world has one user as its owner, and only that owner can modify the object; the ownership of an object can, however, be transferred from one user to another. The ownership management layer is responsible for these tasks: it keeps track of the objects deposited by the various users, together with their access rights and updates. When a user tries to modify an object, this layer sends the update if the access rights permit it. Local events are detected by the interaction agents and forwarded to the networking module. Before sending a local update, the corresponding interaction agent applies message filtering based on the ownership mechanism. The interaction agents are also responsible for receiving remote updates and modifying the corresponding objects in the local replica of the environment.
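The ownership filtering rests on the request/grant handshake described earlier: a REQUEST_OWNERSHIP message is sent to the group, and silence until a timeout means the request was denied. A minimal sketch, with illustrative function names and timeout value:

```python
import queue

# Sketch of the ownership handshake: a user sends REQUEST_OWNERSHIP and
# waits for GRANT_OWNERSHIP; silence until the timeout means the request
# was denied (no explicit deny message is ever sent on the network).
def request_ownership(send, replies, object_id, timeout=0.5):
    send(("REQUEST_OWNERSHIP", object_id))
    try:
        msg = replies.get(timeout=timeout)   # block until reply or timeout
        return msg == ("GRANT_OWNERSHIP", object_id)
    except queue.Empty:
        return False                         # owner ignored the request

replies = queue.Queue()

# Simulated current owner that grants the request only for object 7
def owner(msg):
    if msg == ("REQUEST_OWNERSHIP", 7):
        replies.put(("GRANT_OWNERSHIP", 7))

assert request_ownership(owner, replies, 7) is True
assert request_ownership(owner, replies, 8) is False  # times out, not granted
```

Treating silence as denial is what keeps the message count low: only successful transfers cost a reply.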

[Figure 7 summary: each VESIR-6 user hosts a virtual world in which local and remote events flow through interaction agents, a world model, an object database, ownership management, a user interface, and a networking module; the group manager (member/group database, query service, consistency management) and all users communicate over the IPv6 network.]

Figure 7: Overall system architecture of VESIR-6.

Update messages are sent whenever an object updates its attributes or an interaction takes place. Keepalive messages, which might help maintain consistency, are not used because they waste considerable bandwidth. To ensure that the object updates sent by a user can be decoded by the receivers, the system must provide the receivers with knowledge of the objects in the environment. It would be possible to include the relevant objects in each update, but the update messages would then be extremely large and bandwidth resources would be wasted. It is more efficient to ensure that users keep the objects in a local database, which is what VESIR-6 does. Each object has a unique descriptor, and only this descriptor is attached to an update message in order to identify the particular object. The virtual world is fully replicated at each user's side; therefore, during each state update, only the changing part of the database is sent to the clients.
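A hypothetical encoding of such an update message (JSON is used here only for readability; the actual VESIR-6 wire format is not specified in this section): only the object's descriptor and the changed attribute travel over the multicast group, and the receiver resolves the descriptor against its local object database.

```python
import json

# Hypothetical wire format for an update message: only the object's unique
# descriptor plus the changed field is transmitted; the receiver resolves
# the descriptor against its fully replicated local object database.
local_db = {"chair-42": {"position": (0, 0, 0), "owner": "alice"}}

def encode_update(object_id, field, value):
    return json.dumps({"id": object_id, "field": field, "value": value}).encode()

def apply_update(db, payload):
    msg = json.loads(payload)
    db[msg["id"]][msg["field"]] = msg["value"]   # only the changing part

apply_update(local_db, encode_update("chair-42", "position", [1, 2, 0]))
assert local_db["chair-42"]["position"] == [1, 2, 0]
```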

6.2 Networking Module

Data sharing is supported via the networking module. Update packets and other information are sent over the multicast network, and each user maintains a database of objects whose state is updated using the received data. The networking module separates communication management from the application design, thereby reducing the effort required to build a networked VE; it can also be used independently of VESIR-6. The generic architecture of the system allows VESIR-6 to use any networking module, deploying any kind of underlying network communication architecture and protocol. The networking module resides on the user's computer. It is primarily used to encapsulate network calls and to access

the IPv6 stack on the host machine. The necessary IPv6 network calls are integrated into a Dynamic Link Library (DLL) that the application loads dynamically at runtime. The networking module is composed of two independent threads: one for forwarding data to the network, and the other for receiving data from the network. Each user's environment has its own receiver and sender components to communicate with other users' environments. Both sender and receiver run asynchronously with events and virtual agent processes, which ensures fast response times and good performance over low-bandwidth networks. The data exchange in both cases is achieved by means of IPv6 socket connections. When a user exits the application, the networking module sends a message to the multicast group to disconnect from the system; the active threads and sockets are then closed, and the system resources are freed.
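The two-thread structure can be sketched with a loopback UDP socket over IPv6; the addresses, queue discipline, and shutdown sentinel below are illustrative, not the module's actual design:

```python
import queue
import socket
import threading

# Sketch of the module's two independent threads: a sender draining an
# outgoing queue, and a blocking receiver posting incoming datagrams to
# the application. The sketch loops packets back to itself over ::1.
def make_module():
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.bind(("::1", 0))
    outgoing, incoming = queue.Queue(), queue.Queue()

    def sender():
        while True:
            data = outgoing.get()
            if data is None:                 # shutdown sentinel
                break
            sock.sendto(data, sock.getsockname())

    def receiver():
        while True:
            data, _ = sock.recvfrom(2048)    # blocking read, off the main thread
            if data == b"QUIT":
                break
            incoming.put(data)

    threads = [threading.Thread(target=sender, daemon=True),
               threading.Thread(target=receiver, daemon=True)]
    for t in threads:
        t.start()
    return sock, outgoing, incoming

sock, out_q, in_q = make_module()
out_q.put(b"update:avatar-pos")              # loops back to ourselves here
assert in_q.get(timeout=2) == b"update:avatar-pos"
out_q.put(None)                              # stop sender
sock.sendto(b"QUIT", sock.getsockname())     # stop receiver
```

Keeping the blocking recvfrom on its own thread is what frees the rendering loop from network stalls, as the paper describes for the "Receiver" thread.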

6.3 Group Manager

The group manager's basic responsibility is to provide object consistency within the DVE. Individual users may be looking at different parts of the virtual space from different viewpoints, but at any given time they all have the same set of objects loaded in their worlds. Once running, the group manager receives commands from users and updates its world database accordingly. When a new user joins the system, the group manager sends him or her information about the environment and the other users. It may be desirable to divide the world model into regions for scalability; the group manager is designed to provide a query service and to ease this kind of partitioning. The group manager is a stand-alone IPv6 server implemented in C++. It implements a hierarchical linked list for storing the virtual world contents: it accommodates the system users inside the groups they belong to, and places the objects under their owner members. This storage mechanism is efficient in the sense that lookups and modifications are fast. Separating the users into multiple groups allows the environment to be divided into spatial partitions, so that each region in the virtual world corresponds to an individual group. There will be a burst of incoming traffic when a user moves from one partition to another; this move is managed by the networking module and is equivalent to the user leaving one group and joining another. The group manager is responsible for providing a user entering a partition with the current state of the VE for that partition. We have included all the network functions necessary for users to move from one group to another; applications using spatial partitioning need this ability whenever a user moves from one region of the VE to another.
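The hierarchical group, owner, object layout can be sketched with nested maps (the actual implementation uses a hierarchical linked list in C++; the names below are illustrative):

```python
# Sketch of the group manager's hierarchical store: groups (spatial
# partitions) contain member users, and each object lives under its owner,
# so lookups and ownership transfers are simple dictionary walks.
world = {
    "lobby": {                                          # group / partition
        "alice": {"chair-42": {"position": (0, 0, 0)}}, # owner -> objects
        "bob": {},
    },
}

def transfer_ownership(group, obj_id, old_owner, new_owner):
    """Move an object from one owner's subtree to another's."""
    world[group][new_owner][obj_id] = world[group][old_owner].pop(obj_id)

transfer_ownership("lobby", "chair-42", "alice", "bob")
assert "chair-42" in world["lobby"]["bob"]
assert "chair-42" not in world["lobby"]["alice"]
```

Because objects hang under their owners, an ownership transfer touches exactly two subtrees, which is consistent with the paper's claim that modifications are fast.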

6.4 VESIR-6 Trials over CA*net II

In order to show the capabilities and possible applications of VESIR-6, we tested it by launching a 3-D tele-learning application among three IPv6 islands linked through CA*net II. The participating nodes at the University of Western Ontario, the Communications Research Center (CRC) in Ottawa, and the University of Ottawa Multimedia Communications Research Laboratory (MCRLab) ran a VESIR-6 tele-learning application aimed at teaching the installation and maintenance of Newbridge products, as shown in Figure 8. This trial demonstrated how an important application, distance training, can be implemented in a more powerful way, through a distributed interactive virtual environment, over the next-generation Internet protocol.

Figure 8: VESIR-6 trials over CA*net II. [The MCRLab, CRC, and UWO IPv6 islands connect to CA*net II through OC3 links.]

6.5 Traffic Measurements and Modeling

Extensive research has been conducted in the past on the characterization of the traffic encountered in modern telecommunication networks. Such studies have focused on individual traffic sources and on the aggregate network traffic encountered in today's communication systems, examples being www, ftp, telnet, rlogin, video, Ethernet, and ATM traffic. To the best of our knowledge, however, no work has dealt with the traffic characterization of virtual reality applications. For traffic monitoring we used the InterWatch 95000 (IW95000) ATM/LAN/WAN protocol analyzer from GN Nettest. We monitored the traffic produced by a user invoking changes in the virtual environment; the traffic coming to a user, notifying him or her of the changes that occurred in the virtual environment, is the aggregation of the messages generated by all other users. We found that the average transmission rate was roughly 4.2 Kbps. In what follows, a more quantitative traffic analysis is performed; its outcome is an empirical traffic model for the virtual reality application. From Figure 9, which shows a partial picture of packet size vs. packet arrival time, we can see that the packet size almost always has the same value of 92 bytes, with a few exceptions of packets having 64, 82, 130, 180, or 232 bytes. A periodic rather than asynchronous behavior can also be observed in the generation of packets. To verify this periodicity, Figure 10 shows a plot of the packet inter-arrival time. A bimodal behavior can be observed, in the sense that the values tend

to accumulate around two different values, which correspond approximately to 10 and 280 ms. We therefore propose a bursty (ON/OFF) model for the generation of packets. In this model, the shorter packet inter-arrival times correspond to the distance between packets within the same burst, while the longer ones correspond to the distance between packets belonging to different bursts. Figure 11 displays the burst inter-arrival time, while Figure 12 shows the burst-size behavior. The burst-inter-arrival-time empirical pdf (probability density function) can be approximated by a Gaussian distribution with mean 0.282 and variance 1.5×10^-4, whereas the burst-size pmf (probability mass function, since it is a discrete random variable) can be approximated as 1 plus a Poisson random variable with parameter λ = 0.68. The empirical and approximated curves are compared in Figures 13 and 14. To generate this kind of traffic artificially, for testing through simulation or experimentation, one could use either the empirical distributions obtained from the corresponding histograms or the approximated analytic expressions.

A general conclusion that can be drawn from our model is that the source has an ON/OFF behavior. However, the distribution of the time that the source remains in the ON state does not appear to have long-tailed characteristics, and the same applies to the OFF state. This observation leads us to conclude that the aggregation of the traffic produced by such sources should have short-range dependence characteristics. This is in contrast to the models describing other types of traffic (e.g., ftp, rlogin, www), which are governed by long-tailed distributions [25][26] and, when aggregated, produce long-range dependence, as explained in [27]. It also suggests that this type of traffic will be friendlier to the network and to competing applications.

6.6 Behavior of the application under competing traffic and network impairments

It is important to have a clear idea of how the application is affected by congestion and network impairments, and we have carried out experiments to assess this. In our analysis, we have included Best Effort (BE) traffic, the EF classes described in our architecture, and the two queue management schemes used to implement the PHB services: Priority First In First Out (PFIFO) and Class Based Queuing (CBQ).
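The empirical model of Section 6.5 (Gaussian burst inter-arrival times, 1 + Poisson burst size, 92-byte packets roughly 10 ms apart) can be sketched as a traffic generator; the helper names are illustrative:

```python
import math
import random

# Sketch of the empirical ON/OFF source model: burst inter-arrival times are
# Gaussian (mean 0.282 s, variance 1.5e-4), and each burst contains
# 1 + Poisson(0.68) packets of 92 bytes, spaced ~10 ms apart.
def poisson(lam, rng):
    # Knuth's method; adequate for the small lambda used here
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def generate_bursts(n, rng=random.Random(1)):
    t, packets = 0.0, []
    for _ in range(n):
        t += rng.gauss(0.282, math.sqrt(1.5e-4))   # burst inter-arrival time
        size = 1 + poisson(0.68, rng)              # burst size in packets
        packets += [(t + i * 0.010, 92) for i in range(size)]
    return packets  # list of (arrival time in s, size in bytes)

pkts = generate_bursts(1000)
assert all(size == 92 for _, size in pkts)
assert len(pkts) >= 1000                           # every burst has >= 1 packet
```

With these parameters the mean rate works out to about (1.68 packets × 92 bytes × 8 bits) / 0.282 s ≈ 4.4 kbit/s, consistent with the measured average of roughly 4.2 Kbps.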

Figure 9. Packet size vs. packet arrival time.

Figure 10. Packet inter-arrival time vs. packet number.

Figure 11. Burst inter-arrival time vs. burst number.

Figure 12. Burst size vs. burst number.


Figure 13. Empirical and approximated burst-inter-arrival-time pdf.

6.6.1 Testing the application under competing traffic

The Ethernet segment that connects the relevant user with its corresponding multicast group is loaded with background traffic produced by the traffic generator of the IW95000 testing tool. The generated background traffic corresponds to previously recorded traffic streams collected over Ethernet segments, so the competing traffic has the statistical behavior of real traffic. We ran the experiments for different volumes of background traffic, with six experiments per value of network loading. The common observation is that, when the network load is high (above 50%), competition causes the application to detect inconsistencies; the average time between inconsistencies ranges between 5 and 6 minutes. For traffic loading above 99%, the application deteriorates rapidly and cannot function at all. Figure 15 displays the percentage of packet loss experienced by the traffic stream at the core router for different values of background traffic. We observe a significant improvement in packet loss rate when using the DiffServ (EF) configurations as opposed to BE traffic. Among the DiffServ service classes, EF1-PFIFO provides the worst performance: at 50% loading it gives a packet loss rate of 10^-5; at 90% loading, this becomes 4.5×10^-5.

Figure 14. Empirical and approximated burst-size pmf.

6.6.2 Testing the application under network impairments

This set of experiments assesses the effect of packet losses and of bit errors inside the packets. We first inserted bit errors into the packet payload (information field), then into the packet headers, and finally into both payload and header; errors were inserted randomly. Note that errors in the header cause packets to be lost or misdelivered, while errors in the payload cause packets with corrupted information to be delivered correctly to their destination. Table II summarizes our results.

Table II: Performance under bit error insertion

Bit error rate   In payload   In header   In both payload & header
10^-2            ✗            ✓           ✗
10^-3            ✗            ✓           ☒
10^-4            ✓            ✓           ☑

✗: the VEs are not synchronized and the application detects inconsistencies.
☒: more than one percent of the packets have a latency higher than 100 ms, i.e., the application did not detect inconsistencies but there is appreciable delay in receiving updates.
☑: less than one percent of the packets have a latency higher than 100 ms, i.e., the application experienced some acceptable delay but otherwise worked fine.
✓: all packets have a latency lower than 100 ms, i.e., the application worked fine.

Figure 15. Loss rate of packets versus background traffic loading (Best Effort, EF1-PFIFO, EF1-CBQ, EF2-CBQ, EF2-PFIFO).

From these experiments, we can conclude that the application is considerably more sensitive to errors appearing in the payload than in the header. This may be because it is more harmful for the application to receive erroneous information than not to receive it at all: in the latter case, the loss only implies an additional delay before receiving up-to-date information, which will arrive anyway with the next update. When the errors appear in the payload, on the contrary, the delivered information is wrong, and its use degrades the performance of the application, since it makes it very likely that subsequent updates will be inconsistent with the erroneous information taken from the corrupted packets.

The next experiment consisted of removing complete packets from the link. The results for these experiments are summarized in Table III below.

Table III: Performance under packet losses

  Packet loss ratio | Result
  10^-1             | Fine
  10^-2             | Fine
  10^-3             | Fine
  10^-4             | Fine

  Fine: all packets have a latency lower than 100 ms, i.e., the application worked fine.

We can observe that the application is largely insensitive to packet losses. This confirms our observation from the previous experiment, in the sense that losing information is less harmful than receiving erroneous information. This of course has a limit, as observed in the tests performed on the application under heavy competing traffic (section 6.6.1): when too many packets are lost, the excessive delay caused by these losses still causes the application to starve.
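The claim that a lost update only adds delay until the next update arrives can be quantified with a simple back-of-envelope model. Assuming updates are sent with a fixed period T and losses are independent with ratio p, the expected run of consecutive losses is p/(1-p), so the expected extra delay is T * p/(1-p). The 50 ms update period below is an assumed value for illustration, not a figure from the paper.

```python
def expected_extra_delay_ms(loss_ratio: float, update_period_ms: float) -> float:
    """Expected additional delay before an up-to-date packet arrives,
    assuming independent losses (geometric run of expected length p/(1-p))."""
    p = loss_ratio
    return update_period_ms * p / (1.0 - p)

# Loss ratios from Table III, assumed 50 ms update period
for p in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"loss ratio {p:g}: extra delay {expected_extra_delay_ms(p, 50.0):.4f} ms")
```

Even at the worst tested ratio of 10^-1 the expected penalty stays well under the 100 ms budget, which is consistent with the insensitivity reported in Table III.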

6.6.3 Forwarding Delay Analysis of Event Updates

DVE applications are highly sensitive to end-to-end delay: a packet delivered more than 100 ms after generation is considered outdated. In Figure 16, we have plotted the fraction of packets whose forwarding delay exceeds a given value. The fraction of packets with a forwarding delay of more than 10 ms is 62% for BE traffic, 35% for EF1-PFIFO, 29% for EF2-PFIFO, 16% for EF1-CBQ, and 15% for EF2-CBQ. It is evident that BE performs very badly, while the DiffServ (EF) configurations perform well. The fraction of packets experiencing a forwarding delay higher than 20 ms is very small (less than 1%) in all cases except BE traffic (56%). Among the DiffServ classes, we observe the superiority of CBQ over PFIFO. We also observe that the forwarding delay of EF2 packets is slightly lower than that of their EF1 counterparts.
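The curves in Figure 16 are complementary-CDF-style plots: for each threshold, the fraction of packets whose forwarding delay exceeds it. A minimal sketch of that computation (the delay samples below are invented for illustration, not the measured data):

```python
def fraction_exceeding(delays_ms, thresholds_ms):
    """For each threshold, the fraction of samples strictly greater than it."""
    n = len(delays_ms)
    return {t: sum(1 for d in delays_ms if d > t) / n for t in thresholds_ms}

# Invented per-packet forwarding delays (ms)
samples = [3, 5, 8, 12, 15, 22, 9, 4, 11, 35]
ccdf = fraction_exceeding(samples, [10, 20, 40, 60, 80, 100])
print(ccdf[10])  # 0.5 of the samples exceed 10 ms
```

Running this over the per-class traces and plotting the resulting dictionaries against the thresholds reproduces the shape of Figure 16.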

[Figure 16: forwarding delay distribution; fraction of forwarded packets, from 0 to 0.7, exceeding delays of 10, 20, 40, 60, 80, and 100 ms, for the Best-effort, EF1-PFIFO, EF1-CBQ, EF2-CBQ, and EF2-PFIFO service classes.]

Figure 16. Fraction of forwarded packets whose forwarding delay exceeds a given value.

6.7 Scalability Observations

We used a wide area network with three nodes in two Canadian cities (Ottawa, London) to obtain the traffic measurement results. In this section we analyze those results to see how the traffic scales with the number of users in the DVE. The main observation is the low traffic rate generated in the system (4.2 kbps). This is achieved as a result of the following features of our architecture:
• The users keep a local replica of the DVE (including the objects and their owners). Therefore, only incremental updates are generated.
• Users are interested only in updates happening in their own partition.
• The event updates (movements and gestures) are encoded so that the packet size remains small (about 92 bytes, as observed in the measurements).
The traffic generated within a group depends on the size of the partition. In the worst case, all users would be in the same partition. Even then, the average traffic generated by one user would be 1.4 kbps (4.2 kbps / 3 users), and the complexity of our architecture is O(N). Hence, a 56 kbps dial-up line would support a DVE using our architecture with dozens of users; for a Fast Ethernet network, this extends to tens of thousands of users, which is adequate even for large distributed military VE simulations. We have also injected background traffic into our system. The results show that our architecture behaves well under heavy traffic. For example, at 90% loading, EF1-PFIFO has a packet loss rate of 4.5×10^-5; for the same traffic class, the fraction of packets whose forwarding delay exceeds 20 ms is 2×10^-3. Those figures satisfy the requirements of DVEs, as described in section 2.1, and furthermore show that our architecture can scale to accommodate many users without degrading the service.
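The scaling argument above amounts to dividing link capacity by the per-user traffic rate. A minimal sketch using the measured 4.2 kbps aggregate over 3 users; the link capacities are nominal figures that ignore protocol overhead, so the counts are upper bounds:

```python
# Measured: 4.2 kbps aggregate traffic over 3 users -> 1.4 kbps per user
PER_USER_KBPS = 1.4

def max_users(link_kbps: float) -> int:
    """Upper bound on supported users; traffic grows as O(N) with N users."""
    return int(link_kbps / PER_USER_KBPS)

print(max_users(56))       # 56 kbps dial-up line: dozens of users
print(max_users(100_000))  # Fast Ethernet (100 Mbps): tens of thousands
```

This reproduces the "dozens of users over dial-up, tens of thousands over Fast Ethernet" estimates in the text.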

7 Conclusion

The ability to construct a large-scale distributed VE depends on the underlying network infrastructure. When we analyze the network service requirements of distributed multi-user VEs, we observe that the current Internet protocol, IPv4, is far from satisfying them. IPv6 offers a high-quality network infrastructure for real-time multimedia applications, DVEs among them, with capabilities such as multicast, per-packet QoS support without the need for an additional signaling mechanism, authentication extensions, and data privacy and confidentiality. The security extensions of IPv6 provide security at the network level and will encourage people to carry out electronic transactions without worrying about eavesdropping or theft in electronic commerce applications over DVEs. We have designed a scalable and robust network architecture, using IPv6 anycast and multicast communication, suitable for large-scale DVE applications over the Internet. The network infrastructure deploys IPv6 anycast to locate an appropriate group manager, and IPv6 multicast to transfer state and event updates of the objects within the virtual world. We deployed IPv6 QoS capabilities to manage and utilize the scarce bandwidth resources efficiently. Characterizing the multicast network traffic into different flows and assigning service levels as needed by those flows helped maintain world consistency. We have shown how the networking module permits dynamic QoS adaptation using different service classes in case of network congestion while still maintaining a high degree of transparency for applications. Traffic measurement results prove the feasibility of our architecture for DVE applications. We have used the proposed network architecture with the dynamic QoS features to create a large-scale multi-user VE, called VESIR-6, that is capable of working over wide area networks such as the Internet via low-bandwidth dial-up lines. VESIR-6 shows the possibilities of using IPv6 capabilities to provide a high-quality network infrastructure for distributed virtual environments over the Internet and can be used for conducting online distributed training, interactive simulations, or electronic commerce applications.

REFERENCES
[1] Information technology, Computer graphics and image processing, “The Virtual Reality Modeling Language (VRML)”, International Standard ISO/IEC 14772-1:1997 (also referred to as VRML97).
[2] http://java.sun.com
[3] Postel, J., “Internet Protocol”, STD 5, RFC 791, September 1981.
[4] http://www.6bone.net
[5] “Communication Architecture for Distributed Interactive Simulation (CADIS)”, Institute for Simulation and Training, Orlando, Florida, 28 June 1993.
[6] Miller, D., “Distributed Interactive Simulation Networking Issues”, Briefing presented to the ST/IP Peer Review Panel, Massachusetts Institute of Technology, Lincoln Laboratory, 15 December 1993.
[7] Deering, S. and R. Hinden, “Internet Protocol, Version 6 (IPv6) Specification”, RFC 2460, December 1998.
[8] IETF IP Version 6 Working Group (ipv6).
[9] http://www.mbone.com
[10] Hinden, R. and S. Deering, “IP Version 6 Addressing Architecture”, RFC 2373, July 1998.
[11] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., and W. Weiss, “An Architecture for Differentiated Services”, RFC 2475, December 1998.
[12] Partridge, C., “Using the Flow Label Field in IPv6”, RFC 1809, June 1995.

[13] Funkhouser, T. A., “RING: A Client-Server System for Multi-User Virtual Environments”, ACM SIGGRAPH Special Issue on 1995 Symposium on Interactive 3D Graphics, Monterey, CA, 1995, pp. 85-92.
[14] http://www.csclub.uwaterloo.ca/u/sfwhite/vnet/
[15] Institute of Electrical and Electronics Engineers, ANSI/IEEE Std 1278-1993, Standard for Information Technology, Protocols for Distributed Interactive Simulation, March 1993.
[16] Macedonia, M. R., Brutzman, D. P., Zyda, M. J., et al., “NPSNET: A Multi-Player 3D Virtual Environment over the Internet”, Proceedings, Symposium on Interactive 3D Graphics, New York: ACM, 1995, pp. 93-94.
[17] Carlsson, C. and O. Hagsand, “DIVE - A Multi User Virtual Reality System”, IEEE VRAIS, Seattle, Washington, September 18-22, 1993.
[18] Anderson, D. B., Barrus, J. W., Howard, J. H., Rich, C., Shen, C., and R. C. Waters, “Building Multi-User Interactive Multimedia Environments at MERL”, IEEE MultiMedia, 2(4): 77-82, Winter 1995.
[19] Greenhalgh, C. and S. Benford, “MASSIVE: A Collaborative Virtual Environment for Teleconferencing”, ACM Transactions on Computer-Human Interaction, September 1995.
[20] Partridge, C., Mendez, T., and W. Milliken, “Host Anycasting Service”, RFC 1546, November 1993.
[21] Nichols, K., Blake, S., Baker, F., and D. Black, “Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers”, RFC 2474, December 1998.
[22] Jacobson, V., Nichols, K., and K. Poduri, “An Expedited Forwarding PHB”, RFC 2598, June 1999.
[23] Heinanen, J., Baker, F., Weiss, W., and J. Wroclawski, “Assured Forwarding PHB Group”, RFC 2597, June 1999.
[24] Eraslan, M., and N. D. Georganas, “VESIR-6: Virtual Environment Supporting Multi-user Interaction over IPv6” (unpublished).
[25] Crovella, M. E., and A. Bestavros, “Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes”, IEEE/ACM Transactions on Networking, Vol. 5, No. 6 (December), pp. 835-846, 1997.
[26] Paxson, V. and S. Floyd, “Wide Area Traffic: The Failure of Poisson Modeling”, IEEE/ACM Transactions on Networking, Vol. 3, No. 3 (June), pp. 226-244, 1995.
[27] Tsybakov, B. and N. D. Georganas, “On Self-Similar Traffic in ATM Queues: Definitions, Overflow Probability Bound, and Cell Delay Distribution”, IEEE/ACM Transactions on Networking, Vol. 5, No. 3 (June), pp. 397-408, 1997.
