A Soft Real Time Measurement System for DiffServ/MPLS Edge Routers

R. G. Garroppo, S. Giordano, F. Mustacchio, F. Oppedisano, M. Pagano, G. Procissi

Dipartimento di Ingegneria dell'Informazione, Università di Pisa, Via Caruso, I-56122 Pisa
E-mail: {r.garroppo, s.giordano, m.pagano, g.procissi}@iet.unipi.it, {f.mustacchio, f.oppedisano}@netserv.iet.unipi.it

Abstract

The paper presents the architecture of a measurement system designed to reside in DiffServ/MPLS Linux-based routers. The goal of this research is to develop an open source software product to be used for management purposes and to be integrated into the network control plane in order to automate resource allocation, admission control and Traffic Engineering functionalities. The system is designed to be flexible, configurable and modular, and each module has been implemented so as to minimize the impact of the system itself on the traffic dynamics. The output of the system is a sequence of measurements that report the amount of per-PHB and per-LSP traffic offered to the router over a configurable time window. The system can also predict future traffic activity and trigger asynchronous threshold crossing events. The system has proven reliable, passing several functional tests, and preliminary experimental results show that its output accurately captures the traffic patterns offered to the router.

1 Introduction

Emerging services such as Voice over IP, audio/video streaming, videoconferencing and Virtual Private Networks (VPNs) with Quality of Service have more stringent QoS requirements (low latency and losses, large throughput) than traditional best effort applications (e-mail, http, file transfer). This fact has led to the deployment of network architectures (e.g. DiffServ [1, 2]) which provide basic support for QoS oriented applications. Moreover, a considerable effort has been dedicated over recent years to the performance evaluation and optimization of operational IP networks: the so-called Traffic Engineering [3]. In this context, the need for optimal resource utilization and traffic performance entails the adoption of dynamic allocation techniques and admission control mechanisms [4].

The focus of this paper is on the design and implementation of a flexible real-time measurement system to be used for management purposes and to be integrated into the network control plane to enable dynamic resource allocation, admission control and traffic engineering on the basis of traffic measurements and predictions. The flexible design philosophy takes into account the following constraints:

• overall system modularity, provided by socket interconnection among the different modules/functionalities
• use of open source code and tools
• standards compliance
• interoperability with commercial routers

and makes the overall product readily portable to different network contexts. The complete set of functionalities provided by the measurement system is accessible via TCP/IP sockets to enable the use of a remote centralized management system. The overall architecture of the measurement system adheres to the client-server paradigm. Such a design choice guarantees a high level of flexibility, in that the server acts as a unified platform and may be installed on top of any arbitrary number of edge routers, while one or many client instances can be hosted by different (even remote and mobile) machines and may be used for totally independent purposes.


The end user of the system could be either a Domain Administrator, who monitors/manages the network, or any arbitrary control plane function (such as an Admission Control algorithm, a Resource Allocation module, etc.). In any case, the actual core of the system, along with its complexity, is totally hidden from the end user, who interacts with the system through the client only.

The rest of the paper is organized as follows. Section 2 describes the main requirements that a measurement system should meet in order to enable management and control plane functionalities. Section 3 describes the overall system architecture together with the modules and functionalities implemented on both the server and the client sides of the system. Section 4 reports the preliminary experimental tests and measurements of the performance of the system, while final remarks are given in Section 5, which concludes the paper.

2 System Requirements

A measurement system that is to be integrated into the control plane of a fairly general network context needs to meet a number of common design requirements that are not strictly dependent on the underlying network architecture. According to the client-server architecture of the measurement system, the following requirements have to be addressed in the project development.

• Server-side requirements:
  – Measurement of the traffic offered to the edge routers of a DiffServ/MPLS domain, taken over arbitrary and configurable time windows in the past. This information should be available at any time upon request.
  – Low latency introduced by the system itself, to reduce the unavoidable impact of measurements on network performance.
  – Sampling of network traffic with accurate timing and low sampling jitter, with no use of busy waiting.
  – Timestamping of traffic samples in order to enable time series processing, including prediction.
  – Data storage of per-flow traffic time series in a proper database.
  – Remote access to the measurement system in order to enable remote monitoring and management.
  – Multiple client support in order to enable simultaneous operations, such as traffic monitoring, estimation and prediction.
  – Quick delivery of information to the client(s), to prevent high processing delays from impairing the relevance of measurements.
  – Interworking with other router subsystems such as routing, MPLS, the RSVP-TE daemon, etc.

• Client-side requirements:
  – Estimation. In order to avoid unnecessary CPU load on the border router, the computation of the estimates must be performed on the client side.
  – Threshold crossing detection. When the traffic load of a flow exceeds a preconfigured level, the network control plane must react accordingly. The occurrence of a threshold crossing event must be notified as soon as possible to the control plane.

It is worth elaborating on the need to register the load condition of the edge router in a database. Indeed, the system development would be much easier with a trivial measurement server which simply responds to queries from a remote client on the basis of the output of a suitable measurement subsystem. This approach would reduce the implementation time at the cost of reduced flexibility and versatility, and the project would suffer from several limitations, such as:


• coarse accuracy of the sampling rate, due to the latency introduced by the request traveling over the network, which adds to the many other variable delay factors;
• weak time series analysis/processing capability: typical operations may need several minutes' worth of traffic data, which would not be available at the time of the request;
• the impossibility, for a network administrator, of obtaining the traffic load conditions at arbitrary times.

The use of a local database effectively overcomes the issues listed above; the price of this approach is, in turn, a considerable increase in design and development complexity.

3 System Architecture: Modules and Functionalities

In this section, the whole system architecture is presented, with particular focus on the description of the modules, their functionalities and their interworking. Figure 1 shows the overall architecture of the system with the implemented software modules. In the following, the operation of each module is described in terms of design, implementation and functionality.

[Figure 1: Measurement system architecture. The border router, carrying data from the access network(s) to the DS-MPLS domain, hosts the METER, the METERCONTROLLER, the Traffic DB and the TCP server; the client host runs the TCP client with its CIM XML interface and the Client DB.]

3.1 Meter

The most obvious requirement of a measurement system is, naturally, to monitor and register the traffic offered to the host on which it is installed (the edge router, in this case). In particular, what is required is the measurement of the total amount of traffic (in bytes or packets) offered to the router over time windows starting at given (but configurable) sample times and with customizable duration in the past. More formally, let A(x, y) be the amount of traffic offered to the router between the time instants x and y (x < y), let τ be the window length and T the sampling interval. Then, the output of the measurement system should be the sequence X(nT) = A(nT − τ, nT). As will become clear in the following, the system is able to provide more refined information, in that it produces a matrix of measurements X_{i,j}(nT) = A_{i,j}(nT − τ, nT), where i and j are the PHB and LSP indexes, respectively. In order to keep the system as "light" as possible, its core has been implemented as a kernel module. The module, denoted as meter, intercepts and registers traffic data within the processing chain of an IP packet in the Linux router. As shown in figure 2(a), the meter is placed in series to:


[Figure 2: Meter module location and mechanism. (a) Meter module location: the meter is placed in series with the ingress QDISC (set tcindex) and the mangle netfilter hook (set fwmark), before the routing subsystem. (b) Operational mechanism and traffic report of the Meter module: per-packet (bytes, timestamp) records collected over the time window in kernel space are reported to user space through the /proc filesystem upon each sampling event.]

1. The ingress queueing discipline block, devoted to marking each packet with a parameter, the so-called tcindex, that specifies its PHB.

2. The mangle table of the PREROUTING hook of Linux netfilter, devoted to marking the fwmark parameter, which specifies both the output physical interface (in other words, the MPLS virtual interface) on which the packet will be forwarded and the label that will be written into the shim header.

From the operational point of view, once launched, the meter adds an entry to the /proc filesystem of the host machine and immediately starts observing traffic, collecting the fwmark, the tcindex and the size of each intercepted packet. From user space it is possible, at any time, to perform a measurement through the /proc filesystem by using the common system calls that access ordinary files. Specifically, the virtual file of interest is /proc/meter/meter_timeinterval. The meter responds to the query with a report which specifies, for each fwmark and tcindex (i.e. LSP and PHB), the sampling time, the amount of traffic and the number of packets observed within the window horizon (figure 2(b)). Several extra control parameters are accessible through the /proc filesystem, such as the window length currently in use, the number of measured packets and the latency that the overall measurement system introduces, with a precision of about one CPU clock cycle.

As a concluding remark, it should be clear that the meter module has been designed to meet two basic requirements: collection of traffic data and timestamping. The very limited functionality implemented in kernel space should not be surprising: it is a wise convention, indeed, to avoid as far as possible modifications to such a crucial component of the system as the kernel. All the features that will be described in the next sections are implemented in user space and belong to a single executable application, named metercontroller.
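As a concrete illustration, a measurement can be taken from user space with nothing more than ordinary file I/O on the /proc entry. The following minimal sketch assumes the entry name given above and a plain-text report; the layout shown in the comments is an assumption for illustration, not the actual format.

/* Minimal sketch: querying the meter from user space through /proc.
 * The report layout described below is hypothetical. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Opening and reading the virtual file triggers a measurement
     * in the kernel module. */
    FILE *fp = fopen("/proc/meter/meter_timeinterval", "r");
    if (fp == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    /* Assumed layout: one line per (fwmark, tcindex) pair, e.g.
     * "<timestamp> <fwmark> <tcindex> <bytes> <packets>". */
    char line[256];
    while (fgets(line, sizeof(line), fp) != NULL)
        fputs(line, stdout);

    fclose(fp);
    return EXIT_SUCCESS;
}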

3.2 Metercontroller

As shown in the previous section, the meter module provides traffic measurements for each pair (fwmark,tcindex) with no information on per-LSP and per-PHB traffic load. Although fairly sophisticated, this information is not directly usable by network administrators or automated modules of the network control plane which, instead, need information on per-LPSs and per-PHBs traffic load. As well known, LSPs are identified by means of a LSP-ID, unique within the domain and known by the control plane through RSVPTE.


The metercontroller has been implemented to sort out the issue of mapping the fwmark parameter onto the proper LSP-ID. The translation of tcindex into a PHB identifier is, instead, straightforward, as the correspondence is static; this issue has thus been solved simply by means of a table. The information needed to perform the above translation is retrieved from several system elements, some of them belonging to the kernel (such as the routing subsystem and the MPLS subsystem), others running in user space (such as the RSVP-TE daemon).

Another important task of the metercontroller application is to query the meter at a given sampling frequency and to report the results to the traffic database, from now on referred to as TrafficDB (see figure 1). The sampling operation is performed in user space without busy waiting, thus minimizing the overall CPU load of the application. The traffic database consists of a list of traffic arrays indexed by LSP-ID. Each array is implemented as a round robin database, that is, a fixed-length array of HISTORY_LENGTH entries with a pointer to the oldest sample. By replacing the oldest element with the newest one and incrementing the pointer modulo HISTORY_LENGTH, an array holding the most recent HISTORY_LENGTH traffic samples (in other words, a traffic time series for each LSP) is obtained.

In addition to registering data in the TrafficDB, the metercontroller is in charge of sending traffic information to the clients connected to the system, by writing to the file descriptors which logically represent them. Naturally, clients may be interested in different data: they may want to retrieve the whole database content, the information associated with some LSP, only the very last traffic sample, and so forth. As shown in figure 1, the metercontroller makes these operations possible through a multi-client TCP server which allows external entities to access the measured data. In order to present the acquired data, the application has to manage a number of tasks, which include:

• remote logon support;
• access control through authentication;
• simultaneous connection of multiple clients;
• remote management of the system job parameters.
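For illustration, the following is a minimal sketch of the per-LSP round robin array described above. The field and constant names are assumptions made for the example, not the actual implementation.

/* Sketch of a TrafficDB entry: a fixed-length round robin history per LSP. */
#define HISTORY_LENGTH 1024

struct traffic_sample {
    unsigned long long timestamp_us;  /* sample timestamp */
    unsigned long long bytes;         /* bytes observed in the window */
    unsigned long long packets;       /* packets observed in the window */
};

struct lsp_history {
    unsigned int lsp_id;
    unsigned int oldest;                         /* index of the oldest sample */
    struct traffic_sample ring[HISTORY_LENGTH];  /* fixed-length history */
};

/* Overwrite the oldest sample with the newest one and advance the pointer,
 * so the array always holds the most recent HISTORY_LENGTH samples. */
void history_push(struct lsp_history *h, const struct traffic_sample *s)
{
    h->ring[h->oldest] = *s;
    h->oldest = (h->oldest + 1) % HISTORY_LENGTH;
}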

3.3 Client

The Client application, implemented in a multithreaded fashion, is able to retrieve traffic information concurrently from multiple border routers. Each thread communicating with an edge router is named a Client Interface, and the entire application is named the CIM (Client Interface Manager).

3.3.1 Client Interface

The Client Interface module, shown in figure 3(a), is the building block of the entire client application and is implemented as a multithreaded TCP client responsible for the communication with the measurement system of a single edge router. At handshake time, which takes place on the CTRL Socket, it negotiates the SNAP Socket and the RT Socket and then acquires the complete start-up information on installed LSPs, measurement window, sampling rate, etc. After this phase, it spawns two more threads, named SNAP and RT in figure 3(a), while the first thread is indicated as XML in the same figure. The RT Socket and the SNAP Socket are connected to the RT thread and the SNAP thread respectively; each thread asynchronously receives information from its own socket and writes it to the CLIENT_DB using the DB_ACCESS_API (a set of functions which manage the concurrent access to the CLIENT_DB). After the handshake phase, the XML thread accepts XML messages from the Client Interface Manager (see below). After inspecting each message, the module decides the appropriate operation: if the message is addressed to the server, it is simply routed to the server via the CTRL Socket; otherwise, all messages addressed to the XML thread are validated and processed by the thread itself. By issuing an appropriate sequence of XML commands to the server, it is possible to copy an arbitrary portion of the TrafficDB (stored in the server) to the CLIENT_DB, via the SNAP Socket; the


[Figure 3: The Client Interface. (a) Client Interface architecture: the SNAP, RT and XML threads access the CLIENT_DB through the DB_ACCESS_API; the database can also be dumped to a file. (b) Client database basic structure: LSP_ID, upper and lower thresholds, the measured traffic vector and a list of estimators, each with its own upper/lower thresholds and estimated traffic vector.]

CLIENT_DB is updated in real time through the information sent, via the RT Socket, by the metercontroller server. The next paragraphs detail how the CLIENT_DB and the DB_ACCESS_API are designed to support real time monitoring, admission control, resource allocation and traffic engineering.

The CLIENT_DB. The basic data structure used to build the CLIENT_DB is shown in figure 3(b). Such a structure stores the following information:

• LSP_ID: the number which uniquely identifies LSPs in the domain.
• Upper and Lower threshold: while the measured traffic activity stays between these two values, no alarm is raised; when the measured traffic either exceeds the upper threshold or drops below the lower one, an alarm message is immediately sent to the user.
• Measured Traffic Vector: contains the traffic samples collected by the measurement system for the monitored LSP and for every PHB therein.
• Estimators List: a list of structures which include:
  – Estimator, a pointer to a function performing the estimation.
  – Upper and Lower threshold: while the estimated/predicted traffic activity stays between these two values, no alarm is raised; when the estimated traffic either exceeds the upper threshold or drops below the lower one, an alarm message is immediately sent to the user.
  – Estimated Traffic Vector: contains the traffic samples produced by the estimation/prediction functions for the monitored LSP and for every PHB therein.

The CLIENT_DB data structure allows an arbitrary number of estimation functions to be attached to each of the monitored data flows. Moreover, appropriate thresholds on both the measured and estimated traffic vectors can be configured for the system to react upon the occurrence of predefined events. Thus, the data structure is able to support:

• Measurement-based Admission Control
• Prediction/Estimation-based Admission Control
• Multiple (alternative) estimation blocks
• Domain-wide "traffic engineered" decision-making
• Remote and (potentially) mobile domain monitoring
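As an illustration, the structure of figure 3(b) could be rendered roughly as follows; the field names, the vector size and the estimator signature are assumptions made for the example and do not come from the actual source code.

/* Sketch of the CLIENT_DB basic structure of figure 3(b). */
#define MAX_SAMPLES 1024

struct estimator_entry {
    /* pointer to a function performing the estimation/prediction */
    double (*estimate)(const double *samples, unsigned int n);
    double upper_threshold;
    double lower_threshold;
    double estimated[MAX_SAMPLES];      /* estimated traffic vector */
    struct estimator_entry *next;       /* estimators list */
};

struct clientdb_entry {
    unsigned int lsp_id;                /* unique LSP identifier in the domain */
    double upper_threshold;             /* alarm if measurements exceed it */
    double lower_threshold;             /* alarm if measurements drop below it */
    double measured[MAX_SAMPLES];       /* measured traffic vector */
    struct estimator_entry *estimators; /* list of attached estimators */
};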


[Figure 4: The Client Interface Manager and Client application. (a) Client Interface Manager architecture: the CIM coordinates the Client Interfaces, the threshold crossing event queue, the list of available estimator plugins (loaded from a plugin directory) and the XML user interface towards the User Applications (e.g. Admission Control, Resource Allocation, Real Time Monitoring). (b) Client application architecture: one Client Interface per border router (BR 1 ... BR N) of the MPLS DiffServ domain, each with its own SNAP, RT and XML threads and its own CLIENT_DB accessed through the DB_ACCESS_API.]

The DB_ACCESS_API. The DB_ACCESS_API is a set of functions acting as a front-end between the threads receiving data from the server and the local traffic database. When a thread (either RT or SNAP) receives data, the DB_ACCESS_API takes the following actions:

• store the received traffic samples in the appropriate measured vector;
• check whether a threshold crossing event occurred in the measured vector and (optionally) raise an alarm;
• update all the estimated vectors attached to the LSPs;
• check for threshold crossing events in the estimated vectors and (optionally) raise an alarm;
• write the local traffic database to a file, making it available to every application running on the host computer. This action is highly configurable, since the API can be instructed to transfer data to various file types or to a SQL database (by means of a SQL transaction).

These operations are performed safely, i.e. by making sure that no other thread can access the database at the same time. Furthermore, each of these mechanisms can be enabled or disabled by issuing a few XML commands, which makes the whole system extremely flexible and efficient.

3.3.2 The Client Interface Manager

The Client Interface Manager (from now on, CIM) is the application which manages the Client Interfaces connected to the Edge Routers of the associated domain (as shown in figure 4(b)); the CIM architecture is shown in figure 4(a), and its role is to coordinate all modules, mechanisms and events on the client side. The CIM is implemented as a TCP server accepting connections from a local client (User Applications in figure 4(b)) using an XML protocol; this server acts as a user interface and is named the XML User Interface (figure 4(a)). A plugin architecture has been implemented with the aim of simplifying the development and use of further estimation functions. At start-up time, the application scans a specific directory and loads all the available estimation plugins, building a list that is made available to the user (see figure 4(a)). By means of the XML User Interface, a User Application (see figure 4(a)) can easily make the CIM start/stop a Client Interface (thus starting/stopping the collection of information from the corresponding edge router), attach or remove an estimator from an LSP, enable/disable a threshold for an LSP, and so forth.
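The paper does not detail the plugin loading mechanism; on Linux a natural realization of the start-up scan described above is dlopen-based shared objects, sketched below under that assumption. The directory name and the "estimate" symbol are hypothetical.

/* Hedged sketch of the estimator plugin scan at CIM start-up.
 * Compile with -ldl. Not the authors' actual loading code. */
#include <dirent.h>
#include <dlfcn.h>
#include <stdio.h>
#include <string.h>

typedef double (*estimator_fn)(const double *samples, unsigned int n);

static void load_plugins(const char *dir_path)
{
    DIR *dir = opendir(dir_path);
    if (dir == NULL)
        return;

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strstr(entry->d_name, ".so") == NULL)
            continue;                       /* consider shared objects only */

        char path[512];
        snprintf(path, sizeof(path), "%s/%s", dir_path, entry->d_name);

        void *handle = dlopen(path, RTLD_NOW);
        if (handle == NULL)
            continue;

        /* hypothetical entry point exported by every estimator plugin */
        estimator_fn fn = (estimator_fn)dlsym(handle, "estimate");
        if (fn != NULL)
            printf("loaded estimator plugin: %s\n", entry->d_name);
        else
            dlclose(handle);
    }
    closedir(dir);
}

int main(void)
{
    load_plugins("./plugins");   /* hypothetical plugin directory */
    return 0;
}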


[Figure 5: Testbed used for the experimental analysis. PC1 (192.168.3.1) hosts the traffic generators (brute, rude); PC2 (192.168.3.5) hosts the METER; the measured trace is compared against the actual trace captured by TCPDUMP.]

Furthermore, the CIM makes available a data structure (the Threshold Crossing Event Queue in figure 4(a)) in which each Client Interface can enqueue a record whenever a threshold crossing event occurs. Such a record provides information such as the LSP associated with the event, the type of threshold crossing (upper or lower), whether the event occurred in the measured vector or in one of the estimated ones, etc. Every time a record is enqueued, the CIM immediately sends an XML message to the user so that it may take the proper counter-action. When the user interface sends an XML message, it can be addressed either to the CIM or to one of the active Client Interfaces. In the former case, the CIM itself will validate and process the message, while in the latter it will forward the message to the appropriate Client Interface which, in turn, will perform the operations described in section 3.3.1. All traffic data coming from the various active Client Interfaces is dumped to one or more files and is available to every application running on the same host where the CIM is active. The adoption of such an architecture gives complete knowledge of the traffic load offered to the domain; moreover, this information can be retrieved from any network element running a CIM instance.
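Schematically, the record enqueued on the Threshold Crossing Event Queue might look as follows; the field names are purely illustrative and are not taken from the actual implementation.

/* Sketch of a threshold crossing event record enqueued by a Client Interface. */
enum threshold_type { UPPER_THRESHOLD, LOWER_THRESHOLD };

struct tc_event {
    unsigned int lsp_id;             /* LSP associated with the event */
    enum threshold_type type;        /* upper or lower threshold crossing */
    int estimator_index;             /* -1: measured vector, >=0: estimated vector */
    unsigned long long timestamp_us; /* when the crossing was detected */
    struct tc_event *next;           /* event queue linkage */
};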

3.4 The User Application

This module represents the last step towards the complete support of any control plane activity based on the knowledge of the traffic load offered to a DiffServ/MPLS domain by the access networks. Its structure is not defined here because it is totally generic: the module only has to be able to send and receive XML commands via a TCP/IP socket and to access and parse a file in order to read traffic data. As figure 4(b) suggests, the User Application can be an Admission Control application, a Resource Allocation module, a monitoring/management Graphical User Interface (GUI), and so on.

4 Test and Measurement

The most critical point of the described measurement system is the sampling module integrated within the metercontroller application; in particular, the accuracy of the traffic data is heavily influenced by its performance. Indeed, since this module is entirely implemented in software, without busy waiting and without high-priority processes, it is heavily penalized by the limited accuracy of the library function nanosleep. This results in an accuracy error which increases with the workload of the kernel of the router hosting the measurement system. To get an idea of the nanosleep inaccuracy and to develop adequate counter-strategies, we carried out a set of measurement tests. The testbed used to test the system prototype is shown in figure 5; it is composed of two PCs connected by means of a Fast Ethernet switch. Each PC is equipped with a Fast Ethernet NIC based on the Realtek 8139 chipset.
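As a side note, the kind of inaccuracy discussed above can be observed with a few lines of code. The sketch below (not the authors' measurement tool, just an illustration) requests a fixed sleep interval and prints the overshoot measured with clock_gettime.

/* Stand-alone sketch: observe the nanosleep() overshoot on an idle or loaded host. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long requested_ns = 10 * 1000 * 1000;   /* 10 ms sampling period */
    struct timespec req = { .tv_sec = 0, .tv_nsec = requested_ns };
    struct timespec before, after;

    for (int i = 0; i < 10; i++) {
        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);

        long long elapsed_ns =
            (after.tv_sec - before.tv_sec) * 1000000000LL +
            (after.tv_nsec - before.tv_nsec);
        printf("requested %ld ns, slept %lld ns, overshoot %lld ns\n",
               requested_ns, elapsed_ns, elapsed_ns - requested_ns);
    }
    return 0;
}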


The PCs have the following hardware features:

• PC1: AMD Athlon XP 2000+, 512 MB RAM, O.S. Slackware Linux 9.0 [5], kernel 2.4.20 (MPLS patch), network address 192.168.3.1
• PC2: AMD K6-II 400 MHz, 196 MB RAM, O.S. RedHat Linux 8, kernel 2.4.20 (MPLS patch), network address 192.168.3.5

The traffic loading the measurement system has been generated by means of the software traffic generator rude [6], version 1.62, which generates packets according to a file containing the interdeparture time and the size of each packet to transmit. In this way, VBR traffic is generated by reproducing traffic data acquired with software sniffers such as tcpdump [7]. Moreover, CBR traffic has been generated using the application brute [8]. The experimental tests have been carried out by varying the following parameters:

• the bitrate, in order to verify the accuracy of the meter module under different traffic load conditions;
• the packet size, in order to test the timing system integrated into the metercontroller under different interrupt request frequencies.

It is worth recalling that, for a given bitrate, the shorter the packet size, the higher the packet rate required.
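To give a rough quantitative idea (illustrative numbers, not taken from the experiments): the packet rate equals the bitrate divided by eight times the packet size, so a 100 Mbit/s flow corresponds to about 8300 pps with 1500-byte packets but to more than 195000 pps with 64-byte packets.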

4.1 Meter performance

The implementation of the meter module as a Linux netfilter hook makes it handle all the packets transferred from the NIC to the IP layer of the kernel. It follows that the module does not drop any packet, since every packet that enters the kernel is normally processed by every netfilter hook. Conversely, if the kernel does not accept a packet, the module cannot "see" it and the information on that packet is lost. It is also worth considering the case of a packet that is accepted by the kernel, measured by the meter and then discarded by the output scheduler. This case does not concern the proposed system, since it is designed to measure only the traffic offered to the router, regardless of whether it is then successfully forwarded into the MPLS domain.

In our tests, we first address the evaluation of the latency added by the meter module to a single packet. To quantify the latency, a fairly objective performance parameter is the number of clock cycles that a packet spends within the module. This number is largely independent of the hardware features of the equipment hosting the measurement system, as long as the O.S. kernel can manage interrupts at a frequency that allows the CPU to complete packet processing without preemption. Indeed, consider the case of a packet arriving at the NIC before the kernel has completed the processing of the previous packet: the routine managing the interrupt stops the active task, which resumes only when the routine has completed. As a consequence, the permanence time of a packet in a hook (expressed in clock cycles) can be significantly larger than the one commonly needed.

Figure 6(a) shows this phenomenon as observed in the meter module installed in the router PC2. The figure has been obtained by carrying out several experiments with CBR traffic at different packet rates and by averaging the delay over the number of received packets; in addition, it reports the percentage of CPU occupation observed at the different packet rates. The experiment duration was 30 seconds. Figure 6(a) also shows that the delay is nearly constant as long as the CPU occupation is below 100%, while beyond that point (about 40000 packets per second, pps, in this specific case) the module propagation delay grows. Indeed, at full load the processor cannot effectively handle all the interrupts, and the ordinary processing of packets at the data plane level (forwarding, policing, marking, etc.) is dramatically impaired. An analogous phenomenon occurs even with more powerful processors: naturally, the threshold beyond which the delay increases is higher. The use of dynamic memory in the implementation of the meter does not modify the qualitative behavior described above: the latency, though, is more than ten times higher.

To summarize, when the router operates in normal conditions, the processing time of packets due to the meter module is of the order of hundreds of clock cycles. This results in a negligible latency which, moreover, decreases as the router CPU capacity increases.
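The paper states that the latency is reported with a precision of about one clock cycle; on x86 hardware a natural way to obtain such a figure is the time stamp counter. The following stand-alone sketch is an assumption for illustration, not the authors' instrumentation: it times a placeholder per-packet routine with rdtsc.

/* x86-only sketch: measure a code section in clock cycles with the TSC (GCC inline asm). */
#include <stdint.h>
#include <stdio.h>

static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* placeholder for the per-packet work done inside the hook */
static void process_packet(void) { /* ... account bytes and timestamp ... */ }

int main(void)
{
    uint64_t start = read_tsc();
    process_packet();
    uint64_t cycles = read_tsc() - start;
    printf("permanence time: %llu clock cycles\n", (unsigned long long)cycles);
    return 0;
}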


[Figure 6: System performances. (a) Permanence time of the packet in the meter (clock cycles) and CPU usage versus packet rate (pps). (b) Comparison with tcpdump on a videoconference session (bytes versus seconds). (c), (d) Comparison with tcpdump, videoconference session: 100-second details. (e) Comparison with tcpdump on traffic measurements, FTP traffic.]


4.2 Overall measurement server performance

From a higher-level point of view, the performance of the whole server was first assessed by using it to monitor the traffic flow of a videoconference session. Figures 6(b), 6(c) and 6(d) report the sequence of samples taken on-line by the system together with the ideal sequence of samples obtained by post-processing data acquired with tcpdump. The results confirm the ability of the measurement system to closely capture the actual traffic dynamics even in the case of very quick changes in the traffic pattern, as shown by figures 6(c) and 6(d), which zoom in on 100-second slices of the overall trace of figure 6(b). Analogous comments apply to figure 6(e), which shows the result of a similar test carried out to monitor the traffic produced by a file transfer session.

5 Conclusion

The paper has described in detail the architecture of a flexible real time measurement system designed to reside in DiffServ/MPLS Linux-based routers. The goal of this research was to develop an open source software product to be used for management purposes and to be integrated into the network control plane to enable resource allocation, Admission Control and Traffic Engineering functionalities. The system is designed to be configurable and modular and to minimize the impact of the system itself on the traffic dynamics. At a configurable sampling frequency, the system produces a sequence of per-PHB and per-LSP traffic measurements taken over a customizable time window. The system has passed several functional tests, and preliminary experimental results show that it accurately captures the traffic patterns offered to the router.

6 Acknowledgment

This work was partly supported by the FIRB – MIUR projects TANGO and VICOM.

References

[1] S. Blake et al., "An Architecture for Differentiated Services", IETF RFC 2475, December 1998.
[2] B. E. Carpenter, K. Nichols, "Differentiated services in the Internet", Proceedings of the IEEE, vol. 90, no. 9, September 2002.
[3] D. Awduche et al., "Overview and Principles of Internet Traffic Engineering", IETF RFC 3272, May 2002.
[4] H. L. Lu, I. Faynberg, "An architectural framework for support of quality of service in packet networks", IEEE Communications Magazine, June 2003.
[5] The Slackware Linux Project – http://www.slackware.com/
[6] Rude & Crude – http://rude.sourceforge.net
[7] Tcpdump public repository – http://www.tcpdump.org/
[8] BRUTE (Brawny and Rough UDP Traffic Engine) – http://netgroup-serv.iet.inipi.it/brute
