Exploiting Unicast vs. Multicast Delivery Tradeoffs to Improve the Latency Performance for IPTV Channel Change

Aytac Azgin and Yucel Altunbasak
School of Electrical and Computer Engineering, Georgia Institute of Technology

Abstract—In IPTV networks, channel change latency represents a major obstacle to achieving broadcast-level video delivery quality. Since the content for channels other than the one currently viewed is not readily available at the client side, each request for such content must go through the network, leading to significant, and oftentimes unacceptable, delays. Various approaches have been proposed to minimize this latency; they typically require dedicated servers near the access network to speed up the delivery of channel change packets. In this paper, we use such a server-driven framework to develop a joint unicast- and multicast-based channel change protocol that relies on (i) session-dependent multicast channel change streams to deliver the key-frame content to zapping clients, and (ii) a dedicated unicast streaming server, referred to as the Channel Switch Coordinator (CSC), to deliver the channel change packets corresponding to the non-key-frames. In our framework, the CSC server is also responsible for coordinating the received channel change requests and optimally allocating system resources for each received request.

I. Introduction

Channel change latency is of critical importance in evaluating the end-to-end performance of an IPTV network. Hence, reducing channel change latency represents a major design goal for IPTV service providers seeking to deliver broadcast-quality content to IPTV clients [1]. To achieve this objective, various approaches have been proposed (for further details, see [2]). The majority of the approaches proposed to reduce channel change latency are server-driven, as they require the use of dedicated servers to deliver the channel change packets. In general, we can categorize these approaches as multicast-driven [3] or unicast-driven [4] techniques. For the first case, dedicated servers are used to create time-shifted multicast channel change streams that replicate the content delivered over the source multicast. By delivering these shifted replicas over different multicast sessions, clients can initiate the decoding process earlier than the actual decoding time associated with the source multicast. That is because, without time-shifted multicast streams, clients who miss the delivery of key-frame packets from the source multicast need to wait for the delivery of the next key-frame through the source multicast. Additional streams therefore ensure earlier access to channel change content for clients whose requests arrive late. In short, by utilizing additional channel change streams, we can significantly reduce the overall channel change latency.

For the second case, dedicated servers are used to respond to each channel change request individually, creating a distinct unicast stream for each received request and allowing the client to initiate the decoding process earlier than if it waited for the delivery of the same content over the source multicast.¹ So far, latency performance results for each of these server-driven techniques suggest significant potential for achieving the desired service quality levels. However, because of their inherently inefficient bandwidth usage, these approaches can support the desired service quality levels only for a limited set of scenarios. Specifically, for the unicast-based approach, the dedicated server is usually expected to deliver the channel change content for a few seconds before the channel switching user synchronizes with the source stream. To minimize the synchronization latency associated with the join event, the dedicated server needs to allocate more instantaneous resources to each received request, thereby significantly limiting the number of requests that can be served simultaneously within a short timeframe. As a result, the success rate of the fast channel change process can quickly degrade as the system starts to carry a larger number of clients and/or experiences more frequent channel change requests. We observe similar performance limitations for the multicast-based approach. Specifically, for the multicast-based scenarios, bandwidth limitations at the multicast streaming server limit the number of active multicast streams that the server can create within a short timeframe. To achieve the desired latency performance, we may need to increase the minimum number of active channel change streams beyond the servicing capacity of the dedicated server, resulting in further performance degradation.
To sum up, the effectiveness of these approaches strictly depends on the number of clients that can be supported under the given fast channel change policies. To make the best use of these approaches, we need more scalable solutions. In this paper, we address these problems by developing a unified channel change framework that combines the strengths of both approaches to achieve the optimal tradeoffs between channel change latency and resource utilization. To achieve our objective, we dynamically allocate resources to unicast and multicast channel change streams depending on the request arrival rate and resource availability. Within the given framework, multicast streams are utilized for the delivery of key-frame packets and unicast streams are utilized for the delivery of non-key-frame packets.

The rest of the paper is organized as follows. In Section II, we present our system model. We explain the operational details of our channel change framework in Section III. In Section IV, we explain in detail the approaches we use to determine the channel change parameters. We summarize and discuss potential uses for the proposed framework in Section V.

¹ Note that, to limit the amount of data transmitted from the channel change server, transmission rates for the unicast streams are selected to be higher than the rate of the source multicast. Hence, the channel change overhead at the dedicated server strictly depends on the bandwidth availability at the client side: the earlier the client can join the source multicast, the lower the overhead.

II. System Model

We illustrate our system model in Figure 1. The proposed channel change framework has three main components: two dedicated servers and a multicast control (MC) channel. The first server, referred to as the Channel Switch Coordinator (CSC), is a unicast streaming server used to deliver non-key-frame packets (i.e., P- and B-frame packets), whereas the second server, referred to as the Multicast Channel Switch (MCS) server, is a multicast streaming server used to deliver key-frame packets (i.e., I-frame packets) to IPTV clients.²

[Figure 1 depicts the framework components and their interactions: the Channel Switch Coordinator (unicast data and AL-FEC, unicast control), the Multicast Channel Switch server (multicast data), the multicast control channel, the multicast proxy server (IGMP control), the IPTV multicast, and the zapping user issuing channel change requests and control messages.]

Fig. 1. Proposed peer supported fast channel change framework.

The reasons for implementing the diversity-driven framework shown in Figure 1 can be explained as follows:

• In multimedia networks, the decoding process for a delivered stream can only initiate after the client successfully receives the key-frame packets from the content delivery server. Since key-frames are intra-coded, the information they carry cannot be accurately predicted from previously received frames. Because a key-frame is used as a reference frame for the subsequent frames within the same group of pictures (GOP) sequence, any channel change request that requires additional packets to be transmitted has to be first responded with the delivery of key-frame packets. Furthermore, since key-frames are typically encoded at a higher rate than the other frames within the same GOP sequence, multicast streams are typically preferred over unicast streams for delivering key-frame packets, improving resource usage efficiency at the channel change servers.

• To achieve resource-efficient delivery of the non-key-frame packets, unicast delivery techniques are typically preferred over multicast-based techniques. Since the number of non-key-frame packets delivered during the channel change process typically differs from one user's request to another's, delivering these packets over multicast streams leads to inefficient resource utilization and is therefore undesirable. Instead, we use a unicast-based transmission strategy for the non-key-frame packets. In doing so, it also becomes possible to skip the delivery of less important frames (e.g., B-frames) with an acceptable level of degradation in the perceived QoE.

To effectively support the channel change process, the CSC server generates a control channel, referred to as the MC channel, to deliver regular updates on the active multicast channel change (MCC) streams to the IPTV clients. The IPTV clients use the information delivered over the MC channel to determine the key-frame delivery times for the sessions supported by the fast channel change process.³

² Even though the CSC and the MCS servers are considered separate entities in Figure 1, in general the two servers can be considered a single entity.

III. Channel Change Protocol
Assume that the set of active MCC streams has already been determined by the MCS server and that the information on each of them (i.e., session ID, delivery timing, transmission rates) has been forwarded to the CSC server and the IPTV clients. Let us assume that a client u_ν makes a channel change request from its currently viewed session Sess_i to session Sess_j. We assume the request carries the following information: the session ID for the current session, the session ID for the targeted session, the bandwidth availability at the client side, and the local time of the request.⁴ After receiving the channel change request, the CSC server needs to determine the following parameters: the MCC stream to deliver the key-frame packets to u_ν, the time to join Sess_j's source multicast, the set of packets to be delivered by the CSC server, and the transmission rate to be used by the CSC server to deliver the requested channel change packets.

³ As long as the clients have access to the two most recent MCC update messages, timing information for the MCC streams can be accurately predicted using the messages delivered over the MC channel.
⁴ Using the information carried within the request messages, the CSC server can easily determine the client-perceived decoding deadline for the given request.

After the CSC server receives u_ν's channel change request, it first checks whether there is an active channel change stream for Sess_j. If the CSC server determines that no MCC stream is available to deliver the key-frame packets for Sess_j, then the server calculates the minimum level of support required to deliver the channel change packets before the deadline associated with u_ν's request expires. Note that, in this case, the channel change packets can only be transmitted by the CSC server over a unicast stream. Otherwise, if at least one active MCC stream is available to deliver the key-frame packets, both the unicast and the multicast streaming approaches can be used to deliver the channel change packets. In either case, if the resources available at the server and client sides are not sufficient to deliver the requested packets to u_ν in a timely manner, the request is rejected by the CSC server, and u_ν initiates the channel change process by immediately joining Sess_j's source multicast. On the other hand, if the server accepts the request, it estimates the bandwidth requirements associated with the request and makes the reservation accordingly. The server then sends a request accept message to u_ν with information on the join time for the MCC stream and the synchronization time with Sess_j's source multicast. To improve the latency performance for the given request, the CSC server also starts delivering the channel change packets as soon as it sends the request update message to the zapping client.
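The admission flow above can be sketched as follows. The message fields, types, and the simple capacity check are our illustrative assumptions, not the paper's implementation; a full implementation would also verify that the reserved rate meets the request's decoding deadline.

```python
from dataclasses import dataclass

@dataclass
class MccStream:
    session_id: int
    next_keyframe_time: float  # predicted from MC channel updates
    rate: float

@dataclass
class ChangeRequest:
    current_session: int
    target_session: int
    client_bandwidth: float
    request_time: float
    deadline: float  # client-perceived decoding deadline

def handle_request(req, mcc_streams, server_capacity, reserved):
    """Decide how a channel change request is served at the CSC server."""
    # Is there an active MCC stream carrying Sess_j's key-frames?
    mcc = next((s for s in mcc_streams
                if s.session_id == req.target_session), None)
    available = server_capacity - reserved
    if available <= 0:
        # Insufficient resources: the client falls back to joining the
        # source multicast directly.
        return "reject"
    if mcc is None:
        # No MCC stream: all packets, key-frame included, go over unicast.
        return "unicast-only"
    # Key-frames come from the MCC stream; the rest over unicast.
    return "unicast+multicast"
```

A rejected request costs the server nothing further: the client simply issues an IGMP join for the source multicast and waits for the next key-frame.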

[Figure 2 depicts the timeline: the kth key-frame transmission over [T_k, T_k*], the (k+1)th Request Interval I, the maximum join latency T_J,s, the (k+1)th Request Interval II, the (k+1)th key-frame transmission over [T_k+1, T_k+1*], and the start of the (k+2)th Request Interval I.]
Fig. 2. The relationship between time of request and decision intervals.

In Figure 2 we illustrate the timing for the channel change process when a single MCC stream is available for the delivery of key-frame packets to u_ν. For the given MCC stream, the frame transmission rate is κ_mcc key-frame transmissions per GOP duration. In Figure 2, T_k and T_k* represent the start and finish times for the kth key-frame transmission, and T_J,s represents the maximum latency for joining the MCC stream. The only exception to the above scenario is that, if the user can partially receive the key-frame from the source multicast, the user receives the remainder of the key-frame packets directly from the CSC server. As shown in Figure 2, we divide each key-frame delivery period into two request intervals. Within the proposed framework, a key-frame delivery period corresponds to the timeframe [T_k − T_J,s, T_k+1 − T_J,s], where 1 ≤ k ≤ κ_mcc. We refer to these request intervals as Request Interval I (R_I) and Request Interval II (R_II). Here, R_I corresponds to the interval in which the channel switching client u_ν implements a Wait period before initiating the process to join the MCC stream. On the other hand, if a request is made during R_II, no explicit Wait period is implemented by the client before joining the targeted MCC stream. Also note that, for R_I, the duration of the waiting period depends on various factors, such as the bandwidth availability at the CSC server or the arrival time of the request. We explain this decision process in detail in the next section.

IV. Channel Change Parameter Estimation

The parameter estimation process for the proposed framework proceeds in two phases: a resource allocation phase at the CSC server and a stream generation phase at the MCS server. We first explain the resource allocation process at the CSC server. Then, we explain the method we use to generate the MCC streams. We conclude this section by discussing the possible use of access point buffers to further improve channel switching performance.

A. Resource Allocation Phase at the CSC Server

Minimum channel change latency is achieved when the channel change packets are delivered to the IPTV clients at the downlink capacity, i.e., the maximum allowed transmission rate at the client side. To support this objective, the instantaneous channel change request arrival rate at the CSC server needs to be less than the minimum servicing threshold at the CSC server.⁵ However, since channel change requests oftentimes arrive in bursts (for instance, during globally shared surfing periods), the pseudo-arrival rate for the channel change requests can easily exceed the minimum servicing threshold at the CSC server. When that happens, the CSC server starts to reduce the reserved bandwidth for each admitted request to a value that is typically less than the rate of the source multicast. Due to expectations on the servicing quality, there are a few restrictions on how we can (re-)assign the delivery rates.
Specifically, for each accepted channel change request, we need to satisfy the following inequalities:

∑_{∀ν∈N} W_ν(t) ≤ W_S,max    (1)

W_ν(t) ≤ W_ν,max    (2)

W^(i)_ν,min × L_max(ν, i) ≤ ∫_{T_ν,i} W_ν(t) dt    (3)

where W_ν(t) represents the bandwidth allocated at the CSC server at time t for u_ν's active channel change request(s), W_S,max represents the transmission capacity of the CSC server, W_ν,max represents the downlink capacity at the access network for u_ν, W^(i)_ν,min represents the minimum allowed delivery rate that satisfies the channel change requirements for the ith request by u_ν, L_max(ν, i) represents the maximum allowed synchronization latency for the given request, and T_ν,i represents the guaranteed synchronization timeframe for the given request.⁶

The following procedure explains how resources are allocated at the CSC server after each incoming channel change request:

• Case I: ∆W_S,max + W_ν,max ≤ W_S,max. There are sufficient resources at the CSC server to support all the active channel change requests, including u_ν's request, at the maximum allowed bandwidth (from the client's perspective). Here, ∆W_S,max represents the currently reserved resources at the CSC server, and its value equals ∑_{j∈φ_csc} W_{u_j,max}, where φ_csc represents the active request set at the CSC server. In this case, the CSC server starts delivering the channel change packets to u_ν at the designated rate W_ν,max.⁷

• Case II: ∆W_S,max + W_ν,min > W_S,max and W_S,min + W_ν,min ≤ W_S,max. There are sufficient resources at the CSC server to support the minimum bandwidth requirements for the active channel change requests, including the most recent request from u_ν. However, the resources are not sufficient to deliver the channel change packets at the desired maximum rate (i.e., W_u,max, ∀u ∈ U_φcsc, where U_φcsc represents the set of active channel switching clients). In this case, we distribute the available resources at the CSC server by exploiting the relationship between bandwidth allocation and expected latency. We assume this information to be available at the CSC server before the server initiates the decision process.⁸ Assume that the impact of bandwidth usage on latency performance is evaluated in discrete space for a limited set of bandwidth values, each of which is separated by a constant, which we refer to as δ(ω).

⁵ Specifically, λ_csc ≤ W_csc / E[W_u,max], where λ_csc represents the request arrival rate at the CSC server, W_csc represents the downlink capacity at the CSC server, and E[W_u,max] represents the average downlink bandwidth availability at the client side.
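The three-way case split used in this section (including the rejection case described further below) can be sketched as follows; the function name and argument shapes are ours, not the paper's.

```python
def classify_request_case(reserved_max, reserved_min, server_cap,
                          w_min, w_max):
    """Classify an incoming request into the paper's allocation cases.

    reserved_max: bandwidth currently reserved at the clients' maximum
        rates (Delta W_S,max).
    reserved_min: total of the active requests' minimum rates (our
        reading of the paper's W_S,min).
    server_cap:   the CSC server's transmission capacity, W_S,max.
    (w_min, w_max): the new request's minimum and maximum delivery rates.
    """
    if reserved_max + w_max <= server_cap:
        return "case-I"    # serve everyone at the maximum rate
    if reserved_min + w_min <= server_cap:
        return "case-II"   # feasible at reduced rates; reallocate
    return "case-III"      # infeasible; reject the request
```

In Case I the server simply reserves w_max; Case II triggers the reallocation of Algorithm 1; Case III sends the client straight to the source multicast.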
We present the pseudo-code for the proposed bandwidth reallocation process in Algorithm 1, where U*_φcsc represents the subset of the active channel switching clients for which a decrease of δ(ω) in the reserved bandwidth does not violate the servicing constraints for their requests, and L(W) gives the latency when the bandwidth allocation equals W. Because the delay measures differ between clients and the CSC server, the CSC server uses multiple L(·) functions to take these variations into account. After the CSC server makes the assignments for the initial delivery rates, the next step in our decision framework is to determine the latency-optimal approach to deliver the channel change packets. Let us assume that W*_ν represents the bandwidth allocated for servicing u_ν's channel change request. Depending on the value of W*_ν, there are two possible scenarios:

(i) W_M ≤ W*_ν < W_ν,max → if W*_ν is greater than W_M, then the client can join the source multicast after it receives the key-frame packets from the MCC stream.

⁶ Here, we can dynamically vary the value of L_max to improve the channel change efficiency.
⁷ If the CSC server receives k distinct requests from a single client u, then we set W_u,max to 1/kth of its original value.
⁸ Note that, by gathering information on channel switching clients, the CSC server can create a database of the most likely mappings between latency performance and bandwidth usage.

Algorithm 1 Resource reallocation at the CSC server

  request(i) → u_ν
  W_ν = 0
  while W_ν < W_ν,min do
    u* = min_{∀u_i ∈ U*_φcsc} [L(W_u_i − δ(ω)) − L(W_u_i)]
    W_ν = W_ν + δ(ω) and W_u* = W_u* − δ(ω)
  end while
  W*_ν = W_ν and ∆W_ν = W_ν
  while W_ν < W_ν,max and ∆W_ν > 0 do
    u* = min_{∀u_i ∈ U*_φcsc} [L(W_u_i − δ(ω)) − L(W_u_i)]
    L_u = L(W_u* − δ(ω)) − L(W_u*)
    L_ν = L(W_ν) − L(W_ν + δ(ω))
    if L_ν > L_u and W*_ν + δ(ω) ≤ W_ν,max then
      W_ν = W_ν + δ(ω) and ∆W_ν = δ(ω)
    else
      ∆W_ν = 0
    end if
  end while
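Algorithm 1's two-phase transfer can be sketched in Python as follows. This is our reading of the listing, simplified: a single latency function shared by all clients, per-client minimum rates tracked alongside allocations, and Phase 1 assumed feasible (the Case II precondition).

```python
def reallocate(new_min, new_max, active, latency, delta):
    """Two-phase bandwidth reallocation at the CSC server (sketch).

    active:  maps client id -> (current allocation, minimum allocation).
    latency: maps a bandwidth value to expected channel change latency
             (assumed known at the server, e.g. from a learned mapping).
    delta:   the discrete reallocation step, delta(omega) in the paper.
    Returns the bandwidth granted to the new request.
    """
    def donor_cost(u):
        # Latency increase if client u gives up one step of bandwidth.
        w = active[u][0]
        return latency(w - delta) - latency(w)

    def donors():
        # Clients that can lose a step without violating their minimum.
        return [u for u, (w, w_min) in active.items() if w - delta >= w_min]

    w_new = 0.0
    # Phase 1: bring the new request up to its minimum required rate,
    # always taking from the cheapest donor.
    while w_new < new_min:
        u = min(donors(), key=donor_cost)
        active[u] = (active[u][0] - delta, active[u][1])
        w_new += delta
    # Phase 2: keep transferring while the new request's latency gain
    # exceeds the cheapest donor's latency loss.
    while w_new < new_max:
        cands = donors()
        if not cands:
            break
        u = min(cands, key=donor_cost)
        gain = latency(w_new) - latency(w_new + delta)
        if gain <= donor_cost(u):
            break
        active[u] = (active[u][0] - delta, active[u][1])
        w_new += delta
    return w_new
```

With a convex decreasing latency function, Phase 2 stops exactly when another transfer would hurt the donor more than it helps the new arrival.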

(ii) W*_ν < W_M → If W*_ν is less than W_M, then to achieve the optimal tradeoff between latency performance and resource utilization, the client needs to join the source multicast immediately. However, to support this approach, the delivery rate along the corresponding MCC stream needs to satisfy the following condition: W_mcc < W_ν,max − W_M. In doing so, we can ensure that the client can receive the MCC and the source multicast streams simultaneously without overloading the downlink channel at the access network. There is one exception to the above scenario: if W*_ν + W_mcc > W_M and the client can immediately start receiving packets from the MCC stream (after making its request), then, to catch up with the source multicast earlier, the client waits until the end of the key-frame transmission period along the MCC stream before joining the source multicast. In the next section, we discuss in more detail the approach we use to assign the MCC delivery rates.

• Case III: W_S,min + W_ν,min > W_S,max. There are not enough resources at the CSC server to accommodate the client's channel change request; hence, for the given case, the server rejects the received request.

B. MCC Stream Generation Phase at the MCS Server

The process of generating the MCC streams is a crucial step in achieving the desired tradeoffs between servicing capacity and channel change latency. We can improve the latency performance as we increase the frequency of key-frame transmissions along the MCC streams. If the arrival rate for the channel change requests is sufficiently high, we may also observe significant improvements in channel change overhead. On the other hand, if the arrival rate for the channel change requests is small, then we may observe an increase in channel change overhead with the use of additional MCC streams. To optimize the resource utilization in the network, we need to find the right balance between these two

scenarios. In this section, we have three objectives. First, we need to determine whether or not a given session requires an MCC stream. Second, if an MCC stream is required to deliver key-frame packets for a given session, we need to determine the number of streams that the MCS server needs to generate. Lastly, for each MCC stream that the MCS server generates, we need to determine the corresponding transmission rate. Note that, to increase resource usage efficiency during the channel change process, we need to adapt the number of MCC streams to the session load (i.e., the instantaneous session join rate). We can achieve this by assigning a higher number of streams to sessions with higher join rates, and vice versa. In doing so, we can effectively reduce the channel change overhead while keeping the synchronization latency associated with the channel change process within the targeted range. Figure 3 illustrates a typical relationship between the session join rate and the resource-optimal key-frame transmission rate when we assume a constant inter-frame spacing for the key-frame delivery times. We can use this relationship to determine whether there is any need to generate MCC streams and, if so, the number of streams to generate.⁹

[Figure 3 plots the key-frame transmission rate (y-axis) against the session join rate (x-axis).]

Fig. 3. Relationship between optimal key-frame delivery rate and session join rate.

Therefore, we can state our objective function, which represents the expected overhead when k key-frames are delivered along the MCC streams for a given session, as follows:

Ψ_k(λ) = λ × ψ*_k + k × n_I × l_p    (4)

where λ represents the request arrival rate for the given session, n_I represents the average number of data packets generated for a key-frame, l_p represents the size of a data packet, and ψ*_k represents the average overhead per request when k key-frames are delivered along the MCC streams (where k ≥ 0). Since the bandwidth requirements for the channel change requests are determined by taking into account the available bandwidth information at the client side, we can express ψ*_k as ∑_{∀w∈W_U} p_w ψ_k(w), where W_U represents the set of possible available-bandwidth values at the client side, p_w represents the probability of a client having a bandwidth availability of w, and ψ_k(w)

⁹ Note that we determine the number of MCC streams required for the channel change process using the information on the optimal key-frame transmission rate and the data rate along the MCC streams.

represents the server overhead at the k-key-frame transmission rate when the client bandwidth equals w.¹⁰ In short, to find the resource-optimal parameter k*, we use the following equation:

k* = min_{∀k} Ψ_k(λ) / χ_k    (5)

where χ_k represents the ratio of requests admitted to the CSC server when the MCC streams are used to deliver k key-frames per GOP sequence. Here, the request admission rate allows us to indirectly integrate the impact of unserved fast-channel-change requests into the calculations for the expected bandwidth usage. To improve the accuracy of our decisions, reliable estimates of the time-varying session join rates at the CSC server are required. The time-varying characteristics of session join rates have been investigated in various studies (for a more detailed discussion, see [5, 6]). The analyses performed in [5, 6] suggest similar results for clients' channel switching activities even though the observational data were collected from two different networks (one in the United States [5] and the other in Sweden [6]). In general, channel switching rates exhibit a repetitive pattern with trend-like features, with higher (or lower) activity occurring during the evening (or day), and peaks occurring at the start of the hour and half-hour (corresponding to globally shared channel switching periods). Considering that clients' channel switching activities also display seasonal changes or trends, we can use double or triple (Holt-Winters) exponential smoothing to estimate the future behavior of the channel switching rates for each active session. The pseudo-code for the main stream generation procedure is presented in Algorithm 2, where λ̂_j represents the request arrival rate estimate for Sess_j, Ξ(λ̂_j) represents the resource-optimal key-frame transmission rate at the given request arrival rate, ξ(k*_j) represents the stream count for the given key-frame transmission rate, ∆W_mcc represents the current bandwidth allocation for the MCC streams, and Ω_mcc represents the maximum allowed bandwidth allocation for the MCC streams.¹¹
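The selection rule of Eqs. (4)-(5) reduces to a small search over candidate k values. A sketch, with ψ*_k and χ_k assumed precomputed (e.g., from the request-message statistics and the smoothed arrival-rate estimates discussed above):

```python
def optimal_keyframe_count(arrival_rate, psi, chi, n_packets,
                           packet_size, k_values):
    """Pick k* minimizing Psi_k(lambda) / chi_k, following Eqs. (4)-(5).

    psi[k]: psi*_k, the average per-request overhead with k key-frames
            per GOP delivered along the MCC streams (precomputed).
    chi[k]: chi_k, the request admission ratio at that setting.
    """
    def cost(k):
        # Eq. (4): expected overhead of running k key-frame slots.
        overhead = arrival_rate * psi[k] + k * n_packets * packet_size
        # Eq. (5): normalize by the admission ratio.
        return overhead / chi[k]
    return min(k_values, key=cost)
```

Including k = 0 among the candidates answers the first question of this section (whether the session needs an MCC stream at all).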
Here, the resource assignment procedure is similar to the approach we proposed for latency-optimal bandwidth (re-)allocation at the CSC server. Specifically, when the requested resources exceed the available resources, we reduce the key-frame transmission rate for the MCC streams, starting with the stream for which the removal of a key-frame transmission has the minimum impact on the channel change overhead. We next explain the approach we propose to find the latency-optimal data transmission rates for the active MCC streams. Assume that W̄_s represents the expected bandwidth availability per request at the CSC server and W̄_u represents the expected bandwidth availability at the client side¹²

¹⁰ To determine the distribution function for the client bandwidth availability values, we can use the information carried within the channel change request messages.
¹¹ Depending on the network state, we can dynamically vary the maximum allowed transmission capacity for the MCC streams.

Algorithm 2 Process to generate the MCC streams

  find λ̂_j, ∀Sess_j ∈ {M}
  for j = 1 : M do
    if Ξ(λ̂_j) != Ξ(λ_j) then
      update k*_j
      if ξ(k*_j) > ξ_j then
        create new MCC stream(s) for Sess_j
      else if ξ(k*_j) < ξ_j then
        remove MCC stream(s) for Sess_j
      end if
    end if
  end for
  if ∆W_mcc > Ω_mcc then
    reallocate resources at the MCS server
  end if
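Algorithm 2 can be rendered in Python as below. The mapping from join rate to stream count (the Ξ/ξ pair) and the bookkeeping structure are illustrative assumptions; the paper leaves their realization to the MCS server.

```python
def update_mcc_streams(sessions, rate_to_streams, max_mcc_bw,
                       stream_bw, estimate_rate):
    """One pass of Algorithm 2 at the MCS server (sketch).

    sessions: maps session id -> {"rate": last arrival-rate estimate,
              "streams": current MCC stream count}.
    rate_to_streams: maps an estimated join rate to the resource-optimal
              stream count (stands in for the Xi/xi mapping).
    estimate_rate: returns the new smoothed join-rate estimate per session.
    Returns the list of (session, action, amount) decisions.
    """
    actions = []
    for sid, state in sessions.items():
        new_rate = estimate_rate(sid)
        target = rate_to_streams(new_rate)
        if target > state["streams"]:
            actions.append((sid, "create", target - state["streams"]))
        elif target < state["streams"]:
            actions.append((sid, "remove", state["streams"] - target))
        state["rate"], state["streams"] = new_rate, target
    # If the total MCC bandwidth exceeds the allowance, trigger the
    # MCS-side reallocation described in the text.
    total_bw = stream_bw * sum(s["streams"] for s in sessions.values())
    if total_bw > max_mcc_bw:
        actions.append(("mcs", "reallocate", total_bw - max_mcc_bw))
    return actions
```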

and ξ* represents the default value for the maximum number of key-frame transmissions per MCC stream (when the transmission rate equals the source multicast rate, W_M). We can then summarize our approach to update the data transmission rates as follows:

• W̄_s > W_M: Since the servicing rate is greater than the source multicast rate, no change is required to the data transmission rate along the MCC stream.

• W̄_s < W_M: Since the servicing rate is smaller than the source multicast rate, we first check the number of key-frame transmissions for the given session. Assume that ζ represents the currently selected key-frame transmission rate and ζ* represents the maximum allowed key-frame transmission rate on a single MCC stream per GOP duration. If ζ > ζ*, then there is a constant flow of data along the MCC streams; in that case, no change is required to the transmission rate along the MCC stream, i.e., W_mcc = W_M, but any client that makes a channel change request for the given session immediately joins the MCC stream upon receiving the update message from the CSC server. If, on the other hand, ζ ≤ ζ*, then we update the transmission rate along the MCC stream using W*_mcc = W_M × ζ/ζ*. Next, we check whether the value of W̄_s + W*_mcc is greater or less than W_M. If W̄_s + W*_mcc ≥ W_M, then no further change is required on W*_mcc.¹³ Otherwise, we update W*_mcc using min(W*_mcc, W̄_u − W_M) and require the client to immediately join both the source and MCC stream multicasts after receiving the request update message from the CSC server. Also, whenever there is additional bandwidth availability along the client's downlink connection, the CSC server uses it to deliver the remainder of the requested channel change packets.

Lastly, if there is a constant flow of channel change packets along the MCC streams of a given session, we require one final modification to further reduce the completion time of a successful key-frame delivery event. We illustrate the proposed modification in Figure 4, where we reorder the key-frame packets at the MCS server before sending them through the MCC streams.¹⁴ Circular packet reordering along the MCC streams allows the clients to continuously receive the key-frame packets with shorter waiting periods, regardless of when the request is made.

¹² To calculate the value of W̄_u, we use ∑_{ω≥δ_ω} π_ω ω / δ_π, where δ_π = P(W_u ≥ δ_ω) and π_ω = P(W_u = ω ± ∆ω). To make the calculations for W̄_u, we use a subset of the active clients. Specifically, using the received request messages, the CSC server first creates a distribution for the bandwidth availability values at the client side. Then, the server uses the set of clients with higher bandwidth availability, i.e., the top π_ω of them, to estimate the value of W̄_u.
¹³ Similar to the previous case, the channel switching client joins the MCC stream as soon as it receives the request update message from the CSC server.

[Figure 4 contrasts original key-frame transmission blocks (1 2 3 4 5 6 7 8) with circularly reordered blocks on different MCC streams (e.g., 3 4 5 6 7 8 1 2 on the ith stream; 6 7 8 1 2 3 4 5 and 8 1 2 3 4 5 6 7 on the jth), with the client joining an MCC stream mid-sequence.]

Fig. 4. Example of circular reordering applied on the MCC streams.
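The circular reordering of Figure 4 amounts to rotating each stream's block sequence by a per-stream offset, so that a client joining at an arbitrary time finds some stream close to the start of a full key-frame. A minimal sketch; the even spread of starting offsets across streams is our assumption (the figure's offsets differ):

```python
def reorder_blocks(blocks, stream_index, num_streams):
    """Circularly shift the key-frame packet blocks for one MCC stream.

    blocks: the key-frame's packet blocks in original order.
    stream_index / num_streams: which of the parallel MCC streams this
        is; each stream starts its transmission at a different offset.
    """
    shift = (stream_index * len(blocks)) // num_streams
    return blocks[shift:] + blocks[:shift]
```

Since every stream still carries all blocks once per period, a client can assemble the full key-frame from wherever it joins, reducing the expected wait from a full period to roughly a period divided by the number of streams.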

C. Impact of Buffer Availability at the Access Point

In this section, we extend our initial framework to allow for additional capacity usage at the access point during the channel change process. We mentioned earlier that, to increase the efficiency of the channel change process (i.e., to use fewer resources or achieve lower latency), we need to keep the downlink bandwidth usage at the client side above W_M. Note that, here, bandwidth usage includes delivery from all possible sources, i.e., the CSC server, the MCC stream, and the source multicast. We also mentioned that, to optimize the latency performance, the delivery rate for the channel change packets needs to be higher than W_M. In the previous section, we tackled this problem by dynamically varying the data transmission rate along the MCC streams. For the given solution, the approximation we used to represent the expected bandwidth availability at the client side (i.e., W̄_u) plays a critical role in the perceived latency performance. For instance, choosing a small value for W̄_u can significantly limit the number of requests that can take advantage of the adaptive MCC delivery rates. To overcome this limitation, we propose a solution that takes advantage of the receive buffers at the access points and allows for a short-term increase in the delivery rate to the channel switching clients. Using this approach, we can increase the ratio of clients that can immediately join the source and MCC stream multicasts and, as a result, shorten the channel change latency and reduce the overall overhead (by synchronizing with the source multicast earlier). We next explain the approach we use to integrate this concept into our initial channel change framework. Assume that Q_B,ν represents the maximum number of data packets that the access point for client u_ν (which we refer to as AP_ν) is allowed to keep in its buffers for traffic directed at u_ν.
As long as the incoming traffic to APν does not generate a queue size Qν that exceeds QB,ν, APν can deliver the received packets to uν without fail. We can therefore state the relationship between the buffer allowance and the client-side capacity usage as follows:

∫_TQ ( Wν(t) − Wν,max ) dt ≤ QB,ν    (6)

Footnote 14: In Figure 4, instead of packets, we use blocks of packets, each of which consists of an equal number of packets.
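To make the constraint concrete, the following sketch approximates the integral in Eq. (6) in discrete time. The function name, sampling step, and the numeric values in the comments are our own illustrative choices, not from the paper:

```python
def buffer_feasible(rates, w_max, q_max, dt=1.0):
    """Discrete-time check of the buffer constraint in Eq. (6).

    rates -- samples of the client's aggregate delivery rate Wv(t)
    w_max -- the sustainable client-side rate Wv,max
    q_max -- the buffer allowance QB,v at the access point
    dt    -- sampling step used to approximate the integral over TQ
    """
    # Integrated excess of the delivered rate over the drain capacity;
    # it must fit within the access point's buffer allowance.
    excess = sum((w - w_max) * dt for w in rates)
    return excess <= q_max
```

In other words, a short burst above Wν,max is tolerable as long as the accumulated excess over the observation window TQ stays within the buffer allowance QB,ν.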

where TQ represents the observation period. In general, we can limit TQ to the maximum allowed channel change period (see Footnote 15). Lastly, to ensure stability at the access network, we also need to satisfy E[Wν(t)] ≤ Wν,max.

Using the above relationships, we update the initially assigned MCC stream rate as follows:

Wmcc = min( WM , (W̄u − WM) + QB,u (W̄u − WM) / (WM · TI) )    (7)
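As a numeric sketch of the rate update in Eq. (7), and of the equivalent substitution for W̄u noted in the surrounding text, consider the following; the function names and any sample values are hypothetical:

```python
def mcc_rate(w_m, w_u, q_b, t_i):
    """Eq. (7): MCC stream rate granted to a channel-change request.

    w_m -- source multicast rate WM
    w_u -- expected client-side bandwidth availability (W-bar_u)
    q_b -- per-client buffer allowance QB,u at the access point
    t_i -- key-frame period TI of the session
    """
    # The buffer credit lets the client exceed its spare capacity
    # (w_u - w_m) for a short term; the result is capped at the
    # full multicast rate w_m.
    boost = q_b * (w_u - w_m) / (w_m * t_i)
    return min(w_m, (w_u - w_m) + boost)

def w_u_star(w_u, w_m, q_b, t_i):
    # Equivalent substitution from the text:
    # W-bar_u* = W-bar_u * (1 + QB,u / (WM * TI)) - QB,u / TI,
    # chosen so that mcc_rate reduces to min(WM, W-bar_u* - WM).
    return w_u * (1.0 + q_b / (w_m * t_i)) - q_b / t_i
```

With these definitions, min(w_m, w_u_star(...) − w_m) yields the same value as mcc_rate(...), which is why the substitution lets the procedure of the previous section be reused unchanged.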

where QB,ν = QB,u, ∀uν ∈ {N}, and TI represents the key-frame duration for the given session. Note that we can also replace W̄u with

W̄u* = W̄u ( 1 + QB,u / (WM · TI) ) − QB,u / TI

and follow the same approach presented in the previous section, with W̄u* in place of W̄u.

To finalize the delivery rate assigned to each request, we use the following relationship between the CSC delivery rate and the session join time, assuming that client uν makes a channel change request for Sessj:

Tν,j = Tj* − [ QB,ν − Tj,mcc ( Wj,mcc + WM − Wν,max ) ] / ( W̄ν + WM,j − Wν,max )    (8)

where j* represents the start-of-delivery time corresponding to the earliest key-frame transmission (which the client can join) along Sessj's MCC stream(s), W̄ν represents the mean delivery rate used by the CSC server within the interval [Tν,j, Tj,mcc], Tj,mcc represents the duration of the key-frame delivery period, and Wj,mcc represents the transmission rate used to deliver the key-frames along the given MCC stream.

V. Discussions

In this paper, we proposed a joint channel change framework for IPTV networks that can effectively support both unicast- and multicast-based delivery techniques. We started our research by first analyzing the potential drawbacks of using each of these delivery techniques separately. During our initial analysis, we observed significant performance degradation as we increased the number of active users: for the unicast-based approach, the admission rate for requests to the channel change server decreased dramatically as the number of users grew, whereas for the multicast-based approach, it was the perceived latency for the admitted requests that suffered dramatically.
Footnote 15: Since the buffers at the access point are assumed to be capable of handling the bursty nature of the IPTV traffic, limiting our observation timeframe to the channel switching period is an acceptable assumption. Additionally, we limit our discussion of buffer requirements to channel switching packets only.

Based on these observations, and keeping in mind the advantages and disadvantages of each of these scenarios, we

designed a joint channel change framework that combined the repetitive response of the multicast-based approach with the quick, directed response of the unicast-based approach. In doing so, we were able to address the latency concerns of the multicast-based approach while also addressing the overhead concerns of the unicast-based approach.

The most important feature of the proposed framework is its adaptiveness to highly varying system constraints and network conditions. For that purpose, we dynamically distributed the available resources between the unicast- and multicast-based approaches, based on the specific characteristics of each user. We also made use of the passive support already provided by the access point buffers (which are used to regulate the bursty nature of the incoming traffic) to further improve the latency performance and increase the ratio of requests admitted to the channel change server.

Our research is ongoing; in this paper, we give a detailed insight into how to optimally integrate unicast- and multicast-based approaches within a single framework, such that the resulting framework can overcome the weaknesses presented by either of these approaches, especially in high-load scenarios. We are currently building an experimental model to analyze the performance of the proposed framework in detail, and we hope to present our results in an extended version of this paper.

References

[1] J. Asghar, I. Hood, and F. L. Faucheur, "Preserving video quality in IPTV networks," IEEE Transactions on Broadcasting, vol. 55, no. 2, pp. 386–395, Jun. 2009.
[2] P. Siebert, T. N. M. V. Caenegem, and M. Wagner, "Analysis and improvements of zapping times in IPTV systems," IEEE Transactions on Broadcasting, vol. 55, no. 2, pp. 407–418, Jun. 2009.
[3] Y. Bejerano and P. V. Koppol, "Improving zap response time for IPTV," in IEEE INFOCOM '09, 2009, pp. 1971–1979.
[4] A. C. Begen, N. Glazebrook, and W. Ver Steeg, "A unified approach for repairing packet loss and accelerating channel changes in multicast IPTV," in IEEE Consumer Communications and Networking Conference (CCNC '09), 2009.
[5] T. Qiu, Z. Ge, S. Lee, J. Wang, Q. Zhao, and J. Xu, "Modeling channel popularity dynamics in a large IPTV system," in ACM SIGMETRICS/Performance '09, 2009.
[6] G. Yu, T. Westholm, M. Kihl, I. Sedano, A. Aurelius, C. Lagerstedt, and P. Odling, "Analysis and characterization of IPTV user behavior," in IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB '09), 2009.