Efficient Multicast Packet Authentication
Alain Pannetrat, Refik Molva
Institut Eurécom
{[email protected], [email protected]}

Abstract

Providing authentication mechanisms for IP-Multicast streams is paramount for the development of large scale commercial multicast content delivery applications. This need is particularly strong for the delivery of real time content, such as live video/audio news events or financial stock quote distribution. However, this turns out to be quite a challenging problem for several reasons. First, the authentication of the multicast data must be verifiable by a potentially very large number of untrusted recipients. Second, since multicast communication protocols are almost always best effort, the authentication mechanism needs to authenticate received content despite the potential loss of some packets. Finally, the authentication mechanism needs to be efficient enough to cope with real time data and should have a small communication overhead. We propose a new multicast authentication scheme designed to authenticate real time multicast packet streams with a potentially unlimited number of recipients. This scheme provides both integrity and nonrepudiation of origin, and in a majority of situations, it performs with less overhead in bytes per packet than previously proposed practical real time stream authentication schemes.

1 Introduction

IP-Multicast [8] allows the scalable delivery of packets to a potentially unlimited number of recipients. As such, it is a very interesting mechanism for commercial applications that deliver streamed content to a large group of recipients, such as video/audio broadcasting. However, some security issues need to be solved [12] before these applications are deployed on a large scale. The most basic security mechanisms needed for large scale commercial multicast applications are confidentiality and authentication. In fact, the key distribution algorithms employed in many multicast confidentiality proposals [16, 30, 23, ...] require a form of authentication to assure that the keys originate from a legitimate key distribution entity. Consequently, we argue that authentication is probably the most needed multicast security mechanism.

To allow packets to be authenticated in a stream, the source must add authentication information to the distributed content. This authentication information is used by recipients to ascertain the origin of the transmitted content. In the context of multicast authentication, we distinguish two types of distributed content: pre-recorded and real time. Pre-recorded content describes content that is known in advance to the source, such as a film or music. For such content, the authentication information can be computed and inserted in the stream in advance. On the other hand, real time content describes content that is produced in real time, such as live sports event broadcasting, news events or financial stock quotes. Real time content requires some of the authentication information to be computed in real time, which adds further constraints on the efficiency of the authentication algorithm. Thus, an efficient real time authentication algorithm can be used for pre-recorded data, while the converse is not necessarily true. Moreover, it seems that real time applications naturally have a stronger need for authentication. Consider, as an example, the disastrous consequences that source impersonation could have for an application such as stock quote distribution, where a malicious entity could generate bogus financial data. The main goal of this work is to provide multicast authentication with an emphasis on low communication overhead, for real time data applications where a low delay is acceptable and will not be perceived at the message level. For an approach directed more specifically at pre-recorded data, we refer the reader to [9], [15] and [29].

1.1 Two Levels of Authentication

We distinguish two levels of authentication:

Source Authentication: allows a recipient to verify the origin of the content.

Nonrepudiation (of origin): allows the recipient to prove the origin of the data to a third party.

In traditional two party communications, source authentication is provided with efficient symmetric techniques using a MAC (Message Authentication Code), which relies on a secret key shared between the two communicating parties. On the other hand, nonrepudiation is provided with a digital signature, using asymmetric cryptographic techniques whose cost is several orders of magnitude higher than that of a MAC. Canetti et al. have proposed a multiparty extension [7] of MACs in the context of multicast, but their scheme has some drawbacks. Most notably, the communication overhead is significant and the security of the scheme is only defined up to a coalition of malicious recipients forging data for a chosen recipient. Recent work by Boneh et al. [4] suggests more generally that extending symmetric MAC techniques to the multicast setting will not be possible without new advances in cryptography. As we will see, with the exception of TESLA [22], current practical multicast authentication techniques are not fully built on symmetric techniques but rely instead partially on asymmetric techniques. As a consequence, many of these schemes, including ours, also provide nonrepudiation of origin.
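As a concrete illustration of the symmetric primitive discussed above, a two party source authentication exchange with a shared-key MAC can be sketched in a few lines. This is a generic illustration using Python's standard library, not part of the scheme proposed in this paper:

```python
import hmac
import hashlib

# Shared secret key: in a two party setting, only the source and the
# single recipient know it, so a valid MAC proves the packet's origin.
key = b"shared-secret-key"

def mac_packet(payload: bytes) -> bytes:
    """Compute a MAC over a packet payload (HMAC-SHA1, 20 bytes)."""
    return hmac.new(key, payload, hashlib.sha1).digest()

def verify_packet(payload: bytes, tag: bytes) -> bool:
    """Recompute the MAC and compare in constant time."""
    return hmac.compare_digest(mac_packet(payload), tag)
```

In multicast, however, every recipient would need this key to verify packets, and any recipient holding it could forge valid MACs, which is precisely why the schemes discussed here rely partially on asymmetric techniques.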

1.2 Real Time Multicast Authentication Challenges

There are two main factors which make multicast stream authentication a challenge:

A Multiparty Factor: we have an unlimited number of untrusted recipients.

A Streaming Factor: we want to authenticate data from a potentially infinite stream of packets transmitted over a lossy channel.

The multiparty factor has a strong impact on the security requirements of a multicast authentication scheme. Indeed, a fundamental difference between multicast and two-party authentication is that in multicast we consider the recipients as potential adversaries. This rules out the use of a symmetric MAC key shared between the source and the recipients, because recipients should not be able to impersonate the source of the stream. The streaming factor has several design implications. Firstly, we do not view the stream as a unique object that is authenticated all at once, but rather as a sequence of consecutive chunks of data that need to be authenticated individually as they are received. Secondly, recipients should be able to authenticate packets starting from an arbitrary point in the stream, or at least on the boundary of a small block of packets. Multicast is often implemented over UDP and assumes only a best effort delivery mechanism, and many multimedia multicast applications tolerate losses with a graceful degradation in playback quality. Consequently, one of the most important design requirements of a multicast authentication scheme is the ability to

authenticate packets amid losses in the network (for non-lossy streams, see for example [9]). From all the observations we made, we can establish several parameters to measure the quality of a real time stream authentication scheme:

Robustness: the ability of the scheme to authenticate received data despite losses in the network.

Joinability: the ability of recipients to start authenticating packets from an arbitrary point in the stream.

(Server Side) Buffering: the maximum number of packets that need to be stored on the server to compute robust authentication information.

(Authentication) Latency: the maximum number of additional packets that need to be received before a packet can be authenticated.

Computational Cost: the computational cost of the scheme.

Communication Overhead: the number of bytes per packet which describe the embedded authentication information.

Buffering and latency appear in some situations where authentication information pertaining to a packet is stored in one or several other packets. Ideally, we would like a scheme that has perfect robustness, is joinable on every packet, has no buffering or latency, and has an overhead as well as a cost similar to what is found in a MAC scheme. In practice, however, such a perfect scheme does not exist and a compromise needs to be found between these parameters.

1.3 Related Work

A straightforward stream authentication method would be to use a public key signature on each packet of the stream. In theory, this is well suited for real time streams and the authentication is joinable on any packet. However, adding a typical 1024 bit signature [28] (or 128 bytes) to every packet represents a substantial overhead; moreover, the computational cost of a public key signature makes such a solution impractical in many scenarios. Consequently, stream authentication proposals have taken two approaches, sometimes in combination: designing more efficient signature schemes and amortizing the cost of signatures over several packets.

Faster digital signatures designed with stream authentication in mind were proposed by Rohatgi [27], as well as Wong and Lam [31]. These proposals come, however, with a communication overhead that makes them impractical in many situations. The BiBa scheme proposed by Perrig [21] offers a significantly improved broadcast signature scheme which has a lower computational overhead, but still a communication overhead that is only slightly lower than a traditional public key signature. On the other hand, these schemes, including the one in [9], still have the advantage of offering fully real time authentication (i.e. with no delay at all).

A complementary approach is to amortize the signature over several packets in a block. The stream is divided into many small blocks, each of which has a unique digital signature that is combined with hash/MAC techniques to authenticate the packets in the block. We refer to these techniques, as well as the one we propose in this work, as hybrid approaches. Wong and Lam proposed one of the first hybrid approaches in their hash tree construction [31], which is robust to any number of losses in a stream but has a substantial overhead per packet, even larger than the size of a digital signature. Instead of being robust to any type of packet loss, recent stream authentication proposals have been designed to adapt to loss patterns that are more specific to the Internet. This allows a significant gain in terms of overhead. First, based on the observation that losses usually occur in bursts in TCP/IP [20], Golle and Modadugu [10] proposed a scheme that could tolerate one or several bursty losses of bounded size in a block. Packets are linked together in a "hash chain", the last packet of which is digitally signed. However, the scheme has some drawbacks, and in particular, the transmission of the signature is not clearly addressed. Independently, Perrig et al. proposed a more complex "hash chain" construction called EMSS [22], which is adapted to multiple losses and which better addresses signature transmission. Recently, in a scheme called SAIDA [19], which shares similarities with our work, Park et al. used IDA (the information dispersal algorithm) to transmit authentication information pertaining to each block of packets in a stream. We discuss these related proposals in more detail in section 5.

As a complementary approach to their EMSS scheme, Perrig et al. [22] proposed a very efficient time based stream authentication scheme called TESLA. It provides source authentication but does not offer nonrepudiation, which is not a problem for many applications. Its most interesting feature is that it tolerates arbitrary packet loss with a low overhead. Its main drawback is that it requires all the recipients to establish loose clock synchronization with the source through an initial unicast exchange, which may not always be practical in a large multicast group.
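To make the signature amortization idea concrete, the following sketch links packets in a simple hash chain in the spirit of the constructions above. It is a simplified illustration, not the exact Golle-Modadugu or EMSS construction: each packet carries the hash of its successor, and only the head hash needs to be digitally signed.

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-1, producing 20-byte hashes as commonly used by these schemes.
    return hashlib.sha1(data).digest()

def make_chain(payloads):
    """Augment each packet with the hash of the next augmented packet.
    Returns the head hash (the value the source would digitally sign)
    and the list of (payload, next_hash) packets."""
    next_hash = b""
    chain = []
    for p in reversed(payloads):
        chain.append((p, next_hash))
        next_hash = h(p + next_hash)
    chain.reverse()
    return next_hash, chain

def verify_chain(head_hash, chain):
    """Authenticate packets in order, assuming head_hash was already
    verified against the source's digital signature."""
    expected = head_hash
    for payload, nxt in chain:
        if h(payload + nxt) != expected:
            return False
        expected = nxt
    return True
```

Note that a single lost packet breaks this basic chain, which is exactly why the published schemes add redundant hash links to tolerate bursty losses.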

1.4 Overview of our Scheme

Our scheme uses a combination of hash and signature techniques with FEC, or more precisely, erasure codes.

The two most employed techniques to achieve reliable delivery of packets in computer communication protocols are ARQ (Automatic Repeat reQuest) and FEC (Forward Error Correction). ARQ techniques are used every day in Internet protocols such as TCP, while FEC techniques have long been confined to the telecommunications world. However, there has recently been a surge of interest in FEC techniques in the Internet world, often in combination with more traditional ARQ approaches [18, 6]. While in the telecommunications world FEC techniques are most often used to detect and correct errors occurring in the transmission of a stream of bits, in the Internet world they are used to recover from the loss of packet sized objects. Indeed, in the Internet world a packet is either received or lost. A packet can be considered lost if it does not arrive after a certain delay or perhaps if it has a bad checksum. Our idea was first to use FEC to transmit the signature alone, but we soon realized that FEC could also be used as an alternative to hash trees [31] or chains [22, 10] to transmit authentication information, with a lower overhead per packet in most cases than any other scheme suitable for real time broadcasts.

The central contribution of this work is the proposal of a joinable, real time, robust stream authentication scheme with nonrepudiation of origin. It uses erasure codes to provide a lower overhead per packet than previous real time stream authentication proposals, while being adapted to realistic multicast Internet loss patterns. A brief overview of erasure codes will be presented in the next section. Our scheme is formalized in section 3, as well as its relationship with Internet loss patterns, which are modeled with a Markov chain. Section 4 discusses the cost and overhead of our scheme and presents its use in a few concrete scenarios. Finally, we review other real time lossy stream authentication schemes in section 5 and compare them with our approach.

2 Background

2.1 Erasure Codes

An erasure code generation algorithm E takes a set S = {P_1, ..., P_k} of k source packets in a block and produces n code packets:

    {C_1, ..., C_n} = E(P_1, ..., P_k)

The main property of the set C = {C_1, ..., C_n} is that any subset of k elements of C suffices to recover the source data with the help of a decoding algorithm D. To be exact, the decoding algorithm D needs to know the position, or index, of the received elements in C to recover S. This information can often be derived by other means (such as the packet sequence number) and we will assume in the remaining discussion that this information is available implicitly to D.

If the first k code packets are equal to the source packets, that is C_i = P_i for 1 ≤ i ≤ k, we call the code systematic, and the extra redundancy packets {C_{k+1}, ..., C_n} are called parity packets. Systematic codes are very useful since they do not require any additional processing from the recipient in the case where no loss occurs.

It is important to note that erasure codes are not used in the same context in the Internet as in telephony. Here the codes are not designed to recover damaged packets but rather the loss of full packets in a block of several packets. Intuitively, an individual packet can therefore be viewed more like a single code symbol rather than a set of symbols.

For a good introduction to practical erasure codes we refer the reader to the work of L. Rizzo [26], where Reed-Solomon erasure codes are described. These codes operate over small finite fields and may not be efficient for large data blocks of packets (several hundred kilobytes). However, they are suitable in our scenario since we work on data units that are much smaller than a packet (typically 16 or 20 bytes), as shown below. For faster codes, we refer the reader to the work of M. Luby et al. on Tornado Codes [14, 6], where codes with near linear coding and decoding times are described.

In the remainder of this work, E_{n,k} will describe a practical systematic erasure code generation algorithm which takes k source packets and produces the (n-k) parity packets. If S is the source data and R are the (n-k) extra generated parity packets, we will write R = E_{n,k}(S). The corresponding decoding algorithm will be denoted D_{n,k}, and if S' describes the set of received elements and S the source data, we will write S = D_{n,k}(S') to describe the recovery process.
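As a toy illustration of the E/D interface defined above, the following sketch implements a systematic (n, k) erasure code by polynomial evaluation and interpolation over a prime field, in the spirit of Reed-Solomon codes. This is a didactic version; production codes such as Rizzo's work over GF(2^8) with precomputed tables:

```python
# Toy systematic (n, k) erasure code over GF(p): the k source symbols are
# the values of a degree k-1 polynomial at points 0..k-1, and parity
# symbols are its values at points k..n-1. Any k (index, value) pairs
# determine the polynomial, hence the source data.
P = 2**31 - 1  # a prime modulus; symbols are integers below P

def _interpolate(points, x):
    """Evaluate at x the unique polynomial through `points` (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # Division by `den` is multiplication by its inverse (Fermat).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(source, n):
    """E_{n,k}: produce the n-k parity symbols for k source symbols."""
    k = len(source)
    pts = list(enumerate(source))
    return [_interpolate(pts, x) for x in range(k, n)]

def decode(received, k):
    """D_{n,k}: recover the source from any k (index, symbol) pairs."""
    pts = list(received)[:k]
    return [_interpolate(pts, x) for x in range(k)]
```

For instance, with k = 4 source symbols and n = 6, the two parity symbols allow any two losses among the six code symbols to be tolerated.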

2.2 Notations

In this work we will consider a stream to be divided in consecutive blocks of n packets. Since a stream does not necessarily contain a number of packets which is an exact multiple of n, we allow the use of dummy padding packets at the very end of the stream to match a block boundary. Our authentication scheme is parameterized by the block size n in packets and by λ, the maximum expected loss rate per block (0 ≤ λ < 1). We will denote H a cryptographic hash function such as SHA [17] or MD5 [25] which produces hashes of h bytes. The couple (SIG, VER) will denote the digital signature and verification algorithms, respectively, associated with the source of the packet stream, such as RSA [28, 1] for example. The size of the signatures will be expressed as s bytes. For RSA, a typical value for s is 128 bytes (or 1024 bits).

3 Stream Authentication

3.1 Authentication Tags

Consider a block B = {P_1, ..., P_n} as a sequence of n packets. Let S = {H_1, ..., H_n}, where H_i = H(P_i), be the set of hash values of these packets computed with the cryptographic hash function H. From this set S we build a set of n authentication tags {T_1, ..., T_n} with the following algorithm AGT_{n,λ}, which uses some of the notations introduced in the previous section:

AGT_{n,λ}
INPUT: S = {H_1, ..., H_n}
OUTPUT: {T_1, ..., T_n}

Tag generation:
(1) R = E_{2n-k,n}(S), with k = ⌈n(1-λ)⌉, producing the (n-k) parity symbols of the hash set.
(2) σ = SIG(H(H_1 || ... || H_n)).
(3) Pad σ || R to a length that is a multiple of k, split it into k equal length pieces, and compute the (n-k) parity pieces E_{n,k}(σ || R).
(4) The k source pieces together with the (n-k) parity pieces of line (3) form n equal length tags T_1, ..., T_n.

We propose a more visual representation of the tag algorithm on figure 1. We observe that AGT_{n,λ} uses two different erasure codes, in steps (1) and (3). The value σ || R on line (3) is padded to a total length that is a multiple of k bytes, which allows us to divide the code symbols of line (3) into n equal length tags on line (4).

To exploit the tag generation algorithm we will first define our authentication criterion:

Authentication criterion: In this work we say that a packet P_i is fully authenticable in a block B if, given the set of hashes {H_1, ..., H_n} of the packets in the block and their signature σ = SIG(H(H_1 || ... || H_n)), we can verify both that VER(σ, H(H_1 || ... || H_n)) = TRUE and that H(P_i) = H_i.

The proposed schemes in this work are based on the following property of the tag generation algorithm.

Proposition 1. Let B = {P_1, ..., P_n} be a block of n packets and S = {H_1, ..., H_n} its associated hash set. If we compute T = {T_1, ..., T_n} = AGT_{n,λ}(S), then any subset of at least ⌈n(1-λ)⌉ packets in B can be authenticated using any subset of at least ⌈n(1-λ)⌉ tags in T.

Proof. Define k = ⌈n(1-λ)⌉. Let B' = {P_{q_1}, ..., P_{q_k}} be a subset of k packets in B and let T' = {T_{r_1}, ..., T_{r_k}} be a subset of k tags in T. We can compute σ || R = D_{n,k}(T') since T' contains k elements. Let X = {H(P_{q_1}), ..., H(P_{q_k})} be the hashes of the received packets. We can recover S = {H_1, ..., H_n} from R and X by computing D_{2n-k,n}(X ∪ R), since X ∪ R contains at least n elements of the code produced on line (1). Finally, we can compute VER(σ, H(H_1 || ... || H_n)) and check that H(P_{q_i}) = H_{q_i} for each received packet, which authenticates the received packets B' according to our authentication criterion.

[Figure 1: A visual representation of the tag generation algorithm: the packets of a block are hashed (H), and the two erasure codes disperse the hash parity and signature information into the tags.]

A direct corollary of the proposition above is that both a block of packets and their authentication tags can withstand the loss of at most n - ⌈n(1-λ)⌉ elements while allowing us to authenticate the remaining packets. Finally, from the construction of the algorithm above we can determine the size of an authentication tag:

Proposition 2. Let h define the length of our cryptographic hashes and s the size of the signatures. The size of an individual authentication tag is expressed as a function of both the number of packets n in a block and the maximum expected loss rate λ per block, as follows:

    |T| = ⌈ (s + h(n - ⌈n(1-λ)⌉)) / ⌈n(1-λ)⌉ ⌉

Proof. Let a denote the size of the value R and b the size of σ padded to the proper length, both on line (3) of the algorithm. We have a = h(n - k). From the erasure code on line (3) we have