A Practical and Secure Communication Protocol in the Bounded Storage Model

E. Savaş (Faculty of Engineering and Natural Sciences, Sabanci University, Istanbul, Turkey TR-34956) and B. Sunar (Electrical & Computer Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts 01609)
[email protected], [email protected]

Abstract. Proposed by Maurer, the bounded storage model has received much academic attention in recent years. Perhaps the main reason for this attention is that the model facilitates a unique private-key encryption scheme called hyper-encryption, which provides everlasting unconditional security. So far the work on the bounded storage model has been largely theoretical. In this paper, we make a first attempt to outline a secure communication protocol based on this model. We describe a protocol which defines means for successfully establishing and carrying out an encryption session, and we address potential problems such as protocol failures and attacks. Furthermore, we outline a novel method for authenticating a channel and ensuring its integrity against errors. Key Words: Bounded storage model, hyper-encryption, information-theoretic security, pervasive networks.

1

Introduction

Proposed by Maurer [1], the Bounded Storage Model (BSM) has recently received much attention. In this model each entity has a bounded storage capacity, and there is a single source which generates a very fast stream of truly random bits. Whenever Alice and Bob want to communicate, they simply tap into the random stream and collect a number of bits from a window of bits according to a short shared secret key. Once a sufficient number of random bits are accumulated on both sides, encryption is performed by simply using one-time-pad encryption. The ciphertext is sent after the window of bits has passed. What keeps an adversary from searching through the random bits is the limitation on storage space: the model assumes that the adversary cannot store the entire sampling window and therefore has only partial information about the sampled random stream. A major advance was achieved by Aumann, Ding and Rabin [2, 3], who showed that the BSM provides so-called “everlasting security”, which simply means that even if the key is compromised, any messages encrypted using this private key

prior to the compromise remain perfectly secure. Furthermore, it was also shown that even if the same key is used repeatedly, the encryption remains provably secure. These two powerful features of the model have attracted sustained academic [1–5] and popular interest [6, 7] despite the apparent practical difficulties. As promising as it seems, there are serious practical problems preventing the BSM from immediate deployment in real-life communication: implementation of a publicly accessible high-rate random source, authentication of the random stream generator, resilience to broadcast errors and noise, synchronization of the communicating parties, and implementation of the one-time-pad generator. In this paper we make a first attempt to address these problems.

2

The Network Architecture and Related Assumptions

In this section, we briefly describe the network architecture and explain the related assumptions. We envision a network consisting of many common nodes that wish to communicate securely using the random stream provided by a broadcast station. The parties and important entities in the network are as follows:

– Super-node: The super-node is capable of broadcasting a truly random sequence at an extremely high bit rate that is easily accessible by all common nodes. A typical example of a super-node is a satellite broadcasting such a sequence to its subscribers.
– Random Stream: We refer to the broadcast of the super-node as the random stream. For the protocol in this paper it is sufficient to have a weak random source with sufficient min-entropy as described in [5]. We propose that the super-node transmit the random stream over multiple sub-channels with moderate to high bit rates. Note that, given the constrained nature of the receiving nodes, an ultra-high bit rate is unlikely to be sustained over a single channel. Also, at high transmission rates it would be difficult to keep the error rate sufficiently small.
– Common Nodes: The common nodes represent the end users in the system who want to communicate securely with each other utilizing the service of the super-node. Compared to the super-node they are more constrained and are only able to read at the speed of the sub-channels. We also assume that a common node can read any of the sub-channels at any time. Furthermore, during the generation of the one-time pad (OTP), we allow nodes to hop from one sub-channel to another. The hopping pattern is determined by the shared secret between the two nodes. Therefore, an adversary has to store all the data broadcast on all channels in order to mount a successful attack.
– Adversary: An adversary is a party with malicious intent, e.g., accessing confidential information, injecting false information into the network, or disrupting the protocol.
In our protocol the adversary is computationally unlimited (unlike in other security protocols) but is limited in storage capacity.

3

The Encryption Protocol

In this section, we briefly outline the steps of the proposed protocol. There are four steps the two parties have to take:

1. Setup: Initially the two communicating parties, Alice and Bob, have to agree on a shared secret which will be needed to derive the OTP from the random stream. For this, they can use a public-key or secret-key based key agreement protocol, or simply pre-distribute the secret keys. In the BSM pre-distributing keys makes perfect sense, since in the hyper-encryption scheme keys can be reused with only linear degradation in the security level. The two parties synchronize their clocks before they start searching for a synchronization pattern in the random stream. For this they can use a publicly available time service with good accuracy, such as the Network Time Protocol (NTP) or GPS time transfer [8].
2. Communication request: Alice issues a communication request to Bob. The request includes Alice’s identity, the length of the message Alice will send, and the time Alice will start looking for the synchronization pattern. Alice may send the request over a public authenticated channel.
3. Synchronization: The nodes search for a fixed synchronization pattern in the random stream in order to make sure they start a sampling window at the same bit position. Once synchronized, Alice and Bob read a predetermined number of bits from the random stream, hash these bits, and compare the hashes over an authenticated channel to make sure they are truly synchronized.
4. Generation of the OTP: Alice and Bob start sampling and extracting the OTP (see Section 6) from the random stream. The length of the pad they collect equals the total length of the messages they intend to exchange. The duration of the sampling (the window size) is an important system parameter and is determined by the storage capacity of the attacker.
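As a minimal illustration of the verification at the end of step 3, the sketch below hashes the bits each party read after locking onto the synchronization pattern and compares the digests. SHA-256 and the placeholder byte strings are assumptions for illustration; the paper does not fix a particular hash function.

```python
import hashlib

def sync_check_digest(sampled_bits: bytes) -> str:
    """Hash the predetermined number of bits read after the synchronization
    pattern; the parties compare these digests over an authenticated channel
    to confirm they locked onto the same stream position."""
    return hashlib.sha256(sampled_bits).hexdigest()

# Alice and Bob each read the same agreed-upon span of the stream.
# These placeholder views intentionally differ, so the check fails here
# and the parties would have to resynchronize.
alice_view = b"...bits Alice read after the sync pattern..."
bob_view = b"...bits Bob read after the sync pattern..."

synchronized = sync_check_digest(alice_view) == sync_check_digest(bob_view)
```

Only the short digests cross the authenticated channel, so the comparison costs far less than exchanging the sampled bits themselves.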

4

Synchronization

We start our treatment by defining the following system parameters:

– T (bits/sec): Bit rate of the random source.
– w (bits): Maximum number of bits an adversary can afford to store.
– r (bits): Key length.
– k (bits): Length of the synchronization pattern.
– m (bits): Maximum length of a message.
– s: Number of sub-channels.
– γ (sec): Duration of a session.
– τ (sec): Maximum delay between any two nodes in the network.
– ε: Bias in the random stream, i.e. the distribution of the random source is ε-close to the uniform distribution.
– δ (sec): Maximum allowable difference between any two clocks in the system.
– θ (sec): Synchronization delay.
– e: Error rate in the random stream.

The synchronization of the sender and the receiver nodes is a serious problem that must be properly addressed, as the random source rate is very high. In particular, when the random stream is broadcast as an unstructured stream of bits with no regular frames, special techniques must be used to allow two users to synchronize before the bit extraction phase can begin. Two communicating nodes must start sampling the broadcast channel at exactly the same time. Even if they do, any drift between the clocks of the two nodes, or any discrepancy in the broadcast delays to the two nodes, may result in synchronization loss. We propose a synchronization technique that is easily applicable in practical settings. In our analysis we do not consider the discrepancy in broadcast delays, which can easily be taken into account by adding it to the clock drift parameter. The synchronization of the nodes is achieved through the use of a fixed binary sequence p ∈ {0, 1}^k, called the synchronization pattern. Since the super-node broadcasts a weakly random binary stream, the synchronization pattern p appears in the broadcast channel in a fixed time interval with probability proportional to the randomness in the stream. Two nodes that want to communicate securely start searching for p at an agreed time; synchronization is established once they observe the pattern p for the first time after this agreed time. We expect the clocks of the two nodes to be synchronized before they start searching for the synchronization pattern. However, we cannot assume perfectly synchronized clocks, and therefore there is a probability that the node with the faster clock finds a match while the other node is still waiting. We will quantify this synchronization failure probability later in this section.
It is important to provide some insight into how to choose the length of the synchronization pattern and to estimate the time needed to get two parties synchronized. The two nodes are loosely synchronized; thus the maximum clock difference between any two nodes cannot be larger than a pre-specified value δ. Our aim is to decrease the probability of the synchronization pattern appearing within the time period of δ seconds after the request is sent. The probability of a fixed pattern of length k appearing in a randomly picked bit string of length n (with n ≥ k) may be found by considering the n − k + 1 sub-windows of length k within the n-bit string. The matching probability in a single sub-window is 1/2^k. The probability of the pattern not being matched in any of the sub-windows is approximated as P ≈ (1 − 1/2^k)^{n−k+1}. In this approximation we assumed that the sub-windows are independent, which is not correct. However, the fact that the pattern did not appear in a sub-window eliminates only a few possible values in the adjacent (one-bit-shifted) sub-window. Hence the probability of the pattern not appearing in a window, conditioned on the pattern not appearing in one of the previous windows, is still roughly (1 − 1/2^k). Any sub-channel broadcasts n = δT/s bits in δ seconds. Therefore, the probability of a synchronization failure (i.e., the synchronization pattern appearing before δ seconds after the agreed time t_j have elapsed) is

1 − P = 1 − (1 − 1/2^k)^{δT/s − k + 1}.    (1)

This probability can be made arbitrarily small by choosing larger values for k. In practice, due to the 1/2^k term inside the parenthesis, a relatively small k will suffice to practically eliminate the failure probability. For large k, using

(1 − 1/2^k)^{n−k+1} ≈ 1 − (n − k + 1)/2^k

we obtain the approximation 1 − P ≈ (δT/s − k + 1)/2^k. Note that for a nonuniform weak random source the synchronization failure probability will be even smaller. We also want to determine the synchronization delay for a uniformly distributed random source. Ideally one would find the value of n for which the expected number of matches becomes one. Unfortunately, the dependencies between sub-windows make it difficult to calculate this expected value directly. Instead we determine the value of n for which the matching probability is very high; this gives us an upper bound on the synchronization delay. The probability of a pattern miss occurring in a window of n bits is η = (1 − 1/2^k)^{n−k+1} ≈ 1 − (n − k + 1)/2^k. Solving for n we obtain n = (1 − η)2^k + k − 1. Ignoring the k − 1 term, for a uniform random source the synchronization delay will be

θ ≈ (1 − η)2^k · s / T.    (2)

Obviously, we would like to keep k as short as possible to minimize the synchronization delay. On the other hand, we need to make it sufficiently large to reduce the synchronization failure probability. These two constraints will determine the actual length of the synchronization pattern.
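The trade-off between failure probability and delay is easy to explore numerically. The sketch below, with hypothetical parameter names, evaluates the failure probability of equation (1) and the delay bound of equation (2):

```python
def sync_failure_prob(k: int, T: float, s: int, delta: float) -> float:
    """Equation (1): probability that the k-bit pattern appears within the
    delta-second clock-uncertainty window on one sub-channel."""
    n = delta * T / s  # bits broadcast per sub-channel in delta seconds
    return 1.0 - (1.0 - 2.0 ** -k) ** (n - k + 1)

def sync_delay(k: int, T: float, s: int, eta: float = 0.01) -> float:
    """Equation (2): synchronization delay theta for miss probability eta."""
    return (1.0 - eta) * 2.0 ** k * s / T
```

For instance, with k = 32, T = 10^13 bits/sec, s = 100 sub-channels and δ = 1 µs (the values used in Section 7), the failure probability is on the order of 10^-5 while the delay stays in the tens of milliseconds, illustrating why a moderate k suffices.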

5

Coping with Errors

The authenticated delivery of the random stream to the users is crucial, since an adversary (or random errors) may corrupt bits of the random stream, in which case the bits extracted at the two ends of the communication may differ. Fortunately, the sampling step of the OTP generation method is flexible: corrupted samples can simply be discarded and new bits sampled. A corruption may be detected by a simple integrity check mechanism. The super-node and the common nodes are assumed to share a secret key that can be used for the integrity check. The super-node computes a message digest of the random stream under the shared key and broadcasts it as part of the random stream. The common nodes, having the same secret key, can easily check the integrity of the random stream they observed. Here it is important that the message digest

algorithm is sufficiently fast to match the rate of the random stream. Message authentication codes (MACs) [9, 10] allow for low-power, low-area, and high-speed implementations and therefore are suitable for this application. Another difficulty the common nodes may face is locating the message digest in the random stream when the stream has no framed structure. This difficulty can easily be overcome by employing a technique similar to the one used for synchronization. The super-node searches for a pattern in the random stream. When it observes the first occurrence of the pattern it starts computing a running MAC of the random stream, and it stops the computation when it observes the next occurrence of the pattern. The resulting digest is broadcast after the pattern. With the next occurrence of the pattern the super-node starts the subsequent message digest. Any common node can then perform the integrity check of the random stream by calculating the same message digest between any two occurrences of the pattern. In order to develop a proper integrity-ensuring mechanism we start by assuming a uniformly distributed error of rate e. Then the probability of successfully collecting t bits from a window of w bits is P_suc = (1 − e)^t. Since (1 − e) is less than one, even for small error rates the success probability decreases exponentially with the number of sampled bits t. This poses a serious problem. To overcome this difficulty we develop a simple technique which checks the integrity of the random stream in sub-windows delimited by a pattern. If the integrity of a sub-window cannot be verified, the entire sub-window is dropped and the bits collected from it are marked as corrupted. When the collection process is over, the two parties exchange lists of corrupted bits. In this scheme the integrity of each sub-window is ensured separately.
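The running-MAC scheme described above can be sketched as follows. HMAC-SHA256 and the byte-aligned two-byte delimiter are stand-ins chosen for illustration; the paper envisions fast MACs such as MMH or UMAC [9, 10] over a bit-oriented stream.

```python
import hashlib
import hmac

PATTERN = b"\xa5\xc3"  # illustrative delimiter pattern (byte-aligned here)

def sub_window_macs(stream: bytes, key: bytes) -> list:
    """Split the stream at occurrences of PATTERN and return a MAC tag for
    each fully delimited sub-window, mimicking the super-node's running MAC
    that starts and stops at consecutive pattern occurrences."""
    tags = []
    start = stream.find(PATTERN)
    while start != -1:
        nxt = stream.find(PATTERN, start + len(PATTERN))
        if nxt == -1:
            break  # trailing partial sub-window: no tag yet
        window = stream[start + len(PATTERN):nxt]
        tags.append(hmac.new(key, window, hashlib.sha256).digest())
        start = nxt
    return tags
```

A common node recomputes the same tags from its own view of the stream and compares them with the broadcast tags; any mismatched sub-window is dropped and its bits marked as corrupted.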
The probability of a sub-window of size n_sw being received correctly is P_sw = (1 − e)^{n_sw}. Note that for a j-bit pattern we expect the sub-window size to be n_sw = 2^j. Hence, even for moderate-length patterns we obtain a very large sub-window and therefore a relatively small success probability. The pattern length should be selected so as to yield a high success probability for the given error rate of the channel. For e ≪ 1 and large n_sw the success probability can be approximated as P_sw ≈ 1 − n_sw · e. Since some of the sampled bits will be dropped, collecting t bits almost always yields fewer than t useful bits. Therefore, we employ an oversampling strategy. Assuming we sample a total of zt bits (where z > 1) spread over x sub-windows, each sub-window survives with probability P_sw, so on average (P_sw · x)(zt/x) = zt · P_sw uncorrupted bits remain. We want zt · P_sw = t, and therefore t_s = t/P_sw = t/(1 − n_sw · e) samples should be taken to obtain t uncorrupted bits on average.
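A small helper, under the same assumptions (uniform error rate e, j-bit delimiter pattern), computes the required oversampling factor z = 1/P_sw:

```python
def oversampling_rate(e: float, pattern_len: int) -> float:
    """Oversampling factor z = 1/P_sw needed so that, after dropping
    corrupted sub-windows of expected size 2**pattern_len, t good bits
    remain on average (i.e. z*t samples yield t uncorrupted bits)."""
    n_sw = 2 ** pattern_len       # expected sub-window size for a j-bit pattern
    p_sw = (1.0 - e) ** n_sw      # probability a sub-window arrives intact
    return 1.0 / p_sw
```

With e = 10^-6 and a 16-bit pattern this evaluates to roughly 1.07, matching the ~7% oversampling figure derived in Section 7.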

6

The OTP Generator

In this section we outline the one-time-pad generator module, which is the unit all common nodes possess and use to collect bits off the random stream. Since this unit will be used in common nodes, it is crucial that it be implemented in the most efficient manner. We follow the sample-then-extract strategy proposed by

Vadhan [5] since it provides the most promising implementation. In particular, the construction of a locally computable extractor makes perfect sense from an implementation point of view, as it minimizes the number of bits that need to be collected off the random stream. We follow the same strategy to construct an OTP generator. To sample the stream, a number of random bits (which may be considered part of the key) are needed. An efficient sampler may be constructed using a random walk on an expander graph. One such explicit expander graph construction was given by Gabber and Galil [11]. Their construction builds the expander as a bipartite graph (U, V) where the sets U and V contain an identical list of vertices labeled with elements of Z_n × Z_n. Each vertex in U is a neighbor of (shares an edge with) exactly 5 vertices in V. A random walk on the graph is initiated by randomly picking a starting node. The following node is determined by randomly picking one of the 5 possible edges to follow. This leads to the next node, which is then treated as the new starting node. The walk continues until sufficiently many nodes have been visited and their indices recorded. In the Gabber and Galil [11] expander construction the edges are chosen by evaluating a randomly picked function from a set of five fixed functions. The input to each function is the label (∈ Z_n × Z_n) of the current node and the output is the label of the chosen neighbor node. To make the selection, a linear number of random bits are needed: 2 log n random bits for the starting node, and log 5 random bits per step to select a function. To create a set of t indices, 2 log n + (t − 1) log 5 random bits are needed. Once t bits are sampled using the indices generated by the random walk, a strong randomness extractor is used to extract the one-time pad from the weak source. For this an extractor E : {0, 1}^t × {0, 1}^r → {0, 1}^m is needed.
Such an extractor extracts m bits that are ε-close to uniform from a t-bit input string of certain min-entropy. In Vadhan’s construction [5] the length of the t-bit input string is related to the m-bit output string by t = m + log(1/ε). Here ε determines how close the one-time pad is to a random string; it therefore determines the security level and must be chosen to be a very small quantity (e.g. ε = 2^−128). To construct such an extractor one may utilize 2-wise independent hash function families, a.k.a. universal hash families [12–14, 9]. It is shown in [15] that universal hash families provide close-to-ideal extractor constructions. An efficient construction based on Toeplitz matrices [16] is especially attractive for our application¹. The hash family is defined in terms of a matrix-vector product over the binary field GF(2). A Toeplitz matrix of size m × t is filled with random bits. This requires only t + m − 1 random bits, since the first row and the first column completely define a Toeplitz matrix. The input to the hash function is the t-bit string, which is assembled into a column vector and multiplied with the matrix; the output of the hash function is the resulting column vector of m bits. In our setting we first want to create indices i ∈ [1, w] which will be used to sample bits from a window of size w. In the expander construction

¹ A variation on this theme based on LFSRs [13] may also be of interest for practical applications.

the number of vertices is determined as n = √w, so that the graph has n² = w vertices. To create a set of t indices, 2 log n + (t − 1) log 5 random bits are needed. For the extractor another t + m − 1 = 2m + log(1/ε) − 1 random bits are required. To generate m output bits that are ε-close to the uniform distribution it is thus sufficient to use a total of

log w + 4 log(1/ε) + 5m    (3)

random bits.
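The sampler and extractor described above can be sketched as follows. The five edge maps in the random walk are illustrative stand-ins, not the exact Gabber–Galil functions, and Python's random module replaces the key-derived random bits; the Toeplitz multiplication over GF(2) follows the first-row/first-column definition given above.

```python
import random

def walk_indices(w, t, rng):
    """Sample t indices in [0, w) via a random walk on a 5-regular graph over
    Z_n x Z_n with n = sqrt(w). The five edge maps are illustrative."""
    n = int(w ** 0.5)
    x, y = rng.randrange(n), rng.randrange(n)   # start node: 2 log n bits
    moves = [lambda x, y: (x, y),
             lambda x, y: (x, (x + y) % n),
             lambda x, y: (x, (x + y + 1) % n),
             lambda x, y: ((x + y) % n, y),
             lambda x, y: ((x + y + 1) % n, y)]
    idx = []
    for _ in range(t):
        idx.append(x * n + y)                   # flatten label to [0, n*n)
        x, y = moves[rng.randrange(5)](x, y)    # each step: log 5 bits
    return idx

def toeplitz_extract(first_row, first_col, x_bits):
    """Multiply the m x t Toeplitz matrix defined by its first row (length t)
    and first column (length m) with the t-bit vector x_bits over GF(2)."""
    t, m = len(first_row), len(first_col)
    out = []
    for i in range(m):
        acc = 0
        for j in range(t):
            # entry (i, j) of a Toeplitz matrix depends only on j - i
            bit = first_row[j - i] if j >= i else first_col[i - j]
            acc ^= bit & x_bits[j]
        out.append(acc)
    return out
```

A production implementation would compute the matrix-vector product with word-level XORs (or an LFSR variant [13]) rather than bit by bit, but the GF(2) arithmetic is the same.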

7

An Example

In this section we work out actual parameter values in order to see whether realistic implementation parameters can be achieved. The most stringent assumption the BSM makes is that the attacker has limited storage space and therefore may collect only a subset of bits from a transmission channel when the data transmission rate is sufficiently high. We assume an attacker is financially limited to $100 million. Including the hardware cost and the engineering cost of extending the storage with additional hardware to read the channel, we roughly estimate the cost to be $10 per Gigabyte, which yields a storage bound (and window size) of w = 10^7 Gigabytes. The window size is fixed and is determined by the adversary’s storage capacity. However, by spreading a window over time it is possible to reduce the transmission rate, although this also means that two nodes have to wait longer before they can establish a new session. We choose a transmission rate of T = 10 Terabits/sec, which we believe is practical considering that this rate will be achieved collectively by many sub-channels, e.g. s = 100 sub-channels. Under this assumption a session (or sampling window) lasts γ = w/T = 8000 seconds, or 2 hours 13 minutes. This is how long Alice has to wait after synchronization before she can transmit the ciphertext. If this latency is not acceptable it can be decreased by increasing the transmission rate; another alternative is to precompute the OTP ahead of time and store it until needed. Furthermore, assume that the clock discrepancy between any two nodes is δ = 1 µs. This is a realistic assumption, since clock synchronization methods such as GPS time transfer can provide sub-microsecond accuracy [8]. For this value we want to avoid synchronization failure in a window of size n = δT/s = 10^5 bits. For k ≥ 32 the failure probability is found to be less than 0.01% using equation (1).
Picking k = 32 and η = 0.01 and using equation (2), we determine the synchronization delay as θ = 43 ms. For a window size of w = 10^7 Gigabytes and ε = 2^−128, using equation (3) we determine the key length as r = 1080 + 5m, where m is the message length. We summarize the results in the table below. We must also take errors into account. We assume an error rate of e = 10^−6. First we want to maximize the probability of a successful reception of the synchronization pattern. For k = 32 the success probability becomes P = (1 − e)^k = 0.99996. To delimit sub-windows we choose a fixed pattern of length l = 16. The size of a sub-window is then n_sw = 2^16 = 65,536. This gives a probability of receiving a sub-window correctly of P_sw = (1 − e)^{2^l} = 0.936. The oversampling rate is found as 1/P_sw = 1.068. Hence, it is sufficient to oversample by only 7%.
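The numbers in this example can be reproduced directly from equations (1) and (2) and the sub-window analysis of Section 5 (assuming 1 Gigabyte = 8 × 10^9 bits):

```python
# Parameters from the example in this section
w_bits = 1e7 * 8e9   # storage bound: 10^7 Gigabytes expressed in bits
T = 1e13             # transmission rate: 10 Terabits/sec
s = 100              # number of sub-channels
delta = 1e-6         # clock precision: 1 microsecond
e = 1e-6             # channel error rate

session = w_bits / T                        # sampling-window duration (sec)
n = delta * T / s                           # bits per sub-channel in delta sec
fail = 1 - (1 - 2**-32) ** (n - 32 + 1)     # eq. (1) with k = 32
theta = (1 - 0.01) * 2**32 * s / T          # eq. (2) with eta = 0.01
p_sw = (1 - e) ** 2**16                     # sub-window success probability
```

Evaluating these expressions gives a session of 8000 seconds, a synchronization failure probability well below 0.01%, a delay of about 43 ms, and a sub-window success probability of about 0.94, in agreement with the figures above.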

Storage bound         10^4 Terabytes
Transmission rate     10 Terabits/sec
Session length        2 hours 13 minutes
Key length            1080 + 5m bits
Clock precision       1 µsec
Synch. failure rate   less than 0.01%
Synch. delay          43 msec

8

Attacks and Failures

In this section we outline some of the shortcomings of the proposed protocol:

1. Overwriting (jamming) of the random stream. The adversary may attack the channel by overwriting the public random source with a predictable, known stream generated using a pseudo-random number generator. Since the adversary can then regenerate the stream from a short seed, the storage bound is invalidated and the unconditional security property is lost. Preventing this kind of attack is difficult, but users can detect jamming attacks by making use of a MAC. For this it is necessary that the users and the super-node share a secret key. The MAC is regularly computed and broadcast to the users as described earlier. For the authentication of the channel the users have to carry the burden of computing a MAC whenever they want to communicate. The MAC computation takes place concurrently with OTP generation. Since the super-node does not know which bits will be sampled by the users, the entire broadcast window needs to be authenticated.
2. Denial-of-service (DoS) attacks on the nodes. The attacker may overwhelm nodes with transmission and synchronization requests. Since requests are sent over an authenticated channel this will not cause the nodes to generate OTPs; however, the nodes may be overwhelmed by continuously trying to authenticate the sources of the requests. The protocol is not protected against DoS attacks.
3. Loss of synchronization. Due to clock drift it is possible that synchronization is lost. In this case the nodes will have to resynchronize and retransmit all information.
4. Transmission errors. Our protocol handles transmission errors by implementing an integrity check on sub-windows of the random stream. The described method works only for moderate to very low error rates: for an error rate higher than e = 10^−3, the sub-window size needed to provide a high success rate shrinks below a reasonable size.
The error rate may be reduced by employing error detection/correction schemes.

9

Conclusion

We have made a first attempt at constructing a secure communication protocol based on the bounded storage model, which facilitates a unique encryption method

that provides unconditional and everlasting security. We described a protocol which defines means for successfully establishing and carrying out an encryption session. In particular, we described novel methods for synchronization, handling transmission errors and OTP generation. We showed that such a communication protocol is indeed feasible by providing realistic values for system parameters.

References

1. Maurer, U.: Conditionally-perfect secrecy and a provably-secure randomized cipher. Journal of Cryptology 5 (1992) 53–66
2. Ding, Y.Z., Rabin, M.O.: Hyper-encryption and everlasting security (extended abstract). In: STACS 2002 – 19th Annual Symposium on Theoretical Aspects of Computer Science. Volume 2285 of Lecture Notes in Computer Science, Springer-Verlag (2002) 1–26
3. Aumann, Y., Ding, Y.Z., Rabin, M.O.: Everlasting security in the bounded storage model. IEEE Transactions on Information Theory 48 (2002) 1668–1680
4. Lu, C.J.: Hyper-encryption against space-bounded adversaries from on-line strong extractors. In Yung, M., ed.: Advances in Cryptology — CRYPTO 2002. Volume 2442 of Lecture Notes in Computer Science, Springer-Verlag (2002) 257–271
5. Vadhan, S.: On constructing locally computable extractors and cryptosystems in the bounded storage model. In Boneh, D., ed.: Advances in Cryptology — CRYPTO 2003. Volume 2729 of Lecture Notes in Computer Science, Springer-Verlag (2003) 61–77
6. Kolata, G.: The key vanishes: Scientist outlines unbreakable code. New York Times (2001)
7. Cromie, W.J.: Code conquers computer snoops: Offers promise of ‘everlasting’ security for senders. Harvard University Gazette (2001)
8. U.S. Naval Observatory: GPS timing data & information. http://tycho.usno.navy.mil/gps_datafiles.html (2004)
9. Halevi, S., Krawczyk, H.: MMH: Software message authentication in the Gbit/second rates. In: 4th Workshop on Fast Software Encryption. Volume 1267 of Lecture Notes in Computer Science, Springer-Verlag (1997) 172–189
10. Black, J., Halevi, S., Krawczyk, H., Krovetz, T., Rogaway, P.: UMAC: Fast and secure message authentication. In: Advances in Cryptology — CRYPTO ’99. Volume 1666 of Lecture Notes in Computer Science, Springer-Verlag (1999) 216–233
11. Gabber, O., Galil, Z.: Explicit constructions of linear-sized superconcentrators. Journal of Computer and System Sciences 22 (1981) 407–420
12. Carter, J.L., Wegman, M.: Universal classes of hash functions. Journal of Computer and System Sciences 18 (1978) 143–154
13. Krawczyk, H.: LFSR-based hashing and authentication. In: Advances in Cryptology — CRYPTO ’94. Volume 839 of Lecture Notes in Computer Science, Springer-Verlag (1994) 129–139
14. Rogaway, P.: Bucket hashing and its applications to fast message authentication. In: Advances in Cryptology — CRYPTO ’95. Volume 963 of Lecture Notes in Computer Science, Springer-Verlag (1995) 313–328
15. Barak, B., Shaltiel, R., Tromer, E.: True random number generators secure in a changing environment. In Koç, Ç.K., Paar, C., eds.: Workshop on Cryptographic Hardware and Embedded Systems — CHES 2003, Springer-Verlag (2003) 166–180
16. Mansour, Y., Nisan, N., Tiwari, P.: The computational complexity of universal hashing. In: 22nd Annual ACM Symposium on Theory of Computing, ACM Press (1990) 235–243

Abstract. Proposed by Maurer the bounded storage model has received much academic attention in the recent years. Perhaps the main reason for this attention is that the model facilitates a unique private key encryption scheme called hyper-encryption which provides everlasting unconditional security. So far the work on the bounded storage model has been largely on the theoretical basis. In this paper, we make a first attempt to outline a secure communication protocol based on this model. We describe a protocol which defines means for successfully establishing and carrying out an encryption session and address potential problems such as protocol failures and attacks. Furthermore, we outline a novel method for authenticating and ensuring the integrity of a channel against errors. Key Words: Bounded storage model, hyper-encryption, information theoretical security, pervasive networks.

1

Introduction

Proposed by Maurer [1] the Bounded Storage Model (BSM) has recently received much attention. In this model each entity has a bounded storage capacity and there is a single source which generates a very fast stream of truly random bits. Whenever Alice and Bob want to communicate they simply tap into the random stream and collect a number of bits from a window of bits according to a short shared secret key. Once a sufficient number of random bits are accumulated on both sides, encryption is performed by simply using one time pad encryption. The ciphertext is sent after the window of bits has passed. What keeps an adversary from searching through the random bits is the limitation on the storage space. The model assumes that the adversary may not store the entire sampling window and therefore has only partial information of the sampled random stream. A major advance was achieved by Aumann, Ding and Rabin [2, 3] who showed that the BSM provides so-called “everlasting security”, which simply means that even if the key is compromised, any messages encrypted using this private key

prior to the compromise remain perfectly secure. Furthermore, it was also shown that even if the same key is used repeatedly, the encryption remains provably secure. These two powerful features of this model have attracted sustained academic [1–5] and popular interest [6, 7] despite the apparent practical difficulties. As promising as it seems there are serious practical problems preventing the BSM from immediate employment in real life communication: implementation of a publicly accessible high rate random source, authentication of the random stream generator, resilience to broadcast errors and noise, synchronization of communicating parties, implementation of the one-time pad generator. In this paper we make a first attempt to address these problems.

2

The Network Architecture and Related Assumptions

In this section, we briefly describe the network architecture and explain related assumptions. We envision a network consisting of many common nodes that desire to communicate securely using the random stream provided by a broadcast station. The parties and important entities in the network are described as follows:

– Super-node: The super-node is capable of broadcasting a truly random sequence at an extremely high bit rate that is easily accessible by all common nodes. A typical example of a super-node is a satellite broadcasting such a sequence to its subscribers.

– Random Stream: We refer to the broadcast of the super-node as the random stream. For the protocol in this paper it is sufficient to have a weak random source with sufficient min-entropy as described in [5]. We propose that the super-node transmit the random stream over multiple sub-channels with moderate to high bit rates. Note that an ultra-high bit rate is unlikely to be sustained by a single channel, and the common nodes, due to their constrained nature, could not read it in any case. Also, at high transmission rates it would be difficult to keep the error rate sufficiently small.

– Common Nodes: The common nodes represent the end users in the system who want to communicate securely with each other utilizing the service of the super-node. Compared to the super-node they are more constrained and are only able to read at the speed of the sub-channels. We also assume that a common node can read any of the sub-channels at any time. Furthermore, during the generation of the one-time pad (OTP), we allow our nodes to hop from one sub-channel to another. The hopping pattern is determined by the shared secret between the two nodes. Therefore, an adversary has to store all the data broadcast in all channels in order to mount a successful attack.

– Adversary: An adversary is a party with malicious intent, such as accessing confidential information, injecting false information into the network, or disrupting the protocol. In our protocol the adversary is computationally unlimited (unlike in other security protocols) but is limited in storage capacity.
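To make the sub-channel hopping concrete, the following is a minimal sketch of how a hopping pattern could be derived from the shared secret. The paper does not prescribe a construction; the HMAC-based derivation, the function name `hop_sequence`, and the session identifier are all illustrative assumptions.

```python
import hmac
import hashlib

def hop_sequence(shared_key: bytes, session_id: bytes, num_hops: int, s: int):
    """Derive a sub-channel hopping pattern from the shared secret.

    Hypothetical construction: one hop index per counter value, produced
    by an HMAC-based pseudorandom function keyed with the shared secret.
    Both parties compute the same sequence; an eavesdropper without the
    key must record all s sub-channels.
    """
    hops = []
    for i in range(num_hops):
        tag = hmac.new(shared_key, session_id + i.to_bytes(4, "big"),
                       hashlib.sha256).digest()
        # Reduce the 256-bit tag modulo the number of sub-channels s.
        hops.append(int.from_bytes(tag, "big") % s)
    return hops

pattern = hop_sequence(b"shared-secret", b"session-0", num_hops=8, s=100)
```

Since the derivation is deterministic in the key and session identifier, the two nodes follow the same channel schedule without further communication.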

3

The Encryption Protocol

In this section, we briefly outline the steps in the proposed protocol. Basically, there are four steps the two parties have to take:

1. Setup: Initially the two communicating parties, i.e. Alice and Bob, have to agree on a shared secret which will be needed to derive the OTP from the random stream. For this, they can use a public or secret key based key agreement protocol, or simply pre-distribute the secret keys. In the BSM pre-distributing keys makes perfect sense, since in the hyper-encryption scheme keys can be reused with only linear degradation in the security level. The two parties synchronize their clocks before they start searching for a synchronization pattern in the random stream. For this they can use a publicly available time service with good accuracy, such as the Network Time Protocol (NTP) or GPS time transfer [8].

2. Communication request: Alice issues a communication request to Bob. The request includes Alice's identity, the length of the message Alice will send, and the time Alice will start looking for the synchronization pattern. Alice may send the request over a public authenticated channel.

3. Synchronization: The nodes search for a fixed synchronization pattern in the random stream in order to make sure they start a sampling window at the same bit position in the random stream. Alice and Bob read a predetermined number of bits from the random stream once they are synchronized. They hash these bits and compare the hashes over an authenticated channel to make sure they are truly synchronized.

4. Generation of the OTP: Alice and Bob start sampling and extracting the OTP (see Section 6) from the random stream. The length of the stream they collect is the same as the total length of the messages they intend to exchange. The duration of the sampling is an important system parameter (the window size) and is determined by the storage capacity of the attacker.
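The four steps can be sketched as follows; the message fields and function names are illustrative, not taken from the paper. Step 3 is shown as a simple pattern search over the observed stream followed by a hash comparison.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class CommRequest:
    """Step 2: the communication request (field names are illustrative)."""
    sender_id: str        # Alice's identity
    message_length: int   # total length of the message Alice will send
    search_start: float   # time at which Alice starts the pattern search

def post_sync_sample(stream_bits: str, pattern: str, check_bits: int):
    """Step 3: locate the first occurrence of the synchronization pattern
    and return the next check_bits bits read off the stream."""
    pos = stream_bits.find(pattern)
    if pos < 0:
        return None
    start = pos + len(pattern)
    return stream_bits[start:start + check_bits]

# Both parties hash the bits they read after the pattern and compare the
# digests over an authenticated channel to confirm true synchronization.
sample = post_sync_sample("001011010111001010", "0111", 6)
digest = hashlib.sha256(sample.encode()).hexdigest()
```

If the digests disagree, the parties observed different stream positions and must restart the search.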

4

Synchronization

We start our treatment by defining the following system parameters:

– T (bits/sec): Bit rate of the random source.
– w (bits): Maximum number of bits an adversary can afford to store.
– r (bits): Key length.
– k (bits): Length of the synchronization pattern.
– m (bits): Maximum length of a message.
– s: Number of sub-channels.
– γ (sec): Duration of a session.
– τ (sec): Maximum delay between any two nodes in the network.
– ε: Bias in the random stream, i.e. the distribution of the random source is ε-close to the uniform distribution.
– δ (sec): Maximum allowable difference between any two clocks in the system.
– θ (sec): Synchronization delay.
– e: Error rate in the random stream.

The synchronization of the sender and the receiver nodes is a serious problem that must be properly addressed, as the random source rate is very high. Especially when the random stream is broadcast as an unstructured stream of bits with no regular frames, special techniques must be used to allow two users to synchronize before the bit extraction phase may begin. Two communicating nodes must start sampling the broadcast channel at exactly the same time. Even if they do, any drift between the clocks of the two nodes or any discrepancy in the broadcast delays to the two nodes may result in synchronization loss. We propose a synchronization technique that is easily applicable in practical settings. In our analysis we do not consider the discrepancy in the broadcast delays, which can easily be taken into account by adding it to the clock drift parameter.

The synchronization of the nodes is achieved through the use of a fixed binary sequence p ∈ {0, 1}^k, called the synchronization pattern. Since the super-node broadcasts a weakly random binary stream, the synchronization pattern p appears in the broadcast channel in a fixed time interval with probability proportional to the randomness in the stream. Two nodes that want to communicate securely start searching for p at an agreed time. Synchronization is established once they observe the pattern p for the first time after this agreed time. We expect the clocks of the two nodes to be synchronized before they start searching for the synchronization pattern. However, we cannot assume perfectly synchronized clocks, and therefore there is a probability of the node with the faster clock finding a match while the other node is still waiting. We will quantify the synchronization failure probability later in this section.
It is important to provide some insight into how to choose the length of the synchronization pattern and how to estimate the time needed for the two parties to synchronize. The two nodes are loosely synchronized; thus the maximum clock difference between any two nodes cannot be larger than a pre-specified value δ. Our aim is to decrease the probability of the synchronization pattern appearing within the period of δ seconds after the request is sent.

The probability of a fixed pattern of length k appearing in a randomly picked bit string of length n (with n ≥ k) may be found by considering the n − k + 1 sub-windows of length k within the n-bit string. The matching probability in a single sub-window is 1/2^k. The probability of the pattern not being matched in any of the sub-windows is approximated as P ≈ (1 − 1/2^k)^(n−k+1). In the approximation we assumed that the sub-windows are independent, which is not correct. However, the fact that the pattern did not appear in a sub-window eliminates only a few possible values in the adjacent (one-bit shifted) sub-window. Hence the probability of the pattern not appearing in a window, conditioned on the pattern not appearing in one of the previous windows, is roughly (1 − 1/2^k). Any sub-channel broadcasts n = δT/s bits in δ seconds. Therefore, the probability of a synchronization failure (i.e. the synchronization pattern appearing within δ seconds of the agreed start time) is

    1 − P = 1 − (1 − 1/2^k)^(δT/s − k + 1).                    (1)

This probability can be made arbitrarily small by choosing larger values for k. In practice, due to the 1/2^k term inside the parentheses, a relatively small k will suffice to practically eliminate the failure probability. For large k, using

    (1 − 1/2^k)^(n−k+1) ≈ 1 − (n − k + 1)/2^k,

we obtain the approximation 1 − P ≈ (δT/s − k + 1)/2^k. Note that for a nonuniform weak random source the unsuccessful synchronization probability will be even smaller.

We also want to determine the synchronization delay for a uniformly distributed random source. Ideally one would determine the value of n for which the expected number of matches becomes one. Unfortunately, the dependencies between sub-windows make it difficult to calculate this expected value directly. Instead we determine the value of n for which the matching probability is very high; this gives us an upper bound on the synchronization delay. The probability of a pattern miss occurring in a window of n bits is η = (1 − 1/2^k)^(n−k+1) ≈ 1 − (n − k + 1)/2^k. Solving for n we obtain n = (1 − η)2^k + k − 1. Ignoring the k − 1 term, for a uniform random source the average synchronization delay will be

    θ ≈ (1 − η) 2^k s / T.                                     (2)

Obviously, we would like to keep k as short as possible to minimize the synchronization delay. On the other hand, we need to make it sufficiently large to reduce the synchronization failure probability. These two constraints will determine the actual length of the synchronization pattern.
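As a quick numerical check, equations (1) and (2) can be evaluated directly. The parameter values below are only examples; they anticipate the ones used in Section 7.

```python
def sync_failure_prob(k: int, delta: float, T: float, s: int) -> float:
    """Equation (1): probability that the pattern appears within the
    delta-second clock-uncertainty window on one sub-channel."""
    n = delta * T / s
    return 1.0 - (1.0 - 2.0 ** -k) ** (n - k + 1)

def sync_delay(k: int, eta: float, T: float, s: int) -> float:
    """Equation (2): delay until the match probability reaches 1 - eta."""
    return (1.0 - eta) * 2.0 ** k * s / T

T, s = 10e12, 100   # 10 Terabits/sec spread over 100 sub-channels (example)
fail = sync_failure_prob(k=32, delta=1e-6, T=T, s=s)
delay = sync_delay(k=32, eta=0.01, T=T, s=s)
# fail is on the order of 1e-5; delay is roughly 43 ms
```

The two functions make the trade-off explicit: increasing k shrinks the failure probability of (1) but grows the delay of (2) exponentially.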

5

Coping with Errors

The authenticated delivery of the random stream to the users is crucial, since an adversary (or random errors) may corrupt bits of the random stream, and thus the extracted bits at the two sides of the communication may differ. Nevertheless, it is possible to discard corrupted sampled bits, since the sampling step of the OTP generation method is flexible: it is sufficient to sample new bits. A corruption may be detected by a simple integrity check mechanism. The super-node and the common nodes are assumed to share a secret key that can be used for the integrity check. The super-node computes a message digest of the random stream under the shared key and broadcasts it as part of the random stream. The common nodes, having the same secret key, can easily check the integrity of the random stream they observed. Here it is important that the message digest algorithm is sufficiently fast to match the rate of the random stream. Message authentication codes (MACs) [9, 10] allow for low power, low area, and high speed implementations and are therefore suitable for this application.

Another difficulty the common nodes may face is locating the message digest in the random stream when the stream has no framed structure. This difficulty can easily be overcome by employing a technique similar to the method used for synchronization. The super-node searches for a pattern in the random stream. When it observes the first occurrence of the pattern it starts computing a running MAC of the random stream. It stops the computation when it observes the next occurrence of the pattern. The resulting digest is broadcast after the pattern. With the next occurrence of the pattern the super-node starts the subsequent message digest. Any common node can perform the integrity check of the random stream by calculating the same message digest between any two occurrences of the pattern in the random stream.

In order to develop a proper integrity-ensuring mechanism we start by assuming a uniformly distributed error of rate e. The probability of successfully collecting t bits from a window of w bits is then P_suc = (1 − e)^t. Since (1 − e) is less than one, even for small error rates the success probability decreases exponentially with the number of sampled bits t. This poses a serious problem. To overcome this difficulty we develop a simple technique which checks the integrity of the random stream in sub-windows delimited by a pattern. If the integrity of a sub-window cannot be verified, the entire sub-window is dropped and the bits collected from this sub-window are marked as corrupted. When the collection process is over, the two parties exchange lists of corrupted bits. In this scheme the integrity of each sub-window is ensured separately.

The probability of a sub-window of size n_sw being received correctly is P_sw = (1 − e)^(n_sw). Note that for a j-bit pattern we expect the sub-window size to be n_sw = 2^j. Hence, even for moderate-length patterns we obtain a very large sub-window and therefore a relatively small success probability. The pattern length should be selected so as to yield a high success probability for the given error rate of the channel. For e ≪ 1 and large n_sw the success probability can be approximated as P_sw ≈ 1 − n_sw·e. Since some of the sampled bits will be dropped, collecting t bits almost always yields fewer than t useful bits. Therefore, we employ an oversampling strategy. Assuming we sample a total of zt bits (where z > 1) from x sub-windows, each received correctly with probability P_sw, we can expect on average to obtain (P_sw·x)·(zt/x) = zt·P_sw uncorrupted bits. We want zt·P_sw = t, and therefore t_s = t/P_sw = t/(1 − n_sw·e) samples should be taken to obtain t uncorrupted bits on average.
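The sub-window scheme can be sketched as follows. HMAC-SHA256 stands in for the fast MAC the paper leaves unspecified, and the helper names are our own.

```python
import hmac
import hashlib

def verify_subwindows(stream: bytes, pattern: bytes, digests, key: bytes):
    """Split the observed stream at occurrences of the delimiter pattern
    and check the MAC of each sub-window; returns one flag per sub-window.
    Sub-windows failing the check are dropped and their sampled bits are
    marked as corrupted."""
    chunks = stream.split(pattern)[1:-1]   # material between occurrences
    return [hmac.compare_digest(hmac.new(key, c, hashlib.sha256).digest(), d)
            for c, d in zip(chunks, digests)]

def oversampling_rate(n_sw: int, e: float) -> float:
    """z = 1/P_sw, using the approximation P_sw ~ 1 - n_sw * e (small e)."""
    return 1.0 / (1.0 - n_sw * e)
```

For n_sw = 2^16 and e = 10^-6 the approximation gives an oversampling rate of about 1.07, in line with the exact value computed in Section 7.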

6

The OTP Generator

In this section we outline the one-time-pad generator module, which is the unit all common nodes possess and use to collect bits off the random stream. Since this unit will be used in the common nodes, it is crucial that it be implemented in the most efficient manner. We follow the sample-then-extract strategy proposed by Vadhan [5], since it provides the most promising implementation. For instance, the construction of a locally computable extractor makes perfect sense from an implementation point of view, as it minimizes the number of bits that need to be collected off the random stream. We follow the same strategy to construct an OTP generator.

To sample the stream, a number of random bits (which may be considered part of the key) are needed. An efficient sampler may be constructed by using a random walk on an expander graph. One such explicit expander graph construction was given by Gabber and Galil [11]. Their construction builds an expander graph as a bipartite graph (U, V) where the sets U, V contain an identical list of vertices labeled using elements of Z_n × Z_n. Each vertex in U is a neighbor of (shares an edge with) exactly 5 vertices in V. A random walk on the graph is initiated by randomly picking a starting node. The following node is determined by randomly picking one of the 5 possible edges to follow. This leads to the next node, which is now considered the new starting node. The walk continues until sufficiently many nodes have been visited and their indices recorded. In the Gabber and Galil [11] expander construction the edges are chosen by evaluating a randomly picked function from a set of five fixed functions. The input to each function is the label (∈ Z_n × Z_n) of the current node and the output is the label of the chosen neighbor node. To make the selection, a linear number of random bits is needed: 2 log n random bits for the starting node and log 5 random bits to select a function for each following node. To create a set of t indices, 2 log n + (t − 1) log 5 random bits are needed.

Once t bits are sampled using the indices generated by the random walk, a strong randomness extractor is used to extract the one-time pad from the weak source. For this an extractor E(t, m, ε) : {0, 1}^t × {0, 1}^r → {0, 1}^m is needed. Such an extractor extracts m bits that are ε-close to uniform from a t-bit input string of certain min-entropy. In Vadhan's construction [5] the length of the t-bit input string is related to the m-bit output string by t = m + log(1/ε). Here ε determines how close the one-time pad is to a random string; it therefore determines the security level and must be chosen to be a very small quantity (e.g. ε = 2^−128). To construct such an extractor one may utilize 2-wise independent hash function families, a.k.a. universal hash families [12–14, 9]. It is shown in [15] that universal hash families provide close to ideal extractor constructions. An efficient construction based on Toeplitz matrices [16] is especially attractive for our application¹. The hash family is defined in terms of a matrix-vector product over the binary field GF(2). A Toeplitz matrix of size m × t is filled with random bits; this requires only t + m − 1 random bits, since the first row and the first column define a Toeplitz matrix. The input to the hash function is the t bits, which are assembled into a column vector and multiplied with the matrix to obtain the output of the hash function as a column vector containing the m output bits. In our setting we first want to create indices i ∈ [1, w] which will be used to sample bits from a window of size w. In the expander construction

¹ A variation to this theme based on LFSRs [13] may also be of interest for practical applications.

the number of vertices is determined as n = √w. To create a set of t indices, 2 log n + (t − 1) log 5 random bits are needed. For the extractor another set of t + m − 1 = 2m + log(1/ε) − 1 random bits is required. To generate m output bits which are ε-close to the uniform distribution it is sufficient to use a total of

    log w + 4 log(1/ε) + 5m                                    (3)

random bits.
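A compact sketch of the sample-then-extract pipeline follows. The neighbor maps are the usual presentation of the Gabber–Galil construction, and the Toeplitz hash is the GF(2) matrix–vector product described above; the function names and the use of Python's `secrets` module are our own choices.

```python
import secrets

def expander_walk(n: int, t: int):
    """Random walk on the degree-5 Gabber-Galil expander over Z_n x Z_n.
    The start vertex consumes 2 log n random bits and each step consumes
    log 5 bits (function choice). Returns t indices in [0, n*n)."""
    x, y = secrets.randbelow(n), secrets.randbelow(n)
    indices = []
    for _ in range(t):
        indices.append(x * n + y)
        f = secrets.randbelow(5)          # pick one of the 5 neighbor maps
        if f == 1:   y = (x + y) % n
        elif f == 2: y = (x + y + 1) % n
        elif f == 3: x = (x + y) % n
        elif f == 4: x = (x + y + 1) % n  # f == 0 keeps the vertex
    return indices

def toeplitz_extract(sample_bits, m: int, seed_bits):
    """Universal hash via an m x t Toeplitz matrix over GF(2).
    seed_bits (t + m - 1 bits) fixes the first row and first column;
    entry (i, j) of the matrix is seed_bits[i - j + t - 1]."""
    t = len(sample_bits)
    assert len(seed_bits) == t + m - 1
    out = []
    for i in range(m):
        bit = 0
        for j in range(t):   # GF(2) dot product: XOR of ANDed bits
            bit ^= seed_bits[i - j + t - 1] & sample_bits[j]
        out.append(bit)
    return out

# Toy run: pick 4 sample positions, then extract m = 2 pad bits.
idx = expander_walk(n=16, t=4)
pad = toeplitz_extract([1, 0, 1, 1], m=2, seed_bits=[1, 0, 0, 1, 1])
```

In a real deployment n = √w, the walk indices select bits out of the sampling window, and the Toeplitz seed is part of the shared key.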

7

An Example

In this section we come up with actual parameter values in order to see whether realistic implementation parameters can be achieved. The most stringent assumption the BSM makes is that the attacker has limited storage space and therefore may collect only a subset of the bits from a transmission channel when the data transmission rate is sufficiently high. We assume an attacker who is financially limited to $100 million. Including the hardware cost and the engineering cost of extending the storage with additional hardware to read the channel, we roughly estimate the cost to be $10 per Gigabyte, which yields a storage bound (and window size) of w = 10^7 Gigabytes. The window size is fixed and is determined by the adversary's storage capacity. However, by spreading a window over time it is possible to reduce the transmission rate, although this also means that two nodes have to wait longer before they can establish a new session. We choose a transmission rate of T = 10 Terabits/sec, which we believe is practical considering that this rate will be achieved collectively by many sub-channels, e.g. s = 100 sub-channels. Under this assumption a session (or sampling window) lasts γ = w/T = 8000 seconds, or 2 hours 13 minutes. This is the duration Alice has to wait after synchronization before she can transmit the ciphertext. If this latency is not acceptable it is possible to decrease it by increasing the transmission rate. Another alternative is to precompute the OTP ahead of time and store it until needed.

Furthermore, assume that the clock discrepancy between any two nodes is δ = 1 µs. This is a realistic assumption, since clock synchronization methods such as GPS time transfer can provide sub-microsecond accuracy [8]. For this value we want to avoid synchronization failure in a window of size n = δT/s = 10^5 bits. For k ≥ 32 the failure probability is found to be less than 0.01% using equation (1). Picking k = 32 and η = 0.01 and using equation (2), we determine the synchronization delay as θ = 43 ms. For a window size of w = 10^7 Gigabytes and ε = 2^−128, using equation (3) we determine the key length as r = 1080 + 5m, where m is the message length.

We must also take errors into account. We assume an error rate of e = 10^−6. First we want to maximize the probability of successful reception of the synchronization pattern. For k = 32 the success probability becomes P = (1 − e)^k = 0.99996. To delimit sub-windows we choose a fixed pattern of length l = 16. The size of a sub-window is then n_sw = 2^16 = 65,536. This gives a probability of receiving a sub-window correctly of P_sw = (1 − e)^(2^l) = 0.936. The oversampling rate is found as 1/P_sw = 1.068. Hence, it is sufficient to oversample by only 7%. We summarize the results as follows.
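The figures in this section can be reproduced with a few lines of arithmetic; the values below are the section's own assumptions.

```python
w_bits = 1e7 * 8e9          # storage bound: 10^7 Gigabytes expressed in bits
T, s   = 10e12, 100         # 10 Terabits/sec over 100 sub-channels
gamma  = w_bits / T         # session length: 8000 s = 2 h 13 min
n      = 1e-6 * T / s       # bits per sub-channel in delta = 1 microsecond
e      = 1e-6               # assumed channel error rate
P      = (1 - e) ** 32      # k = 32 sync pattern received without error
n_sw   = 2 ** 16            # sub-window delimited by an l = 16 bit pattern
P_sw   = (1 - e) ** n_sw    # probability a sub-window is received correctly
z      = 1 / P_sw           # oversampling rate, about 1.068
```

These recomputations confirm the session length, per-channel window size, and oversampling rate quoted above.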

Storage bound         10^4 Terabytes
Transmission rate     10 Terabits/sec
Session length        2 hours 13 minutes
Key length            1080 + 5m bits
Clock precision       1 µsec
Synch. failure rate   less than 0.01%
Synch. delay          43 msec

8

Attacks and Failures

In this section we outline some of the shortcomings of the proposed protocol:

1. Overwriting (jamming) of the random stream: The adversary may attack the channel by overwriting the public random source with a predictable, known stream generated using a pseudorandom number generator. Since the adversary may now regenerate the stream from the short key, the storage bound is invalidated and the unconditional security property is lost. Preventing this kind of attack is difficult, but users can detect jamming attacks by making use of a MAC. For this it is necessary that the users and the super-node share a secret key. The MAC is regularly computed and broadcast to the users as described earlier. For the authentication of the channel the users have to carry the burden of computing a MAC whenever they want to communicate. The MAC computation takes place concurrently with OTP generation. Since the super-node does not know which bits will be sampled by the users, the entire broadcast window needs to be authenticated.

2. Denial of service (DoS) attacks on the nodes: The attacker may overwhelm nodes with transmission and synchronization requests. Since requests are sent over an authenticated channel, this will not cause the nodes to generate OTPs. However, the nodes may be overwhelmed by continuously trying to authenticate the source of the requests. The protocol is not protected against DoS attacks.

3. Loss of synchronization: Due to clock drift it is possible that synchronization is lost. In this case the nodes will have to resynchronize and retransmit all information.

4. Transmission errors: Our protocol handles transmission errors by implementing an integrity check on sub-windows of the random stream. The described method works only for moderate to very low error rates. For instance, for an error rate higher than e = 10^−3 the sub-window size shrinks below a reasonable size for providing a high success rate. The error rate may be reduced by employing error detection/correction schemes.

9

Conclusion

We have made a first attempt at constructing a secure communication protocol based on the bounded storage model, which facilitates a unique encryption method that provides unconditional and everlasting security. We described a protocol which defines the means for successfully establishing and carrying out an encryption session. In particular, we described novel methods for synchronization, handling transmission errors, and OTP generation. We showed that such a communication protocol is indeed feasible by providing realistic values for the system parameters.

References

1. Maurer, U.: Conditionally-perfect secrecy and a provably-secure randomized cipher. Journal of Cryptology 5 (1992) 53–66
2. Ding, Y.Z., Rabin, M.O.: Hyper-encryption and everlasting security (extended abstract). In: STACS 2002 – 19th Annual Symposium on Theoretical Aspects of Computer Science. Volume 2285 of Lecture Notes in Computer Science, Springer-Verlag (2002) 1–26
3. Aumann, Y., Ding, Y.Z., Rabin, M.O.: Everlasting security in the bounded storage model. IEEE Transactions on Information Theory 48 (2002) 1668–1680
4. Lu, C.J.: Hyper-encryption against space-bounded adversaries from on-line strong extractors. In Yung, M., ed.: Advances in Cryptology – CRYPTO 2002. Volume 2442 of Lecture Notes in Computer Science, Springer-Verlag (2002) 257–271
5. Vadhan, S.: On constructing locally computable extractors and cryptosystems in the bounded storage model. In Boneh, D., ed.: Advances in Cryptology – CRYPTO 2003. Volume 2729 of Lecture Notes in Computer Science, Springer-Verlag (2003) 61–77
6. Kolata, G.: The key vanishes: Scientist outlines unbreakable code. New York Times (2001)
7. Cromie, W.J.: Code conquers computer snoops: Offers promise of 'everlasting' security for senders. Harvard University Gazette (2001)
8. U.S. Naval Observatory: GPS timing data & information. http://tycho.usno.navy.mil/gps_datafiles.html (2004)
9. Halevi, S., Krawczyk, H.: MMH: Software message authentication in the Gbit/second rates. In: 4th Workshop on Fast Software Encryption. Volume 1267 of Lecture Notes in Computer Science, Springer-Verlag (1997) 172–189
10. Black, J., Halevi, S., Krawczyk, H., Krovetz, T., Rogaway, P.: UMAC: Fast and secure message authentication. In: Advances in Cryptology – CRYPTO '99. Volume 1666 of Lecture Notes in Computer Science, Springer-Verlag (1999) 216–233
11. Gabber, O., Galil, Z.: Explicit constructions of linear-sized superconcentrators. Journal of Computer and System Sciences 22 (1981) 407–420
12. Carter, J.L., Wegman, M.: Universal classes of hash functions. Journal of Computer and System Sciences 18 (1979) 143–154
13. Krawczyk, H.: LFSR-based hashing and authentication. In: Advances in Cryptology – CRYPTO '94. Volume 839 of Lecture Notes in Computer Science, Springer-Verlag (1994) 129–139
14. Rogaway, P.: Bucket hashing and its applications to fast message authentication. In: Advances in Cryptology – CRYPTO '95. Volume 963 of Lecture Notes in Computer Science, Springer-Verlag (1995) 313–328
15. Barak, B., Shaltiel, R., Tromer, E.: True random number generators secure in a changing environment. In Koç, Ç.K., Paar, C., eds.: Workshop on Cryptographic Hardware and Embedded Systems – CHES 2003, Springer-Verlag (2003) 166–180
16. Mansour, Y., Nisan, N., Tiwari, P.: The computational complexity of universal hashing. In: 22nd Annual ACM Symposium on Theory of Computing, ACM Press (1990) 235–243