Energy-Efficient Data Dissemination for Wireless Sensor Networks
Marcel Busse, Thomas Haenselmann, and Wolfgang Effelsberg
Computer Science IV - University of Mannheim, Seminargebäude A5, D-68159 Mannheim, Germany
{busse, haenselmann, effelsberg}@informatik.uni-mannheim.de

Abstract
In order to disseminate a large amount of data through a sensor network, it is common to split the data into small-sized chunk packets. If the data is additionally encoded by a forward error correction (FEC) code, missed chunks can be recovered. Fountain codes are a special kind of FEC code with the property that the sender provides the data in a virtually endless stream by combining original chunks at random. No matter which chunks get lost, each receiver only needs any k chunks from the stream. In broadcast scenarios, fountain codes have the additional advantage that only little redundancy is required, even if several receivers have missed different chunks. We show the benefit of fountain codes in wireless sensor networks in comparison to raw transmissions and other FEC codes. To support the actual data dissemination, we propose two generic distributed protocols: an acknowledgement-based and a request-based protocol. The evaluation is carried out in a real testbed consisting of 20 sensor nodes.

1. Introduction
Consider a network of wireless sensor nodes that are capable of sensing, processing, and communicating [1]. In general, the detection of an event or a particular stimulus is first processed by a node and then forwarded to one or several sink nodes in the network. Besides this n-to-1 communication, we concentrate on 1-to-n communication, where the sink node sends queries or control packets to a fraction of or even all sensor nodes. In particular, we consider the case of bulk data transmissions, as they are necessary for large queries that, e.g., carry image information for video sensor nodes [3], or for new firmware releases to be distributed to the nodes [2, 4]. Since such data is too large to be sent at once, it is divided into small-sized chunk packets. Sending chunks over the network has the advantage that in case of packet loss only lost or erroneous chunks must be resent. However, if the data is broadcast to multiple receivers, it is likely that most of the retransmitted chunks will be redundant to most of the nodes. Minimizing the number of redundant packets would reduce the energy costs of the sensor network significantly and thus extend its operational lifetime.

One possibility to achieve a reduction in the number of packet transmissions is to use specific coding schemes. First, all original chunks are encoded by the sender before they are broadcast to one or more receivers. If chunks get lost during the transmission, additional chunks are sent such that all receivers are able to recover missed parts and eventually decode the data. The advantage of encoding is that additional chunks may be useful for more than one receiver, even if the receivers have missed different chunks. Note that in this case forward error correction is not intended to correct bit errors in single chunks but to recover completely missing ones. Commonly, such losses are called erasures.
Classic erasure-correction codes are Reed-Solomon (RS) codes [11, 5]. An (n, k) Reed-Solomon block code encodes k symbols into n symbols with redundancy n − k. A receiver of such a block code is able to decode the original k symbols if any k of the transmitted n symbols are received without errors (provided the erasure positions are known; otherwise, k + (n − k)/2 symbols must be received). Thus, even if two receivers have missed different symbols, one additional symbol might be sufficient for both in order to decode the original ones successfully. However, a disadvantage of Reed-Solomon codes is, besides their high computational complexity, the fixed redundancy n − k, as it limits the number of symbols that may get lost.
Fountain codes [7, 6] overcome this disadvantage by providing an endless data fountain. Similar to Reed-Solomon codes, they have the property that the original data can be decoded from any k̂ chunks, with k̂ being only slightly larger than k. Especially for large k, there exist fountain codes with small encoding and decoding complexities [15].
The remainder of the paper is organized as follows. In the next section, we describe how packets are encoded into chunks by using a Reed-Solomon code, a random linear fountain code, and a Raptor code. Section 3 then proposes two generic protocols for data dissemination. Results from a real-world evaluation are presented in Section 4. Finally, Section 5 concludes the paper.

2. Coding Schemes

2.1. Reed-Solomon Codes
Reed-Solomon (RS) codes belong to the class of systematic linear block codes [11, 5]. Each block is divided into several m-bit symbols with a symbol size of typically 3 to 8 bits. An RS(n, k) code with k original symbols encoded into n m-bit symbols per code block has the redundancy n − k. If only erasures occur, an RS decoder is able to decode a codeword if it receives k arbitrary symbols from the code block.
Commonly, RS codes are applied only to individual packets that need to be transmitted over a noisy wireless channel in order to account for bit errors [8]. However, it is also possible to spread the code over several chunks of size l [9]. Consider the data block of size lk depicted in Figure 1 that is divided into k chunks c1 . . . ck consisting of l m-bit symbols. The symbol size m is assumed to be 8 bits. The chunks are encoded by using l parallel RS(n, k) encoders that generate parity chunks p1 . . . pn−k carrying redundant information. To this end, the k symbols at the same position i within the chunks are used to generate the appropriate parity symbols. Interleaving all parity symbols finally leads to n − k parity chunks of the same size. It should be noted that even if the data block consists of only lk̃ bytes with k̃ < k, the same RS(n, k) encoders can be applied by using k − k̃ padding chunks. If both the encoding and the decoding side are aware of k̃, these padding chunks need not be transmitted.
After the encoding process, a sender intending to transmit the data block to one or several receivers first sends the original data chunks c1 . . . ck. If a receiver misses, e.g., d chunks, the same chunks are not simply retransmitted. Instead, the sender sends d parity chunks. However, in case d is greater than the number of parity chunks, the receiver will not be able to decode the received chunks at all unless original chunks are resent. Thus, an important question is which of the original chunks should be resent without sending too much redundancy. This issue is easily solved by the fountain codes that we consider in the next section.
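As an illustration only (not the implementation used in our evaluation), the interleaved encoding can be sketched in Python as follows. The third-party reedsolo package is assumed as one possible RS encoder, and the helper name rs_encode_chunks is invented for this example.

from reedsolo import RSCodec

def rs_encode_chunks(chunks, n, k):
    """chunks: k byte strings of equal length l; returns the n - k parity chunks."""
    assert len(chunks) == k and n <= 255
    l = len(chunks[0])
    rsc = RSCodec(n - k)                      # n - k parity symbols per codeword
    parity = [bytearray(l) for _ in range(n - k)]
    for i in range(l):                        # one RS codeword per symbol position i
        column = bytes(c[i] for c in chunks)  # the k symbols at position i
        codeword = rsc.encode(column)         # k data symbols + (n - k) parity symbols
        for j in range(n - k):
            parity[j][i] = codeword[k + j]    # interleave parity into the parity chunks
    return [bytes(p) for p in parity]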

Figure 1. An RS-Encoded Data Block

2.2. Random Linear Fountain Codes
Fountain codes produce an endless stream of output symbols of equal size for a set of k input symbols of arbitrary size. Since the symbol size does not influence the code, a symbol can also be a complete chunk packet (in the following, we therefore refer to chunk packets instead of symbols). For decoding and recovering the original k symbols, it is sufficient (with high probability) to receive any set of n output symbols. Good fountain codes require n to be only slightly larger than k and are able to achieve a decoding time that is linear in k.
The simplest form of fountain codes are random linear fountain (RLF) codes, which have nearly optimal properties concerning the Shannon limit [14]. Given a set of k chunks c1 . . . ck, an RLF encoder randomly combines k̂ chunks with k̂ ≤ k into one output chunk ĉi that is then sent to one or several receivers. Combining two chunks is done by an exclusive-or operation, as shown in Figure 2. Which chunks are combined is determined by the generation matrix G. The encoder generates a bit vector of k random bits for each output chunk ĉi, stored at row i in G. The transmitted output chunk is then

\hat{c}_i = \sum_{j=1}^{k} c_j \, G^T_{ij} .    (1)

Figure 2. Encoding is Done by Combining Original Chunks Randomly

Figure 3 shows an example of such a binary generation matrix G with k columns and a growing number of rows. By transmitting the bit vectors, or by using a pseudo-random generator whose seed is known to the sender as well as to the receivers, the same generation matrix Ĝ can be constructed on the receiver side, as shown on the right-hand side of Figure 3. Note that some chunks are assumed to get lost during the transmission, indicated by red rows. Thus, Ĝ may be of a different size than G. Decoding the received chunks ĉ1 . . . ĉn is only possible for n ≥ k. Otherwise, the receiver has not received enough chunks. If n = k and Ĝ is invertible (modulo 2), Ĝ⁻¹ can be computed by, e.g., Gaussian elimination (in practice, an LU decomposition algorithm is used). In this case, the original chunks c1 . . . ck can be computed from

c_i = \sum_{j=1}^{k} \hat{c}_j \, \hat{G}^{-1}_{ij} .    (2)

Ĝ is invertible if and only if it contains k linearly independent rows or columns, respectively. Linearly dependent vectors identify chunks that are redundant. Thus, at the receiving side, the decoder skips all chunks that show a linear dependency to already received bit vectors until n = k. Then, the generation matrix Ĝ can be inverted, and the decoder is able to recover the original chunks c1 . . . ck.
Unlike RS(n, k) codes, fountain codes are independent of the loss ratio on the wireless channel. Since the encoder generates an endless output stream, the receiver will eventually be able to decode the received chunks. It may only take more time if the loss ratio increases. In practice, a receiver should somehow indicate that the decoding process was successful in order to stop the fountain.
It is interesting to know how many excess (redundant) chunks n − k a decoder will receive on average, even if no packet loss occurs. That is, what is the probability of receiving e excess chunks? MacKay shows in [7] that the probability p(e) that a receiver will not be able to decode the received chunks after e excess chunks have been received is bounded by

p(e) \le 2^{-e}    (3)

for any k. Thus, the probability that a receiver needs more than 6 excess chunks is less than 0.01. However, the probability that a receiver can decode the chunks already after k chunks is only 0.289.
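For illustration only, the following Python sketch mirrors Equations (1) and (2) with a minimal random linear fountain encoder and a Gaussian-elimination decoder over GF(2). It is not the implementation evaluated in this paper; the function names and the bit-list representation of G are assumptions made for the example.

import random

def rlf_encode(chunks, rng):
    """Return (row of G, output chunk): the XOR of the chunks selected by the row."""
    k, l = len(chunks), len(chunks[0])
    row = [rng.randint(0, 1) for _ in range(k)]
    if not any(row):
        row[rng.randrange(k)] = 1              # avoid a useless all-zero row
    out = bytearray(l)
    for j in range(k):
        if row[j]:
            for i in range(l):
                out[i] ^= chunks[j][i]         # exclusive-or combination (Eq. 1)
    return row, bytes(out)

def rlf_decode(rows, outputs, k):
    """Gaussian elimination modulo 2 (Eq. 2); returns the k chunks or None."""
    l = len(outputs[0])
    basis = {}                                  # pivot column -> (row, payload)
    for row, out in zip(rows, outputs):
        row, out = row[:], bytearray(out)       # work on copies
        for col in range(k):
            if not row[col]:
                continue
            if col in basis:                    # eliminate using the stored pivot row
                prow, pout = basis[col]
                for c in range(k):
                    row[c] ^= prow[c]
                for i in range(l):
                    out[i] ^= pout[i]
            else:
                basis[col] = (row, out)         # new pivot found; keep this row
                break
    if len(basis) < k:
        return None                             # fewer than k independent rows so far
    for col in range(k - 1, -1, -1):            # back-substitution
        row, out = basis[col]
        for c2 in range(col + 1, k):
            if row[c2]:                         # basis[c2] is already a unit vector here
                prow, pout = basis[c2]
                row[c2] ^= prow[c2]
                for i in range(l):
                    out[i] ^= pout[i]
    return [bytes(basis[col][1]) for col in range(k)]

if __name__ == "__main__":
    rng = random.Random(7)
    k, l = 5, 8
    data = [bytes(rng.randrange(256) for _ in range(l)) for _ in range(k)]
    rows, outs = [], []
    decoded = None
    while decoded is None:                      # keep drawing from the fountain
        row, out = rlf_encode(data, rng)
        rows.append(row)
        outs.append(out)
        decoded = rlf_decode(rows, outs, k)
    assert decoded == data
    print("decoded after", len(outs), "received chunks")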

Figure 3. The Encoding and Decoding Generation Matrix

2.3. Raptor Codes
Although random linear fountain codes can get arbitrarily close to the Shannon limit, they have the disadvantage that their decoding costs are cubic, caused by the calculation of Ĝ⁻¹. As long as k is small, this is not an important issue. However, for large k a more suitable solution would be preferable. Luby presents such a code in [6]. The code is called LT code and has an almost linear decoding complexity. The idea is to minimize the density (the number of ones) in the generation matrix G. Low-density codes have the advantage that the decoding can easily be performed by an algorithm called belief propagation [10], which is also known as the sum-product algorithm. Luby proposes a special probability distribution from which the density of outgoing bit vectors is chosen. The design of this distribution is critical since it heavily affects the quality of the code. For most chunks the density must be low in order to start the decoder. But occasionally, the density must be high enough so that all original chunks are covered by outgoing chunks. Otherwise, the decoder stalls, as uncovered chunks cannot be recovered. The latter problem is addressed by Raptor codes, which try to minimize the probability that uncovered chunks occur.
Raptor (RT) codes [15] extend LT codes by an additional outer code. The inner code is an arbitrary LT code with a common average density of about 3. This ensures that the decoder is not blocked and can get started. However, due to the low density, it is likely that not all original chunks are covered by the inner code, keeping receivers from decoding the chunk block. Thus, the idea is to use the outer code to recover such uncovered chunks. Good outer codes are, for example, irregular low-density parity-check codes [12], but any other code could be used, too.
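The peeling step behind belief-propagation decoding of such low-density codes can be illustrated as follows: release every output chunk that depends on a single input chunk, subtract (XOR) the recovered chunk from all other equations, and repeat. The sketch below is schematic only; it uses neither Luby's degree distribution nor a Raptor outer code, and all names are invented for illustration.

def peel_decode(equations, k, chunk_len):
    """equations: list of (set of input indices, bytearray payload); mutated in place."""
    recovered = [None] * k
    ripple = [eq for eq in equations if len(eq[0]) == 1]   # degree-1 equations
    while ripple:
        indices, payload = ripple.pop()
        if len(indices) != 1:
            continue                         # was reduced further after being queued
        j = next(iter(indices))
        if recovered[j] is not None:
            continue
        recovered[j] = bytes(payload)        # release input chunk c_j
        for other_indices, other_payload in equations:
            if other_indices is indices or j not in other_indices:
                continue
            other_indices.discard(j)         # subtract c_j from the equation
            for i in range(chunk_len):
                other_payload[i] ^= payload[i]
            if len(other_indices) == 1:      # a new degree-1 equation joins the ripple
                ripple.append((other_indices, other_payload))
    return recovered if all(c is not None for c in recovered) else None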

3. Data Dissemination Protocols
While encoding data can help to avoid redundant transmissions, we still need a protocol to disseminate bulk data in the network. The protocol should be distributed and must account for multi-hop transmissions, since it is not likely that every receiver is within the transmission range of the data source. In this section, we propose two such protocols. The first one is based on unicast communication and acknowledgements, while the second one is request-based and sends data by broadcast without the need for acknowledgements.

3.1. Acknowledgement-Based Data Dissemination
The acknowledgement-based data dissemination protocol is similar to traditional unicast communication. Data chunks are addressed and sent to only one receiver, which acknowledges each chunk packet received correctly. In doing so, a receiver gets the complete set of data chunks, since chunks that have not been acknowledged are resent. In such a case, it depends on the encoding scheme whether the same chunk or another encoded chunk is sent. Although the communication is unicast concerning its addressing, the actual transmission is broadcast due to the broadcast characteristics of the wireless medium. Thus, other receivers within the transmission range are able to receive the chunks in promiscuous mode. Since the chunks are addressed to another receiver, these overhearing nodes need not send acknowledgements. After the actual receiver has obtained all data chunks, it is likely that other receivers in the neighborhood have also got most of the chunks by overhearing. The complete acknowledgement-based protocol works as follows.

First, the data source broadcasts an arbitrary number of data chunks. A chunk contains an identifier that is generated by the original data source. In addition, it includes a sequence number and the total number of chunks required to reconstruct the original data. All nodes interested in the data answer with an acknowledgement indicating the next chunk they need. In order to avoid collisions, an acknowledgement is sent only after a random backoff time. Once an acknowledgement is received by the source node, it starts sending data chunks to the appropriate node until all chunks are acknowledged. Other nodes interested in the same data overhear the chunks and keep track of missed or erroneous ones. After the source node has stopped sending and the channel becomes free, nodes that still need further chunks set a random backoff timer again and send another acknowledgement indicating the missed chunks as soon as the timer expires.
If all receivers are within the transmission range of the data source, no further overhead is required. However, in order to account for multi-hop networks, it is necessary that additional nodes also act as data sources in order to cover distant nodes. Therefore, as soon as a node has received the complete data successfully and the medium becomes free, it broadcasts some data chunks itself in order to find uncovered neighbors. Again, backoff timers are used to avoid collisions. Since the network topology is not assumed to be known, each node must unfortunately try to find uncovered nodes, even if this might cause high redundancy.
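To illustrate the retransmission behaviour, the following minimal simulation counts the chunk transmissions a single receiver needs under the acknowledgement-based scheme. It is a sketch under simplifying assumptions (single-hop link, lossless acknowledgements, one ACK per chunk naming the next missing chunk); the function name and parameters are invented for the example.

import random

def ack_based_dissemination(k=20, loss=0.3, seed=1):
    """Chunk transmissions until one receiver has acknowledged all k chunks."""
    rng = random.Random(seed)
    received, sent = set(), 0
    while len(received) < k:
        chunk_id = min(set(range(k)) - received)  # chunk named in the last ACK
        sent += 1                                  # source (re)sends this chunk
        if rng.random() >= loss:                   # the chunk survives the lossy link
            received.add(chunk_id)                 # receiver ACKs the next needed chunk
    return sent

print("chunk transmissions:", ack_based_dissemination())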

3.2. Request-Based Data Dissemination
The acknowledgement-based protocol has the advantage that data chunks are resent automatically if they could not be received successfully. No additional communication overhead is required in this case. However, the overhead of the acknowledgements that need to be sent is quite high, especially if only few packets get lost. Thus, another idea is to use a request-based protocol instead. Like before, the data source starts by broadcasting some chunks in order to find interested nodes. But after the first backoff timer expires, a request is sent back containing a list of the chunk IDs required by the appropriate node (depending on the coding scheme, it may not be necessary to include a complete list but only the number of requested chunks). In addition, each node that has discovered a new data stream sets a request timer for the case that requests get lost or no further chunks are broadcast by the source. Once the source node has received a request, it starts broadcasting the requested chunks without any further communication overhead. Afterwards, it waits for additional requests that restart the data stream. Since each node holds a pending request timer until it has received all chunks correctly, missed chunks will either be requested by the same node or later by other nodes.
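A correspondingly simplified sketch of the request-based scheme is given below, again as an illustration rather than the evaluated implementation: all receivers overhear every broadcast, the whole block is sent in the first round, and after each round a single merged, lossless request lists the chunks that any receiver still misses.

import random

def request_based_dissemination(k=20, receivers=6, loss=0.3, seed=1):
    """Broadcast rounds driven by requests for still-missing chunk IDs."""
    rng = random.Random(seed)
    have = [set() for _ in range(receivers)]       # chunks held by each receiver
    pending = list(range(k))                       # chunks broadcast in the first round
    broadcasts = requests = 0
    while True:
        for chunk_id in pending:                   # source broadcasts the requested chunks
            broadcasts += 1
            for h in have:                         # every receiver overhears the channel
                if rng.random() >= loss:
                    h.add(chunk_id)
        missing = set(range(k)) - set.intersection(*have)
        if not missing:                            # all receivers hold the complete block
            return broadcasts, requests
        requests += 1                              # one merged request after the backoff
        pending = sorted(missing)

print("broadcasts, requests:", request_based_dissemination())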

Concerning multi-hop networks, the protocol works like the acknowledgement-based one. Each node that received the complete data block broadcasts some chunks to find uncovered nodes and acts as an additional data source. Thus, requests should also be sent by broadcast such that they can be answered by any data source in the neighborhood.

4. Evaluation
We evaluated the acknowledgement- and request-based data dissemination protocols in a real sensor network using an indoor testbed. The evaluation was performed for raw transmission (non-coding), an RLF code, an RS(255, 223) code, and a Raptor code. However, due to its higher overhead, the acknowledgement-based protocol is used with non-coding only. The testbed consists of 20 Embedded Sensor Boards (ESB) [13] that are placed in a 4 × 5 grid with a distance of 0.6 m between two nodes. The ESB nodes are equipped with an MSP430 microcontroller, run at 8 MHz, and contain 64 kB of memory with 2 kB of RAM. For wireless communication, a TR1001 radio transceiver is used.

4.1. Single-Hop Evaluation
At first, we analyze a single-hop environment where 6 receiving nodes are placed within the communication range of one source node. The source is placed in a corner of the grid and periodically generates data chunks at a rate of 3 chunks/sec. The complete data block consists of 20 chunks, which have a size of 256 bytes each. To obtain stable results, we repeated each experiment 250 times. As we are for the moment only interested in single-hop effects, only the source node sends data chunks. In the next section, we will also analyze the effects if other nodes forward data throughout the network. Furthermore, in order to maintain the single-hop environment, all nodes use their maximum transmission power. Packet loss is produced artificially by dropping packets randomly after reception at a predefined loss ratio, which is increased from zero to 0.9 during the experiment.

Figure 4. Evaluation Results from the Single-Hop Experiment: (a) Sent Control Packets, (b) Received and Overheard Packets, (c) Required Excess Chunks, each plotted over the loss rate for Non-Coding (Acks), Non-Coding (Requests), RS, RLF, and RT.

Figure 4 presents the obtained results. The average number of sent control packets (acknowledgements and requests) is depicted in Figure 4(a) for different loss ratios. As shown, using acknowledgements causes more overhead than using requests, except for very high loss ratios. That is due to the fact that each chunk packet is acknowledged by a receiver. While the loss ratio is low, it is therefore more efficient to request chunks all at once. However, if the loss ratio becomes higher, missed chunks cause many requests, while in the acknowledgement-based protocol they are resent automatically. However, only the non-coding case shows such an exponential increase in the number of sent requests. If FEC is employed, the number of sent control packets remains nearly constant because missed chunks need not be requested but can be recovered by other chunks. Due to the encoding, different receiving nodes can benefit from the transmission of one additional chunk even if they lost different ones. In contrast, if no coding is used and chunks get lost, it becomes more likely that most nodes request different chunks, increasing the number of sent requests significantly.
The average number of received and overheard packets, which includes control and chunk packets (also erroneous packets), is shown in Figure 4(b). It can be used as an indicator for the channel utilization and energy consumption. Again, encoding data shows substantial improvements over non-coding. The best results are achieved by the RS and RLF codes, which both require about 50% fewer packets. The RT code performs worse (but still better than non-coding) since its decoding algorithm requires more chunks than the RLF code in order to start and complete. As long as the data block consists of only few chunks (even fewer than 1000), as in our experiment, that overhead can be quite significant. The difference between the acknowledgement- and the request-based protocol is also observable concerning the channel utilization. Up to a loss ratio of about 0.7, acknowledgements cause more overhead. Then, the number of sent requests becomes dominant, leading to more received packets.
Figure 4(c) shows the number of excess chunks, i.e., the number of chunks received before the decoding can start. In the case of non-coding, it shows the number of received chunks until the data block is complete. It thus indicates how fast an application can process the data block and how long a node must listen to ongoing transmissions on the wireless channel. While both the RLF and RT codes show an almost constant behavior which is independent of the loss ratio, the RS code needs excess chunks as soon as the loss ratio exceeds 0.6 (since for a loss ratio above 0.6, fewer than 20 encoded chunks are expected to be received). As long as not all 52 encoded chunks are sent (20 data chunks plus 32 redundant chunks), it is likely that multiple nodes benefit from an additionally sent chunk.

But if all chunks have already been sent, identical chunks need to be resent, causing more redundancy at the receiving sides. However, if the data block consisted of more than 20 chunks, that threshold would be smaller. In the non-coding case, we need about 10 times the number of excess chunks compared to the RLF code. As additional chunks are likely to be useful to only one or a few nodes, most of the other nodes must wait until the right chunk is resent. Thus, the more chunks get lost, the longer it takes until a node has received the complete data block.

4.2. Multi-Hop Evaluation
The evaluation results for the multi-hop environment are shown in Figure 5. This time, the transmission power is varied from 0.1 to 1.0, where 1.0 represents the node's maximum transmission power.

Figure 5. Evaluation Results from the Multi-Hop Experiment: (a) Sent Packets, (b) Received and Overheard Packets, (c) Required Excess Chunks, each plotted over the transmission power for Non-Coding (Acks), Non-Coding (Requests), RS, RLF, and RT.

Figure 5(a) depicts the total number of sent packets (including chunks as well as control packets) averaged over all nodes in the network. For high transmission powers only little packet loss occurs, and encoding data shows no benefit over non-coding. It even performs worse if a node requires more chunks in order to start or finish the decoding. Similar to Figure 4, using acknowledgements causes additional overhead. But unlike before, the acknowledgement-based protocol does not outperform the request-based one by means of automatic retransmissions in case of high loss ratios. Due to the multi-hop environment, it may occur that a request is answered by a node that is far away. Thus, chunks sent by such a node may not reach the requesting node because of link asymmetries. In this case, automatic retransmissions cause even more overhead. The RLF code outperforms the RS as well as the RT code for low transmission powers, as both need more redundancy for packet decoding. However, if the transmission power is higher, the RS code needs less overhead and performs slightly better because fewer redundant packets are required.
Complementing Figure 5(a), Figure 5(b) shows the number of received and overheard packets. In comparison to Figure 4(b), the benefit of encoding over non-coding is significantly smaller because the packet loss is more heterogeneous than in Section 4.1, especially for low transmission powers. Nodes with bad reception characteristics require more chunks to be sent. However, if other nodes in the neighborhood have already received most of the data chunks, encoding is beneficial to only a few nodes. Thus, the encoding gain partly vanishes. In the worst case, only one node is left that needs additional chunks. Non-coding would then perform even better because no redundancy would be needed; sending the missed chunks again would be enough.
Concerning the number of excess chunks, Figure 5(c) shows similar results as Figure 4(c), except for the acknowledgement-based protocol. As mentioned above, much network overhead is caused by automatic retransmissions if an acknowledgement could not be received. As the likelihood of asymmetric links increases with lower transmission power, employing automatic retransmissions becomes even worse and increases the number of redundant chunks.

5. Conclusions
Using FEC codes for the dissemination of a large amount of data can reduce the number of redundant packets significantly, especially in lossy environments. The improvement over non-coding is even higher if most of the receiving nodes exhibit homogeneous loss ratios. Among the evaluated FEC codes, the RLF and the RS codes show the best performance. For low packet loss ratios, the RS code needs less redundancy and thus performs slightly better. However, if the loss ratio increases, it suffers from resending identical data chunks, as the number of redundant chunks is fixed and predefined by the code. The latter is avoided by RLF and RT codes, where each chunk is always a random combination of the original ones. However, due to the low number of chunks a data block commonly consists of, the RT code performs worse than the RLF code.

In conclusion, using RS and RLF codes for data dissemination is quite reasonable, as it can be expected that wireless sensor networks will suffer from substantial loss ratios due to their low-cost and low-power hardware. While the RS code performs better for moderate packet loss, the RLF code shows better results in highly lossy environments.

References
[1] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci. A Survey on Sensor Networks. IEEE Communications Magazine, 40(8):102–114, Aug. 2002.
[2] J. W. Hui and D. Culler. The Dynamic Behavior of a Data Dissemination Protocol for Network Programming at Scale. In Proceedings of ACM SenSys, Baltimore, MD, USA, Nov. 2004.
[3] P. Kulkarni, D. Ganesan, P. Shenoy, and Q. Lu. SensEye: A Multi-Tier Camera Sensor Network. In Proceedings of the ACM International Conference on Multimedia, Singapore, Nov. 2005.
[4] S. S. Kulkarni and L. Wang. MNP: Multihop Network Reprogramming Service for Sensor Networks. In Proceedings of IEEE ICDCS, Columbus, OH, USA, Jun. 2005.
[5] S. Lin and D. J. Costello. Error Control Coding: Fundamentals and Applications. Prentice Hall, 1983.
[6] M. Luby. LT Codes. In Proceedings of IEEE FOCS, Vancouver, BC, Canada, Nov. 2002.
[7] D. J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
[8] A. J. McAuley. Reliable Broadband Communications Using a Burst Erasure Correcting Code. In Proceedings of ACM SIGCOMM, Philadelphia, PA, USA, Sep. 1990.
[9] J. Nonnenmacher, E. W. Biersack, and D. Towsley. Parity-Based Loss Recovery for Reliable Multicast Transmission. IEEE/ACM Transactions on Networking, 6(4):349–361, Aug. 1998.
[10] J. Pearl. Fusion, Propagation, and Structuring in Belief Networks. Artificial Intelligence, 29(3):241–288, Sep. 1986.
[11] I. Reed and G. Solomon. Polynomial Codes Over Certain Finite Fields. SIAM Journal on Applied Mathematics, 8(2):300–304, 1960.
[12] T. Richardson, M. A. Shokrollahi, and R. Urbanke. Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes. IEEE Transactions on Information Theory, 47(2):619–637, Feb. 2001.
[13] J. Schiller, A. Liers, H. Ritter, R. Winter, and T. Voigt. ScatterWeb - Low Power Sensor Nodes and Energy Aware Routing. In Proceedings of HICSS, Hawaii, USA, Jan. 2005.
[14] C. E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27:379–423, 623–656, Jul./Oct. 1948.
[15] A. Shokrollahi. Raptor Codes. IEEE/ACM Transactions on Networking, 14(SI):2551–2567, Jun. 2006.