IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 53, NO. 4, APRIL 2005


Transactions Letters

On Implementation of Min-Sum Algorithm and Its Modifications for Decoding Low-Density Parity-Check (LDPC) Codes

Jianguang Zhao, Farhad Zarkeshvari, Student Member, IEEE, and Amir H. Banihashemi, Member, IEEE

Abstract—The effects of clipping and quantization on the performance of the min-sum algorithm for the decoding of low-density parity-check (LDPC) codes at short and intermediate block lengths are studied. It is shown that in many cases, only four quantization bits suffice to obtain close to ideal performance over a wide range of signal-to-noise ratios. Moreover, we propose modifications to the min-sum algorithm that improve the performance by a few tenths of a decibel with just a small increase in decoding complexity. A quantized version of these modified algorithms is also studied. It is shown that, when optimized, modified quantized min-sum algorithms perform very close to, and in some cases even slightly outperform, the ideal belief-propagation algorithm at observed error rates.

Index Terms—Clipping, iterative decoding algorithms, low-density parity-check (LDPC) codes, max-product algorithm, max-sum algorithm, min-sum algorithm, modified min-sum algorithms, quantization.

I. INTRODUCTION

For the past few years, there has been a great amount of research devoted to the analysis and design of iterative coding schemes, in general, and low-density parity-check (LDPC) codes [7], in particular. This has been mainly due to the excellent performance/complexity tradeoff that these schemes offer. Iterative decoding algorithms for LDPC codes can be naturally described by "message passing" on the Tanner graph (TG) [11] of the code. Depending on the decoding algorithm and the type of messages, certain computations are performed at the symbol nodes and the check nodes of the TG, and the algorithm is executed by exchanging messages between check nodes and symbol nodes through the edges of the graph, in both directions and iteratively. Decoding complexity per iteration is low, due to the sparseness of the TG.

Paper approved by M. Fossorier, the Editor for Coding and Communication Theory of the IEEE Communications Society. Manuscript received June 25, 2002; revised August 1, 2003; March 28, 2004; and May 25, 2004. This work was supported in part by Zarlink Semiconductor Corp. (formerly Mitel Semiconductor Corp.), and in part by the National Capital Institute of Telecommunications (NCIT). This paper was presented in part at IEEE Globecom, Taipei, Taiwan, November 17–21, 2002, and in part at the 21st Biennial Symposium on Communications, Queen's University, Kingston, ON, Canada, June 2–5, 2002. J. Zhao was with the Broadband Communications and Wireless Systems (BCWS) Centre and Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada. F. Zarkeshvari was with the Broadband Communications and Wireless Systems (BCWS) Centre and Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada. He is now with the Department of Electronics, Carleton University, Ottawa, ON K1S 5B6, Canada (e-mail: [email protected]). A. H. Banihashemi is with the Broadband Communications and Wireless Systems (BCWS) Centre and Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada (e-mail: [email protected]). Digital Object Identifier 10.1109/TCOMM.2004.836563

The best performing, yet the most complex, algorithm for decoding LDPC codes is known to be the "belief propagation" (BP) or "sum-product" algorithm [7], [11], [12]. In this letter, we focus on another well-known iterative decoding algorithm: "min-sum" (MS) [4], [11], [12]. Other common names for MS are "max-product" and "max-sum," depending on the type of messages and the corresponding operations performed in symbol and check nodes. In MS, the messages are log-likelihood ratios (LLRs) (for a description of the MS algorithm, see, e.g., [4]). Although, in general, the error performance of MS is a few tenths of a decibel inferior to that of BP, it is much simpler to implement, and no estimate of the noise power is needed for an additive white Gaussian noise (AWGN) channel. Moreover, in this letter, we show that the implementation of MS is more robust against quantization error than a similar implementation of the BP algorithm.

In the rest of the letter, we discuss the effects of clipping and quantization on the performance of the MS algorithm. We observe that although quantization, in general, degrades the performance, clipping can provide improvement, and that in many cases, the overall effect is such that only four quantization bits result in near or even better than ideal performance over a wide range of signal-to-noise ratio (SNR). Note that the results of [10], for a code with block length 60 000, show that at least six bits are required for the implementation of the BP algorithm in the LLR domain. We also propose simple modifications (conditional and unconditional corrections) that can considerably improve the performance of MS with negligible cost in complexity. Interestingly, the quantized versions of our modified MS algorithms can perform very close to, and in some cases even slightly outperform, the ideal BP algorithm.
(One should notice that this is theoretically possible on graphs with cycles, as BP is only optimal on cycle-free graphs.) A modification identical to our "unconditional correction" has been independently proposed by Chen and Fossorier in [4] and [5]. In [5], they have also studied quantized MS for regular LDPC codes. We discuss the results of Chen and Fossorier in more detail in the following sections.

II. CLIPPING AND QUANTIZATION

We consider the transmission of information over a binary-input AWGN channel using bipolar signaling. The received vector at the output of the channel is processed by an MS decoder, and the decoding stops if the estimated vector is a codeword, or a maximum number of iterations is reached.
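The stopping rule just described can be sketched as a generic decoder outer loop. This is only an illustrative sketch: the tiny parity-check matrix, the `update_messages` placeholder, and all names below are hypothetical, not the codes or decoder implementation of this letter.

```python
import numpy as np

# Hypothetical toy parity-check matrix (the letter's codes are far larger).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def is_codeword(x_hat, H):
    """A vector is a codeword iff every parity check is satisfied (H x = 0 mod 2)."""
    return not np.any(H.dot(x_hat) % 2)

def decode(llr_in, H, update_messages, max_iter=200):
    """Generic outer loop: run one message-passing iteration at a time and
    stop as soon as the hard decision is a codeword, or after max_iter
    iterations (the letter uses 200)."""
    state = None
    for it in range(max_iter):
        llr_total, state = update_messages(llr_in, state)  # one MS iteration (not shown)
        x_hat = (llr_total < 0).astype(np.uint8)           # hard decision on the LLRs
        if is_codeword(x_hat, H):
            return x_hat, it + 1
    return x_hat, max_iter
```

With positive channel LLRs (i.e., a noiselessly received all-zero codeword), any reasonable `update_messages` leaves the hard decision at the all-zero word and the loop exits after the first iteration.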

0090-6778/$20.00 © 2005 IEEE


To implement the MS algorithm, we consider the received values to be clipped symmetrically at a threshold c, and then uniformly quantized in the range [−c, c]. There are 2^q − 1 quantization intervals, symmetric with respect to the origin, and each message is represented by q quantization bits. Integer numbers are assigned to the intervals. Operations are performed on integers, and at bit nodes, an outgoing message is clipped to ±c if it exceeds the threshold. As we will see in Section IV, clipping can improve the performance of the MS algorithm over a wide range of SNR, and with an optimized c, only four quantization bits provide near or even slightly better than ideal performance. Clipping, however, introduces an early error floor. The error floor, as well as the performance in the waterfall region, can be improved considerably by simple modifications to the MS algorithm, as presented in the next section.

In [5], the effect of quantization on MS was studied by density evolution, as well as by simulations on a (3, 6)-regular (8000, 4000) code. It was also observed in [5] that clipping (as part of quantization) can improve the threshold of the MS algorithm, and also can introduce an early error floor at finite block lengths. No effort, however, was made in [5] to optimize c and study the variations of the optimal c with q, SNR, and the choice of the code. In this letter, we show that although the optimal c is rather insensitive to q and SNR, it varies considerably with the choice of the code. In [5], due to the nonoptimized c, one can observe abnormalities in the reported results for quantized MS applied to the (8000, 4000) code, e.g., the BER curve for a smaller number of quantization bits outperforms the BER curve for a larger one, with considerably different clipping thresholds used in the two cases. We have reported simulation results for the same code in Fig. 3(c). Our results indicate that the optimized c for this code, regardless of q, is about 1.25 over a wide range of SNR values.
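The clip-then-quantize front end described above can be written in a few lines. This is a minimal sketch under assumptions: the values c = 1.25 and q = 4 are just the example values discussed for the (8000, 4000) code, and the rounding convention is ours, not taken from the letter.

```python
import numpy as np

def clip_quantize(y, c=1.25, q=4):
    """Clip received values symmetrically at +/-c, then map them uniformly
    onto the integers -(2**(q-1) - 1) .. +(2**(q-1) - 1), i.e., 2**q - 1
    levels symmetric with respect to the origin."""
    levels = 2 ** (q - 1) - 1        # q = 4 -> integers -7 .. 7
    y = np.clip(y, -c, c)            # clipping at the threshold c
    return np.round(y / c * levels).astype(int)
```

Any value beyond the threshold saturates at the extreme integer level, which is exactly the clipping effect whose threshold c is optimized in Section IV.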
Our optimized results also show a graceful improvement in the performance of quantized MS [for both bit-error rate (BER) and word-error rate (WER)] by increasing q, and no sign of an error floor in BER or WER down to the lowest error rates simulated. This is while the results reported in [5] show an error floor starting within the observed BER and WER range. Also, our optimized results for q = 4, 5, and 6 all outperform ideal MS (with no quantization) in the waterfall region. In [5], however, although the BER curve for the larger number of quantization bits outperforms ideal MS, the one for the smaller number performs almost the same as ideal MS. Moreover, for WER, the trend changes: while the larger number of bits still performs more or less the same as ideal MS, quantized MS with the smaller number of bits performs much worse than ideal MS at higher SNR values. Unlike these results, for our optimized results given in Fig. 3(c), BER and WER curves follow the same trends by changing q.

III. MODIFICATIONS TO MS

For a given code, a given channel model, and a prescribed error rate, MS usually requires up to a few tenths of a decibel more transmitted power than BP. Inspection of the MS and BP algorithms in the LLR domain reveals that the two algorithms perform the exact same operation in bit nodes (addition), and the difference is only in the operations performed in check nodes. Suppose that we have a check node of degree 3, and let λ1 and λ2 be the input messages to the check node, with |λ1| ≤ |λ2|. Also, denote the output messages by λ_BP and λ_MS for BP and MS, respectively. For BP, we then have

λ_BP = sign(λ1) sign(λ2) min(|λ1|, |λ2|) + log(1 + e^(−(|λ1|+|λ2|))) − log(1 + e^(−(|λ2|−|λ1|)))  (1)

while for MS, λ_MS = sign(λ1) sign(λ2) min(|λ1|, |λ2|). The two messages have the same sign, but a different magnitude [2]. Now, assuming |λ1| + |λ2| ≫ 1 (high SNR regime), (1) can be simplified as

|λ_BP| ≈ |λ_MS| + f(d),  with d = |λ2| − |λ1| and f(d) = −log(1 + e^(−d)).  (2)

The function f(d) is plotted in Fig. 1. It is always negative and larger than −log 2 for d > 0.

Fig. 1. Correction factor f(d) for MS algorithm.

Comparing (2) to λ_MS, one can see that the term f(d), which is only a function of the magnitude of the difference between the incoming messages, is the correction factor that should be added to the magnitude of the output message of the check node in the MS algorithm to make it almost the same as that of BP. The precise implementation of f(d) is, however, complex. In this letter, we use the following simple stepwise approximation for the correction factor:

f̂(d) = −β if d ≤ δ, and f̂(d) = 0 otherwise.

The correction factor is applied to the magnitude of the outgoing MS messages from check nodes based on the following correction step (signs remain the same as those in conventional MS).

1) Conditional Correction: For each edge e, let S denote the set of extrinsic incoming messages. Also, use the notations λ_min and λ to denote the message with the smallest magnitude in S, and the outgoing message along e, respectively. Based on the MS algorithm, we have |λ| = |λ_min|. The correction starts by removing λ_min from S, and performing the following step iteratively until the set S is empty. Randomly select a message λ′ from S, and update λ with


the new magnitude |λ| − β (the sign is unchanged), if |λ′| − |λ| ≤ δ, for some predetermined positive and nonnegative real numbers β and δ, respectively. Otherwise, leave λ unchanged. Remove λ′ from S. It is easy to see that in the above correction process, if β ≥ δ, then the order in which the incoming messages from the set S are processed does not affect the final value of λ. In fact, in this case, the correction factor −β is applied at most once for each outgoing message. Parameters β and δ are optimized through simulation. A modification of the conditional correction in 1) is to apply the correction factor −β only once, if there exists a message λ′ in S such that |λ′| − |λ| ≤ δ. We distinguish the two conditional corrections by labels A and B, where A refers to the one described in 1). While correction B is simpler to implement than A, our simulation results show that both corrections, when optimized, perform more or less the same. The following, yet simpler, correction is a special case of conditional correction B when δ is chosen sufficiently large (assuming that all check nodes have degree at least three).

2) Unconditional Correction: For each edge e, the magnitude of the outgoing message is max(|λ| − β, 0), for some predetermined positive real number β.

Based on our simulations, for coarse quantization (four bits and less), conditional correction has a clear advantage over the unconditional one. For a larger number of quantization bits, however, the two algorithms, when optimized, perform very closely. This makes the unconditional correction a more attractive choice, due to its lower complexity.

There has been some recent work to partially close the performance gap between MS and BP [1]–[6], [8]. In [1], [3], [6], and [8], the authors apply a corrective term to the output of check nodes in the MS algorithm, which is an approximation of the correction term in (1). In [1] and [6], the approximation is a constant which is nonzero over a region specified by not only the difference, but also the sum, of |λ1| and |λ2|, while in [3] and [8], lookup tables and piecewise linear approximations are also used. The approach of [2] is to normalize the output of check nodes, while in [4] and [5], an algorithm identical to our unconditional correction, called "offset BP-based" (the "BP-based" algorithm is the same as MS; we use the two names interchangeably in the rest of the letter), has been independently proposed. In [4], the performance of offset MS for regular LDPC codes was analyzed using density evolution, and optimal correction parameters for some ensembles of rate-1/2 LDPC codes, including (3, 6)-regular codes, were obtained. The optimal correction parameter for (3, 6) codes was then used in the simulation of the (3, 6)-regular (8000, 4000) code, and results as close as about 0.05 dB to BP were reported. The application of density evolution to the quantized version of offset MS for regular LDPC codes was then presented in [5]. Numerical results for threshold values of these algorithms for different values of the quantization step, different numbers of quantization bits, and different offset values (the offset parameter used in [5] plays the role of β in our unconditional correction) were also given in [5]. These results are for two rate-1/2 ensembles of (3, 6) and (4, 8) codes, and no


Fig. 2. BER curves for codes I (—), II (- -), and III (- · -) decoded by ideal MS, clipped MS, and ideal BP.

explicit effort in optimizing the clipping threshold was made. The values of the quantization step and the number of bits were, however, chosen in such a way that they can generate offsets equal to the optimal offset value in the continuous case. Simulation results for quantized offset MS applied to the (3, 6)-regular (8000, 4000) code were also given in [5], and performance results that are about 0.1 dB away from BP were reported. This example, which has close-to-optimal parameters, is the only example provided in [5] on the application of quantized offset MS to finite-length codes.

In this letter, we not only study MS with unconditional correction in more detail, but also introduce MS with conditional correction. For the modified MS algorithms, and by extensive simulations on three short and intermediate-length LDPC codes, we obtain optimal clipping thresholds and correction parameters, and make the following observations: 1) MS with conditional correction outperforms MS with unconditional correction for coarse quantization (four bits or less); 2) the optimal clipping threshold for modified MS algorithms, which is mainly a function of the code and is rather insensitive to q and SNR, is different from that for the standard MS algorithm (for example, for the (8000, 4000) code, the optimal c changes from 1.25 to 2.5); 3) the early error floor caused by clipping in standard MS can be much improved using modified MS algorithms; and 4) for shorter codes, modified MS algorithms can outperform the BP algorithm. None of these observations was made in [5]. In particular, phenomena 3) and 4) are specific to finite-length codes and do not appear in the asymptotic density-evolution analysis used in [4] and [5].

IV. SIMULATIONS AND DISCUSSIONS

For simulations, we consider three codes, one irregular and two regular. The irregular code, constructed in [9], has parameters (1268, 456), the two numbers being the block length and the dimension of the code, respectively.
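The two correction rules of Section III can be sketched for a single check-node output as follows. This is only an illustrative sketch: the parameter names `beta` and `delta` stand in for the optimized correction parameters, and the numeric defaults are placeholders, not values reported in the letter.

```python
import numpy as np

def check_node_ms(lam):
    """Plain MS check-node update for one outgoing edge: the product of the
    signs times the minimum extrinsic magnitude."""
    sign = np.prod(np.sign(lam))
    return sign * np.min(np.abs(lam))

def unconditional_correction(lam, beta=0.15):
    """Offset (unconditional) correction: always subtract beta from the MS
    output magnitude, clamping at zero so the sign is never flipped."""
    out = check_node_ms(lam)
    return np.sign(out) * max(abs(out) - beta, 0.0)

def conditional_correction_b(lam, beta=0.15, delta=0.5):
    """Conditional correction B: subtract beta at most once, and only if some
    other extrinsic message is within delta of the minimum magnitude."""
    mags = np.sort(np.abs(lam))
    out = check_node_ms(lam)
    if len(mags) > 1 and mags[1] - mags[0] <= delta:
        return np.sign(out) * max(abs(out) - beta, 0.0)
    return out
```

For inputs (0.5, −0.6, 2.0), plain MS outputs −0.5; the offset rule shrinks the magnitude to 0.35, and the conditional rule does the same only when the second-smallest magnitude (0.6) is within δ of the smallest (0.5).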
The regular codes, taken from [13], have parameters (273, 191) and (8000, 4000). For future reference, these codes are labeled as I [(1268, 456)


Fig. 3. (a)–(c) BER (—) and WER (- - -) curves of ideal and quantized MS for codes I, II, and III, respectively. (d) Effect of clipping threshold c on error performance of quantized MS for code I at E_b/N_0 = 2 dB.

code], II [(273, 191) code], and III [(8000, 4000) code]. For all simulation results in this paper, the maximum number of iterations is chosen to be 200. Also, for each SNR, enough codewords are simulated to result in 100 codeword errors. Moreover, when the error performances of different algorithms are compared, the corresponding decoders are set to operate in parallel on the same set of received vectors.

To study the effect of clipping, the BER curves for the three codes versus E_b/N_0 under ideal (no clipping or quantization) and clipped (no quantization) MS are given in Fig. 2 (E_b and N_0 are the average energy per information bit and the power spectral density of the AWGN, respectively). Results for the BP algorithm are also given as a reference. For each code, the optimum clipping threshold (resulting in the smallest BER) appears to be almost a constant over a wide range of SNR. This is equal to 2, 1.5, and 1.25 for codes I, II, and III, respectively. As can be seen in Fig. 2, clipping closes part of the performance gap between MS and BP. This is due to upper bounding the overconfident reliabilities at the output of check nodes, which is expected to improve the performance of MS. Clipping, however, results in an early error floor at higher values of SNR, as can be seen for codes I and II in Fig. 2.3

3It has been shown in [14] that for higher values of SNR, the optimal clipping threshold increases with SNR. Selecting a variable clipping threshold improves the error floor of the algorithm, compared with what is reported in this letter. The same applies to the quantized versions of MS and the modified MS algorithms.

To investigate the effect of quantization, BER and WER curves for the three codes resulting from quantized MS with different numbers of quantization bits are plotted in Fig. 3(a)–(c). The error-rate curves for ideal MS are also given as a reference. For each code, the clipping threshold is fixed and is chosen to be 2, 1.5, and 1.25, for codes I, II, and III, respectively. It is observed that the optimum clipping threshold is almost the same for different values of q and over a wide range of SNR values. For code I, the effect of c on BER for different numbers of quantization bits at E_b/N_0 = 2 dB is shown in Fig. 3(d). It can be seen that the optimal c is approximately equal to 2, regardless of q. Fig. 3(a)–(c) show that with only four quantization bits, one can obtain near or better than ideal performance over a wide range of SNR values. In fact, since our simulation results


Fig. 4. BER curves of MS, BP, and modified MS algorithms for codes I (a), II (b), and III (c).

for five and six bits are almost identical, one would expect that for numbers of bits larger than six, the results practically coincide with those for six bits in the observed range of SNRs. In [14], it is shown that, in general, the optimal c increases with SNR. This improves the error-floor performance under quantized MS. For example, the results of [14] indicate that for code I, by increasing the threshold to its optimal values at higher SNRs, the error floors reported in Fig. 3(a) are improved by about an order of magnitude.

The effects of conditional and unconditional corrections on ideal, clipped, and quantized MS for the three codes are studied with extensive simulation. For ideal MS, the optimal values of β and δ for conditional correction, and the optimal value of β for unconditional correction, are rather insensitive to SNR, and are mainly functions of the code. The modified algorithms considerably outperform MS and perform very close to, or in some cases even slightly outperform, BP. An example of this can be seen in Fig. 4(a) for code I, where MS with unconditional correction outperforms BP by about 0.2 dB at the lowest BER shown. For clipped MS, our simulations show that the best clipping threshold for all modified MS algorithms is almost the same, and is rather constant over a wide range of E_b/N_0

values. It is almost equal to 3, 2, and 2.5 for codes I, II, and III, respectively. The optimal values of β and δ are also rather insensitive to E_b/N_0. The results for modified clipped MS in the waterfall region are very close to those of modified ideal MS. The former, however, still suffers from an earlier error floor, compared with the latter. This error floor, though, is lower than the original error floor for clipped MS [14]. The reason for this is the increase in the optimal clipping threshold for the modified algorithms. For the quantized versions of the modified MS algorithms, it appears that the optimal c for each code is almost constant over a wide range of E_b/N_0 and for different numbers of quantization bits, and is more or less the same as that of the modified clipped MS algorithms, i.e., 3, 2, and 2.5, for codes I, II, and III, respectively. For a given number of bits q and clipping threshold c, the optimal values of β and δ do not depend much on E_b/N_0. Fig. 4 shows the BER curves of the modified quantized MS algorithms with the best clipping threshold and correction parameters. For q = 4, we have reported the BER curves for conditional A and unconditional corrections. Results of ideal MS and BP are also given as a reference. For code II, Fig. 4(b) is zoomed into the high-SNR region of the curves. The results for four bits


are not shown in this case, as they are more or less the same as those of standard MS in Fig. 3(b). Fig. 4(a) and (c) show that for four bits, conditional correction considerably outperforms unconditional correction. It can also be seen in Fig. 4(a)–(c) that the modified quantized MS algorithms perform close to ideal BP, and even slightly outperform BP at higher E_b/N_0 values for codes I and II. This is a significant improvement, compared with the quantized version of the conventional MS algorithm, particularly for codes I and III. As a final note, by comparing Fig. 3(a)–(c) with Fig. 4(a)–(c), one can see that although four-bit quantization may be sufficient for the MS algorithm, this may not be the case for the modified MS algorithms. In particular, Fig. 4(a) shows a considerable improvement in the error performance for code I when q is increased from 4 to 5. Similarly, for code III, the increase from four to five bits results in about 0.2 dB gain in SNR.
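The measurement protocol of this section (all decoders run in parallel on the same received vectors, and simulation continues until a fixed number of codeword errors is collected) can be sketched as below. The all-zero-codeword shortcut, the function names, and the error-count target handling are our simplifying assumptions, not the letter's actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wer(decoders, n, rate, ebno_db, target_errors=100):
    """Monte-Carlo loop in the style of Section IV: run every decoder on the
    SAME received vectors (all-zero codeword assumed for simplicity) until
    each decoder has accumulated target_errors word errors.
    `decoders` maps a name to a function llr -> hard-decision bit vector."""
    # AWGN noise standard deviation for bipolar (+/-1) signaling at this Eb/N0
    sigma = np.sqrt(1.0 / (2.0 * rate * 10 ** (ebno_db / 10.0)))
    errors = {name: 0 for name in decoders}
    words = 0
    while min(errors.values()) < target_errors:
        y = 1.0 + sigma * rng.standard_normal(n)   # all-zero word -> all +1
        llr = 2.0 * y / sigma ** 2                 # channel LLRs
        words += 1
        for name, dec in decoders.items():
            x_hat = dec(llr)
            if x_hat.any():                        # any bit error -> word error
                errors[name] += 1
    return {name: e / words for name, e in errors.items()}
```

Sharing the received vectors across decoders, as described above, removes one source of statistical noise when the algorithms are compared against each other.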

V. CONCLUSION

The effects of clipping and quantization on the MS algorithm are investigated. It is shown by simulating three LDPC codes that, with the optimum clipping threshold, a four-bit uniform quantizer and four-bit messages provide near (sometimes even better than) ideal performance over a wide range of SNR values. We also propose modifications to the MS algorithm that can improve the performance significantly with a minor increase in complexity. At observed error rates, the modified MS algorithms, even in their quantized versions, perform very close to, or sometimes even outperform, BP with much less complexity. Our results indicate that at a given SNR, proper attention should be given to the choices of the clipping threshold and the required precision for obtaining the best performance/complexity tradeoff. These choices depend on the code and are related to the selected version of the MS algorithm.

ACKNOWLEDGMENT

The authors wish to thank the Editor and the anonymous reviewers for their helpful comments.

REFERENCES

[1] A. Anastasopoulos, "A comparison between the sum-product and the min-sum iterative detection algorithms based on density evolution," in Proc. IEEE Globecom, San Antonio, TX, Nov. 2001, pp. 1021–1025.
[2] J. Chen and M. Fossorier, "Near-optimum universal belief-propagation-based decoding of low-density parity-check codes," IEEE Trans. Commun., vol. 50, pp. 406–414, Mar. 2002.
[3] J. Chen, A. Dholakia, E. Eleftheriou, M. Fossorier, and X.-Y. Hu, "Near-optimal reduced-complexity decoding algorithms for LDPC codes," in Proc. IEEE Int. Symp. Information Theory, Lausanne, Switzerland, Jun. 30–Jul. 5, 2002, p. 455.
[4] J. Chen and M. Fossorier, "Density evolution for two improved BP-based decoding algorithms of LDPC codes," IEEE Commun. Lett., vol. 6, pp. 208–210, May 2002.
[5] J. Chen and M. Fossorier, "Density evolution for BP-based decoding algorithms of LDPC codes and their quantized versions," in Proc. IEEE Globecom, Nov. 2002, pp. 1378–1382.
[6] E. Eleftheriou, T. Mittelholzer, and A. Dholakia, "Reduced-complexity decoding algorithm for low-density parity-check codes," IEE Electron. Lett., vol. 37, no. 2, pp. 102–104, Jan. 2001.
[7] R. G. Gallager, "Low-density parity-check codes," IRE Trans. Inform. Theory, vol. IT-8, pp. 21–28, Jan. 1962.
[8] X.-Y. Hu, E. Eleftheriou, D.-M. Arnold, and A. Dholakia, "Efficient implementation of the sum-product algorithm for decoding LDPC codes," in Proc. IEEE Globecom, San Antonio, TX, Nov. 2001, pp. 1036–1036E.
[9] Y. Mao and A. H. Banihashemi, "A heuristic search for good low-density parity-check codes at short block lengths," in Proc. IEEE Int. Conf. Communications, vol. 1, 2001, pp. 41–44.
[10] L. Ping and W. K. Leung, "Decoding low-density parity-check codes with finite quantization bits," IEEE Commun. Lett., vol. 4, pp. 62–64, Feb. 2000.
[11] R. M. Tanner, "A recursive approach to low-complexity codes," IEEE Trans. Inform. Theory, vol. IT-27, pp. 533–547, Sep. 1981.
[12] N. Wiberg, "Codes and decoding on general graphs," Ph.D. dissertation, Dept. Elec. Eng., Linköping Univ., Linköping, Sweden, 1996.
[13] [Online]. Available: http://www.inference.phy.cam.ac.uk/mackay/codes/data.html#s14
[14] J. Zhao, "Effects of clipping and quantization on min-sum algorithm and its modifications for decoding low-density parity-check codes," M.Sc. thesis, Dept. Syst. Comput. Eng., Carleton Univ., Ottawa, ON, Canada, 2003.