Hybrid Hard-Decision Iterative Decoding of Regular Low-Density Parity-Check Codes

Pirouz Zarrinkhat, Student Member, IEEE, and Amir H. Banihashemi, Member, IEEE

Abstract—Hybrid decoding means combining different iterative decoding algorithms with the aim of improving error performance or decoding complexity. In this work, we introduce "time-invariant" hybrid algorithms and, using density evolution, show that for regular low-density parity-check (LDPC) codes and binary message-passing algorithms, time-invariant hybrid algorithms perform remarkably better than their constituent algorithms. We also show that, compared to "switch-type" hybrid algorithms such as Gallager's algorithm B, where a comparable improvement is obtained by switching between different iterative decoding algorithms, time-invariant hybrid algorithms are far less sensitive to channel conditions and thus can be practically more attractive.

Index Terms—Density evolution, hybrid decoding, iterative coding schemes, low-density parity-check (LDPC) codes, message-passing decoding algorithms.

I. INTRODUCTION

IN THIS LETTER, we discuss the idea of combining existing iterative decoding algorithms in order to devise more effective algorithms, referred to here as "hybrid algorithms." Hybrid algorithms were first introduced by Gallager in the construction of his well-known algorithm B [1, p. 50]. More specifically, we refer to algorithms such as Gallager's algorithm B as "switch-type" hybrid algorithms, since they switch between different constituent algorithms through the iteration process. In this work, we propose a new family of hybrid algorithms, called "time-invariant" hybrid algorithms. These algorithms use a specific blend of different constituent algorithms in each iteration. The blend, however, does not change with iteration, hence the name "time-invariant."

In the next section, restricting our attention to regular low-density parity-check (LDPC) codes [1], we provide a formal definition for hybrid algorithms. Focusing on an important category of binary message-passing decoding algorithms, the "majority-based" (MB) algorithms [2], and using density evolution, we then show the superiority of time-invariant hybrid algorithms over their constituents. Finally, in Section III, we compare time-invariant hybrid algorithms with switch-type hybrid algorithms, and demonstrate that, in the absence of knowledge of the channel, time-invariant algorithms can handily beat switch-type algorithms in terms of error performance and decoding complexity (convergence speed). Our study also indicates that, unlike the case for switch-type algorithms, the optimal design of time-invariant algorithms is universal in the sense that it is not very sensitive to the knowledge of the channel parameter. Throughout the letter we follow [3] for basic definitions.

Manuscript received September 4, 2003. The associate editor coordinating the review of this letter and approving it for publication was Dr. M. Fossorier. This work was supported in part by the National Capital Institute of Telecommunications (NCIT), Ottawa, ON, Canada. The authors are with the Broadband Communications and Wireless Systems (BCWS) Centre, Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/LCOMM.2004.827438

II. HYBRID DECODING ALGORITHMS

Consider $C(d_v, d_c)$, the ensemble of $(d_v, d_c)$-regular LDPC codes of length $n$, and suppose that $A_1, \ldots, A_N$ are $N$ message-passing algorithms which can be used to decode $C(d_v, d_c)$ over a given channel [3]. Each algorithm $A_j$, $1 \le j \le N$, is specified in iteration $l$, $l \ge 1$, by its variable-node and check-node message maps. Assuming that all messages have the same alphabet space, we can formulate a new message-passing decoding algorithm $H$, defined based on $A_1, \ldots, A_N$ as follows. In iteration $l$ of $H$, variable nodes and check nodes are randomly partitioned into $N$ groups according to probability mass function vectors $\vec{\alpha}_l = (\alpha_{l,1}, \ldots, \alpha_{l,N})$ and $\vec{\beta}_l = (\beta_{l,1}, \ldots, \beta_{l,N})$, respectively, and the nodes in group $j$, $1 \le j \le N$, process the messages in accordance with the constituent algorithm $A_j$. The new algorithm is said to be the hybrid of $A_1, \ldots, A_N$ according to hybridization sequences $\{\vec{\alpha}_l\}$ and $\{\vec{\beta}_l\}$, and this is denoted by $H(A_1, \ldots, A_N; \{\vec{\alpha}_l\}, \{\vec{\beta}_l\})$.

We can symbolically specify $H$ with message maps that, in iteration $l$, equal the variable-node (respectively, check-node) map of constituent $A_j$ with probability $\alpha_{l,j}$ (respectively, $\beta_{l,j}$), $1 \le j \le N$. (1)
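To make the random-partition construction above concrete, the following Python sketch shows one way an iteration of a hybrid decoder could dispatch each variable node to one of the constituent variable-node maps according to a probability vector. This is a minimal sketch under simplifying assumptions: the function names and the two toy constituent rules are ours, not the authors' implementation, and for clarity it computes a single outgoing value per node rather than the usual per-edge extrinsic messages.

```python
import random

def hybrid_variable_update(channel_bits, check_msgs, constituent_maps, alpha, rng=random):
    """One variable-node half-iteration of a hybrid decoder (illustrative sketch).

    channel_bits     : list of +/-1 channel decisions, one per variable node
    check_msgs       : list of lists; check_msgs[v] holds the +/-1 messages
                       arriving at variable node v from its neighboring checks
    constituent_maps : list of N variable-node message maps; map(m0, msgs) -> +/-1
    alpha            : probability vector of length N used to assign each
                       variable node to one constituent algorithm this iteration
    """
    outgoing = []
    for m0, msgs in zip(channel_bits, check_msgs):
        # Place this node in group j with probability alpha[j], as in the definition above.
        j = rng.choices(range(len(constituent_maps)), weights=alpha, k=1)[0]
        outgoing.append(constituent_maps[j](m0, msgs))
    return outgoing

# Two toy hard-decision constituent rules (illustrative only):
def flip_on_unanimity(m0, msgs):
    # Flip the channel value only if every incoming check message disagrees with it.
    return -m0 if all(m == -m0 for m in msgs) else m0

def flip_on_majority(m0, msgs):
    # Flip the channel value if a strict majority of check messages disagree with it.
    return -m0 if sum(m == -m0 for m in msgs) > len(msgs) / 2 else m0
```

For a time-invariant hybrid, the same `alpha` would be reused in every iteration, whereas a switch-type hybrid would put all of its mass on a single constituent and change that choice from iteration to iteration.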

We focus in this letter on hybrid algorithms whose constituents belong to the family of MB algorithms (refer to the Appendix for an overview of this family). More precisely, we investigate the cases where $C(d_v, d_c)$ is decoded over a binary symmetric channel (BSC), using hybrids of the form $H(\mathrm{MB}_{w_1}, \ldots, \mathrm{MB}_{w_N}; \{\vec{\alpha}_l\})$.¹
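The Appendix defines the order-$w$ majority-based algorithm $\mathrm{MB}_w$: the channel value is flipped only when at least $d_v - 1 - w$ of the incoming check messages disagree with it. The sketch below is one concrete Python reading of that rule; the flip threshold $d_v - 1 - w$, the helper names, and the parity (product) check-node map are our assumptions about the definition, not code from the paper.

```python
def mb_check_map(msgs):
    """Check-node map shared by all MB algorithms: the outgoing hard message is
    the product (i.e., the XOR in +/-1 notation) of the incoming messages."""
    out = 1
    for m in msgs:
        out *= m
    return out

def mb_variable_map(w, m0, msgs):
    """Variable-node map of the order-w majority-based algorithm MB_w (assumed form).

    m0   : +/-1 value received from the channel for this variable node
    msgs : the d_v - 1 incoming +/-1 check-node messages (extrinsic set)
    Flips the channel value only if at least (d_v - 1 - w) of the incoming
    messages disagree with it; otherwise it repeats the channel value.
    """
    disagreements = sum(1 for m in msgs if m == -m0)
    return -m0 if disagreements >= len(msgs) - w else m0
```

Under this reading, for $d_v = 8$ the admissible orders are $w = 0, \ldots, 3$: $\mathrm{MB}_0$ flips only on unanimous disagreement (Gallager's algorithm A), while $\mathrm{MB}_3$ is the simple-majority rule.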

One way of combining MB algorithms is to switch between them through the iteration process; the hybridization sequence $\{\vec{\alpha}_l\}$ in this case consists of elementary (unit) vectors. Among such switch-type algorithms, the greedy algorithm, denoted $G$, is notable. This algorithm guarantees the fastest possible convergence of the error probability to zero by switching in each iteration to the MB algorithm that provides the greatest reduction in the error probability [1, p. 50]. To do so, however, $G$ requires the exact value of the channel parameter, which might not always be known. Another way of combining MB algorithms is to blend them together in a time-invariant manner; in this case, $\vec{\alpha}_l = \vec{\alpha}$ is independent of $l$.

¹Since all MB algorithms have the same check-node message map, we have dropped $\{\vec{\beta}_l\}$ from the notation.


TABLE I
THRESHOLD VALUES FOR MB, TIME-INVARIANT HYBRID, AND SWITCH-TYPE HYBRID ALGORITHMS

Fig. 1. Graphical representation of density evolution [3] for $C(8,9)$ decoded by $\mathrm{MB}_0$, $\mathrm{MB}_1$, $\mathrm{MB}_2$, $\mathrm{MB}_3$, and the time-invariant hybrid $H(\mathrm{MB}_0, \mathrm{MB}_1, \mathrm{MB}_2, \mathrm{MB}_3; \vec{\alpha})$, over a BSC with crossover probability $\epsilon_0 = 0.0720$.

It is shown in the Appendix that for a given channel parameter $\epsilon_0$, an MB algorithm $\mathrm{MB}_w$ can be represented by a simple curve $y = f_w(x; \epsilon_0)$. Similarly, the curve

$$f(x; \vec{\alpha}, \epsilon_0) = \sum_{w} \alpha_w \, f_w(x; \epsilon_0) \qquad (2)$$

represents the hybrid algorithm devised by blending the MB algorithms according to the vector $\vec{\alpha}$. By an argument similar to that in [2], it can be shown that the average error probability of this algorithm converges to zero with iteration if and only if $f(x; \vec{\alpha}, \epsilon_0) < x$ for all $x \in (0, \epsilon_0]$.

Example 1: Consider $C(8,9)$ and the four MB algorithms that can be used to decode this ensemble over a BSC, namely $\mathrm{MB}_0$, $\mathrm{MB}_1$, $\mathrm{MB}_2$, and $\mathrm{MB}_3$. Also consider $H(\mathrm{MB}_0, \mathrm{MB}_1, \mathrm{MB}_2, \mathrm{MB}_3; \vec{\alpha})$, the algorithm that blends these MB algorithms according to a fixed vector $\vec{\alpha}$. Fig. 1 depicts the curves representing this hybrid and its constituents for a channel with $\epsilon_0 = 0.0720$. One can see that the necessary and sufficient condition for the convergence of the error probability to zero, i.e., $f_w(x; \epsilon_0) < x$ for all $x \in (0, \epsilon_0]$, is not satisfied for any of these MB algorithms. On the other hand, as the portrayed evolution of the expected fraction of incorrect messages for the hybrid shows, its error probability converges to zero as the number of iterations increases, and, interestingly, there is still room for the hybrid to cope with even worse channels.

For every message-passing algorithm, the supremum of all channel parameters for which the error probability converges to zero is called the "threshold" [3]. For hybrid algorithms, the threshold can be maximized over all hybridization vectors. (It is easy to show that the greedy switching algorithm $G$ achieves the maximum threshold among all switch-type algorithms with MB constituents [1, p. 50].) Table I lists the threshold values of the MB algorithms and the maximum threshold values of the time-invariant and switch-type hybrid algorithms for several $(d_v, d_c)$ pairs. As can be seen, both hybrid algorithms have significantly higher thresholds than their constituents, which is an indication of their superior performance.
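The convergence test in (2) and the threshold search described above are straightforward to reproduce numerically. The Python sketch below uses the standard density-evolution recursion for majority-based decoding of a $(d_v, d_c)$-regular ensemble over a BSC; the explicit form of $f_w$ is our assumption of the usual Gallager-style expression (consistent with the recursion in the Appendix), and the function names and example blend vector are illustrative rather than taken from the paper.

```python
from math import comb

def tail(p, n, b):
    """P[Binomial(n, p) >= b]."""
    return sum(comb(n, t) * p**t * (1 - p)**(n - t) for t in range(b, n + 1))

def f_mb(x, eps, dv, dc, w):
    """One assumed density-evolution step of MB_w on a (dv, dc)-regular ensemble
    over a BSC with crossover probability eps, from message error rate x.
    Uses the order-w rule: flip the channel value when at least dv-1-w of the
    dv-1 extrinsic check messages disagree with it."""
    q = (1 - (1 - 2 * x) ** (dc - 1)) / 2   # error rate of check-to-variable messages
    b = dv - 1 - w                          # flipping threshold of MB_w
    return eps * (1 - tail(1 - q, dv - 1, b)) + (1 - eps) * tail(q, dv - 1, b)

def f_hybrid(x, eps, dv, dc, alpha):
    """Curve (2): a time-invariant blend of MB_0, ..., MB_{N-1} with weights alpha."""
    return sum(a * f_mb(x, eps, dv, dc, w) for w, a in enumerate(alpha))

def converges(eps, dv, dc, alpha, max_iter=2000, target=1e-10):
    """Iterate x_{l+1} = f(x_l; alpha, eps) from x_0 = eps and report convergence."""
    x = eps
    for _ in range(max_iter):
        x = f_hybrid(x, eps, dv, dc, alpha)
        if x < target:
            return True
    return False

def threshold(dv, dc, alpha, lo=0.0, hi=0.2, steps=40):
    """Bisection estimate of the largest crossover probability for which the
    blend alpha converges (the threshold of this time-invariant hybrid)."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid, dv, dc, alpha) else (lo, mid)
    return lo

# Illustrative use for C(8,9), with a made-up blend vector (not the paper's alpha):
# print(converges(0.0720, 8, 9, [0.1, 0.2, 0.3, 0.4]))
# print(threshold(8, 9, [0.1, 0.2, 0.3, 0.4]))
```

Maximizing `threshold` over candidate blend vectors (e.g., by a grid or exhaustive search over the probability simplex) gives the maximum threshold of the time-invariant hybrid referred to in Table I.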

Fig. 2. Comparison between the convergence speed of the time-invariant and switch-type hybrid algorithms, for $C(8,9)$ and for channel parameters equal to 0.25, 0.50, 0.75, 0.90, 0.95, and 1.00 times the maximum threshold (left to right). (Both algorithms are set to achieve the maximum threshold 0.0756.)

III. COMPARISON BETWEEN TIME-INVARIANT AND SWITCH-TYPE HYBRID ALGORITHMS

Comparison of the threshold values in Table I shows that time-invariant hybrid algorithms are slightly inferior, in terms of performance, to their switch-type counterparts. They are, however, as we will subsequently demonstrate, more robust and can better cope with unknown or changing channel conditions.

A. Unknown Channel Parameter

Let us consider the case where the channel parameter is unknown and our main concern is to ensure the convergence of the time-invariant or switch-type algorithm for as wide a range of channel parameters as possible. Based on the fact that the curves representing MB algorithms are increasing in the channel parameter, it can be proved that the convergence of the error probability of a time-invariant or a switch-type algorithm to zero for a parameter $\epsilon_0$ implies its convergence to zero for every parameter $\epsilon < \epsilon_0$. Therefore, the best strategy we can choose in this case is to set these algorithms for their maximum thresholds. The following example shows that, in such a scenario, although the convergence of switch-type algorithms is faster than that of time-invariant algorithms when the channel parameter is close to the maximum threshold, the latter can outpace the former for a wide range of parameters less than the maximum threshold.

Example 2: Consider $C(8,9)$ again. For this ensemble, the maximum threshold for the time-invariant algorithm is obtained by using a fixed blend vector $\vec{\alpha}$, while for the switch-type algorithm it is achieved by starting with one MB algorithm and switching to two other MB algorithms


Fig. 3. Comparison between the convergence speed of the time-invariant and switch-type hybrid algorithms, for $C(8,9)$. Subscripts and superscripts of the curve labels represent actual and estimated channel parameters, respectively.

after iterations 19 and 29, respectively. For both algorithms the maximum threshold is $\epsilon^* = 0.0756$. Assuming that the channel parameter is unknown, we set both algorithms to achieve $\epsilon^*$. Fig. 2 depicts error-rate curves versus iteration number for the two algorithms and for several channel parameters. As can be seen, the time-invariant algorithm outpaces the switch-type algorithm for all parameters less than, say, about 95% of $\epsilon^*$.

B. Underestimated Channel Parameter

In this subsection, we compare the convergence speed of time-invariant and switch-type algorithms for the case where both algorithms are set to have the fastest convergence,² but, due to changes in channel conditions or estimation errors, we underestimate the channel parameter, and hence the setting is suboptimal. The following example shows how differently the two algorithms perform under such circumstances.

Example 3: Consider $C(8,9)$ again, and suppose that while the actual channel parameter is $\epsilon$, we set the time-invariant and switch-type algorithms for their fastest convergence over a channel with (estimated) parameter $\hat{\epsilon} < \epsilon$. Fig. 3 compares the two algorithms. The subscripts and the superscripts of the curve labels represent $\epsilon$ and $\hat{\epsilon}$, respectively. As can be seen, while underestimation of the channel parameter can result in the failure of the switch-type algorithm to converge, it has practically no effect on the performance of the time-invariant algorithm. Moreover, comparing the corresponding curves shows that the performance of the time-invariant algorithm designed to achieve the maximum threshold $\epsilon^*$, on a channel with $\epsilon < \epsilon^*$, is almost as good as that of the time-invariant algorithm particularly optimized for this channel. This, however, is not the case for the switch-type algorithm. This demonstrates that the optimal design for time-invariant algorithms is universal in the sense that the algorithm designed to achieve $\epsilon^*$ performs close to optimal, in terms of convergence speed, on a wide range of channels with $\epsilon < \epsilon^*$.³
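A numerical comparison in the spirit of Examples 2 and 3 can be sketched with density evolution: the time-invariant decoder iterates one fixed blend, while the switch-type decoder is modeled here by greedily picking, at each iteration, the single MB rule that most reduces the predicted error rate for its (possibly misestimated) design parameter. The recursion below repeats the assumed Gallager-style expression from the earlier sketch; the greedy model of the switch-type schedule and all names are illustrative assumptions, not the paper's exact design procedure.

```python
from math import comb

def tail(p, n, b):
    return sum(comb(n, t) * p**t * (1 - p)**(n - t) for t in range(b, n + 1))

def f_mb(x, eps, dv, dc, w):
    # Assumed density-evolution step of MB_w over a BSC (see earlier sketch).
    q = (1 - (1 - 2 * x) ** (dc - 1)) / 2
    b = dv - 1 - w
    return eps * (1 - tail(1 - q, dv - 1, b)) + (1 - eps) * tail(q, dv - 1, b)

def iters_time_invariant(eps_actual, dv, dc, alpha, target=1e-6, max_iter=500):
    """Iterations needed by a fixed blend alpha on the actual channel."""
    x = eps_actual
    for it in range(1, max_iter + 1):
        x = sum(a * f_mb(x, eps_actual, dv, dc, w) for w, a in enumerate(alpha))
        if x < target:
            return it
    return None  # did not converge within max_iter

def iters_switch_type(eps_actual, eps_design, dv, dc, n_mb, target=1e-6, max_iter=500):
    """Greedy switch-type model: the schedule is chosen with the (possibly
    underestimated) design parameter eps_design, while the true error rate
    evolves with eps_actual, so a bad estimate can derail the schedule."""
    x_pred, x_true = eps_design, eps_actual
    for it in range(1, max_iter + 1):
        # Pick the MB order that the design model predicts is best this iteration.
        w = min(range(n_mb), key=lambda order: f_mb(x_pred, eps_design, dv, dc, order))
        x_pred = f_mb(x_pred, eps_design, dv, dc, w)
        x_true = f_mb(x_true, eps_actual, dv, dc, w)
        if x_true < target:
            return it
    return None

# Illustrative use for C(8,9) with a made-up blend (not the paper's vectors):
# print(iters_time_invariant(0.070, 8, 9, [0.1, 0.2, 0.3, 0.4]))
# print(iters_switch_type(0.070, 0.060, 8, 9, 4))   # underestimated design parameter
```

Sweeping `eps_actual` over fractions of the maximum threshold reproduces the kind of convergence-speed comparison shown in Figs. 2 and 3, under the stated modeling assumptions.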

²For the time-invariant algorithm, the optimal blend $\vec{\alpha}$ is obtained by exhaustive search, while for the switch-type algorithm, $G$ provides the optimal switching sequence.
³We have tested other ensembles and have consistently observed the same general trends as in Examples 2 and 3.

IV. CONCLUSION

A general framework for hybrid decoding algorithms is introduced. Using density evolution and focusing on hybrid algorithms whose constituents are majority-based algorithms, we show the great improvement in threshold that "time-invariant" and "switch-type" hybrid algorithms can provide compared to their constituents. We also investigate the speed of convergence of hybrid algorithms and show that time-invariant algorithms can outperform switch-type algorithms when the channel parameter is not perfectly known. This suggests that time-invariant hybrid algorithms could be practically more attractive than the well-known (switch-type) Gallager's algorithm B.

APPENDIX

Consider $C(d_v, d_c)$, and let the channel be binary symmetric. For a nonnegative integer $w$, the majority-based (MB) algorithm of order $w$, denoted by $\mathrm{MB}_w$, is defined by the message maps

$$\Psi_v^{(w)}(m_0, m_1, \ldots, m_{d_v-1}) = \begin{cases} -m_0 & \text{if } |\{i \ge 1 : m_i = -m_0\}| \ge d_v - 1 - w, \\ m_0 & \text{otherwise,} \end{cases}$$

together with the common check-node map, which outputs the product of the incoming messages, where all messages, including the channel message $m_0$, have an alphabet space of $\{-1, +1\}$. Assuming a cycle-free decoding neighborhood [3], it is shown that for a given channel parameter $\epsilon_0$, the recursion

$$x_{l+1} = f_w(x_l; \epsilon_0) = \epsilon_0 \left[ 1 - p_w(x_l) \right] + (1 - \epsilon_0)\, q_w(x_l), \qquad \text{(A-1)}$$

where the polynomials $p_w(x)$ and $q_w(x)$ are given by

$$p_w(x) = \sum_{t = d_v-1-w}^{d_v-1} \binom{d_v-1}{t} \left( \frac{1 + (1-2x)^{d_c-1}}{2} \right)^{t} \left( \frac{1 - (1-2x)^{d_c-1}}{2} \right)^{d_v-1-t}$$

and

$$q_w(x) = \sum_{t = d_v-1-w}^{d_v-1} \binom{d_v-1}{t} \left( \frac{1 - (1-2x)^{d_c-1}}{2} \right)^{t} \left( \frac{1 + (1-2x)^{d_c-1}}{2} \right)^{d_v-1-t},$$

describes the evolution of the expected fraction of erroneous messages with iteration number, if initiated by $x_0 = \epsilon_0$ [1, p. 50]. This recursion can be represented by the curve

$$y = f_w(x; \epsilon_0), \qquad \text{(A-2)}$$

which is increasing in both $x$ and $\epsilon_0$. It is shown in [2] that $x_l \to 0$ as $l \to \infty$ if and only if $f_w(x; \epsilon_0) < x$ for all $x \in (0, \epsilon_0]$.

REFERENCES
[1] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.
[2] P. Zarrinkhat and A. H. Banihashemi, "Density evolution and convergence properties of majority-based algorithms for decoding low-density parity-check (LDPC) codes," in Proc. 2002 Allerton Conf., Monticello, IL, USA, pp. 1425-1434.
[3] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599-618, Feb. 2001.