IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 47, NO. 7, JULY 1999


Error-Trellises for Convolutional Codes—Part II: Decoding Methods

Meir Ariel and Jakov Snyders

Abstract—Soft-decision maximum-likelihood decoding of convolutional codes over GF(q) can be accomplished via searching through an error-trellis for the least weighing error sequence. The error-trellis is obtained by a syndrome-based construction. Its structure lends itself particularly well to the application of expedited search procedures. The method to carry out such error-trellis-based decoding is formulated by four algorithms. Three of these algorithms are aimed at reducing the worst case computational complexity, whereas by applying the fourth algorithm the average computational complexity is reduced under low to moderate channel noise level. The syndrome decoder achieves substantial worst case and average computational gains in comparison with the conventional maximum-likelihood decoder, namely the Viterbi decoder, which searches for the most likely codeword directly within the code.

Index Terms—Convolutional codes, error-trellis, maximum-likelihood decoding, syndrome decoding.

I. INTRODUCTION

For a given coset Γ of a convolutional code C, an error-trellis T_Γ is a directed graph that represents all the sequences belonging to Γ. A companion paper [1] describes a syndrome-based construction of error-trellises for convolutional codes over GF(q). In this paper we present soft-decision maximum-likelihood (ML) decoding methods that utilize error-trellises. Substantial gain in computational complexity is achieved by exploiting the properties of such trellises.

The syndrome decoding scheme consists of the following steps. Symbol-by-symbol detection applied to the received sequence r produces z, the sequence of most likely q-ary symbols. The coset Γ containing z is uniquely characterized by the syndrome s corresponding to z. Then an error-trellis T_Γ representing Γ is constructed. This step is performed section by section, according to the values of segments of the syndrome, in a way that the identity of the various states along the trellis is known. The advance knowledge of the states is of great importance: it enables the design of an efficient coset-search, which is the main step of the decoding procedure. This search, among the paths of T_Γ, is aimed at identifying the error sequence ê which corrects z, i.e., the one such that z − ê is the most likely codeword.

Paper approved by M. Fossorier, the Editor for Coding and Communication Theory of the IEEE Communications Society. Manuscript received December 23, 1997; revised November 24, 1998. The authors are with the Faculty of Engineering, Department of Electrical Engineering-Systems, Tel Aviv University, Tel Aviv 69978, Israel (e-mail: [email protected]; [email protected]). Publisher Item Identifier S 0090-6778(99)05226-5.
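The steps above can be sketched for a tiny binary code, with a brute-force coset search standing in for the error-trellis search; all names and the toy check matrix below are ours, not the paper's:

```python
from itertools import product

def syndrome_decode(H, llr):
    """Soft syndrome decoding sketch (binary, brute force).  H is a list of
    parity-check rows; llr[i] is the channel log-likelihood ratio of bit i.
    A real decoder replaces the exhaustive coset search below with a
    search through the error-trellis; this only illustrates the flow."""
    n = len(llr)
    z = [1 if l < 0 else 0 for l in llr]        # symbol-by-symbol detection
    syn = [sum(h[i] * z[i] for i in range(n)) % 2 for h in H]  # syndrome of z
    if not any(syn):                            # zero syndrome: no search at all
        return z
    w = [abs(l) for l in llr]                   # soft weight of flipping bit i
    best_e, best_wt = None, float("inf")
    for e in product((0, 1), repeat=n):         # search the coset of z
        if [sum(h[i] * e[i] for i in range(n)) % 2 for h in H] != syn:
            continue                            # e is not in the coset
        wt = sum(wi for wi, ei in zip(w, e) if ei)
        if wt < best_wt:
            best_e, best_wt = e, wt
    return [(zi - ei) % 2 for zi, ei in zip(z, best_e)]  # c = z - e
```

For the toy repetition-style code with check rows [[1,1,0],[0,1,1]], a received LLR vector with one unreliable position is corrected to the nearest codeword.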

Error-trellis decoders, presented in [2] and in this paper, combine the compact description of convolutional codes by means of a trellis diagram with the efficiency of the soft-decision syndrome decoder. This strategy first appeared in [3], where it was aimed at hard-decision decoding of convolutional codes. A similar approach was taken in [4] for devising syndrome decoders for linear block codes. Various types of ML and suboptimal decoders based on the coset-search approach are presented in several papers [5]–[10]. A complementary type of reliability-based decoder performs code-search [11], [12]. A decoding method that employs a mixture of code-search with coset-search is presented in [13]. None of the decoders [5]–[13], however, is based on trellis diagrams. An approach to efficient ML code-search was established by the introduction of code-trellises for coset codes [14] and has gained widespread use. Among the more recent contributions to the topic of trellis-based code-search are a construction of code-trellises for convolutional codes on the basis of the scalar check matrix [15] and a construction of code-trellises for block coset codes with the aid of convolutional encoders [16].

Throughout this paper, transmission through a q-ary input memoryless channel will be considered. We shall regard the paths through an error-trellis as representing sequences of deviations, i.e., errors, from a particular coset member. For the purpose of ML decoding, it turns out to be advantageous to use the symbol-by-symbol detected sequence z as the reference coset member. Accordingly, we shall label the branches of an error-trellis by error symbols with their associated soft weights (also called reliabilities in the binary case), rather than by output symbols with their associated values of the log likelihood function. Under this convention, various paths connecting nodes at a given pair of depths have, in general, different a priori probabilities.
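The soft weights attached to error symbols can be computed directly from the channel log-likelihoods. A minimal sketch of this labeling, with our own names and with mod-q arithmetic standing in for GF(q) subtraction (the two coincide for prime q):

```python
def error_symbol_weights(log_f, q):
    """Sketch of the per-position weights w_i(e) = log f(r_i|z_i)
    - log f(r_i|z_i - e) for deviation e from the hard decision z_i.
    log_f[i][a] is the log-likelihood of symbol a at position i.
    Symbol subtraction is taken modulo q for illustration."""
    z, w = [], []
    for row in log_f:
        zi = max(range(q), key=lambda a: row[a])    # most likely symbol z_i
        z.append(zi)
        # w[i][0] is always zero: no deviation costs nothing
        w.append([row[zi] - row[(zi - e) % q] for e in range(q)])
    return z, w
```

Only the nonzero deviations from z_i carry nonzero weight, which is exactly why error-trellis branch labels are sparser than the q-valued log-likelihood labels of a code-trellis.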
This fundamental quality of error-trellises is utilized for expediting the search through the trellis; details are provided in Section III. The paper is organized as follows. The principles of ML soft-decision syndrome decoding of convolutional codes over GF(q) are presented in Section II. In Section III, the various methods to expedite the ML soft decoding are formulated, and the advantages of employing those methods are clarified. Simulation results are provided in Section IV. We remark that many of the ideas of efficient error-trellis ML decoding of convolutional codes presented herein appeared in, or can be deduced from, our earlier paper [2]. This paper, however, describes the entire method in a readily accessible manner, whereby the means is provided for a systematic design of ML decoders suited to all convolutional codes. The various techniques to simplify the trellis and

Fig. 1. Soft syndrome decoding of convolutional codes.

expedite the syndrome decoding are formulated in terms of four algorithms and supported by several examples. In addition, the description of the paper generalizes the binary case of [2] to q-ary codes. Thus, the results of this paper complement and enhance the results of [1] and [2].

II. SOFT SYNDROME DECODING APPROACH

Let an (n, k) convolutional code C over GF(q) be defined by a polynomial check matrix H(D) = H_0 + H_1 D + · · · + H_m D^m, where H_0, H_1, . . . , H_m are (n − k) × n matrices over GF(q). Then a truncated version C_N of the code (including the tail bits) is specified by the scalar check matrix

$$
H=\begin{bmatrix}
H_0 & & & \\
H_1 & H_0 & & \\
\vdots & H_1 & \ddots & \\
H_m & \vdots & \ddots & H_0 \\
 & H_m & & H_1 \\
 & & \ddots & \vdots \\
 & & & H_m
\end{bmatrix}\tag{1}
$$

Consider transmission of a codeword c = (c_1, . . . , c_N) of the truncated code C_N through a q-ary input memoryless channel with transition probability density functions f(· | a), a ∈ GF(q), defined on the real line. Let r = (r_1, . . . , r_N) denote the sequence of symbols received at the output of the channel. Symbol-by-symbol detection applied to r yields the vector z = (z_1, . . . , z_N), consisting of the most likely q-ary symbols in each of the N positions. ML decoding amounts to identifying the codeword ĉ which maximizes

$$
\prod_{i=1}^{N} f(r_i \mid c_i) \tag{2}
$$

Observe that symbol-by-symbol detection yields the global maximum of (2), i.e.,

$$
f(r_i \mid z_i) \ge f(r_i \mid a) \quad \text{for all } a \in \mathrm{GF}(q). \tag{3}
$$

Therefore, maximization of (2) is equivalent to minimization of the sum of the differences log f(r_i | z_i) − log f(r_i | c_i) over all c ∈ C_N.

Let the syndrome s be given by s = Hz^T, where the superscript T stands for transposition. The coset Γ(s) of C_N is specified by the value of s. An element e of this coset, called an error vector, will be selected to correct z: ĉ = z − e, where the subtraction is over GF(q). Denote by w_i the function w_i : GF(q) → [0, ∞) defined by

$$
w_i(e_i) = \log f(r_i \mid z_i) - \log f(r_i \mid z_i - e_i). \tag{4}
$$

Then, for a given received r, w_i(e_i) (5) is the “cost” of preferring the candidate z_i − e_i to z_i as the value of the code symbol c_i. (In the binary case [2], w_i(1) is equivalent to |λ_i|, the weight of the ith error symbol, where λ_i is the log-likelihood ratio.) The weight of any set of error symbols is defined to be the sum of their weights. The ML decoding can be accomplished by setting ĉ = z − ê, where ê is the least weighing (in the binary case, least reliable) error vector, i.e., ê satisfies

$$
\sum_{i=1}^{N} w_i(\hat e_i) = \min_{e \in \Gamma(s)} \sum_{i=1}^{N} w_i(e_i). \tag{6}
$$

This procedure to obtain ĉ by performing a coset-search is the soft (q-ary) ML syndrome decoding, also called the generalized Wagner rule [8]. The weight (4) used for the coset-search (6) is the counterpart of the log likelihood function which is used for code-search. If e_i = 0 then, by (4), w_i(e_i) = 0. Hence w_i assumes nonzero values only for the nonzero deviations from the ith hard decision z_i, whereas the log likelihood function assumes q values for a given r_i. If s = 0 then, by (6), ê = 0, hence ĉ = z. Thus no search is required at all in case the syndrome happens to be zero. For a convolutional code, this is rather unlikely to occur unless a short truncation length is considered. Nonetheless, portions of the syndrome are zero and, depending on the noise level of the channel, they can be quite lengthy. This observation is the key to a powerful method to reduce the average computational complexity of soft ML decoding, as discussed in [1]. To accomplish the search through Γ(s), we shall use an error-trellis. A flowchart of the syndrome decoding procedure is provided in Fig. 1.

III. EXPEDITED ML COSET SEARCH

A. Error-Trellis

Given a code-trellis T_C

for the code C_N, one can obtain a trellis for the coset Γ(s) of C_N by adding a fixed member of Γ(s) to the output

ARIEL AND SNYDERS: ERROR TRELLISES FOR CONVOLUTIONAL CODES—II

symbols along the paths of T_C. Then, by changing the weights associated with the symbols from log likelihoods to the weights (4), the trellis obtained is suitable to carry out the search (6) for ê. However, this relabeling procedure has serious drawbacks. It produces as many trellises as the number of coset members, all describing the same coset Γ(s). Furthermore, the identity of the various states changes along the trellis in a manner which is unknown in advance, unless all the trellises are stored in memory. Consequently, it is impractical to apply such a trellis for implementing the shortcuts, expressed by Algorithms 1–4, which make the syndrome decoding approach advantageous. Nonetheless, the concept may be useful for understanding the essentials of the decoding procedure without studying [1] in detail. A syndrome-based construction of a trellis for Γ(s) which is better suited to our purpose is presented in [1]. For distinction, it is named an error-trellis, denoted T_Γ. The following partitioned form of H of (1) is the basis of the construction of [1]:


$$
H=\begin{bmatrix} H^{(1)} \\ H^{(2)} \\ \vdots \end{bmatrix}\tag{7}
$$

Here, each H^{(t)} has l rows, where l is an integer multiple of n − k which is determined by the designer. The syndrome is segmented in accordance with the partitioning of H, i.e., s = (s_1, s_2, . . .), where s_t stands for the segment of s corresponding to the rows of H^{(t)}. The trellis T_Γ has one section per syndrome segment. Each section describes a portion, of length given by (8), of the error vectors belonging to Γ(s). Similarly to T_C, the tth section of T_Γ has source states and sink states, where the collection of states may be viewed as a space of tuples over GF(q). One of the states of any trellis section is the zero state. T_Γ starts with a single source state and terminates with a single terminal state. The distinctions between an error-trellis T_Γ and a conventional code-trellis T_C are summarized in the following statements, referred to in [1] as Properties 1–5.

1) T_Γ describes Γ(s), whereas T_C describes C_N.
2) All the sections of T_C have the same structure, whereas a section of T_Γ is drawn from a collection of distinct modules. This collection is identical for all the sections.
3) The label assigned to a branch of T_Γ is the set of the nonzero deviations from the symbols of z or, instead, the set of the associated nonzero weights. Consequently, the cardinality of the labels is not fixed; it is at most the section length and can even be zero. In contrast, all the branch labels of T_C have the same cardinality for an ordinary code-trellis.
4) Unlike the equiprobable paths (codewords) through T_C, the paths (error vectors) through T_Γ have (usually) different a priori probabilities. Consequently, for any given syndrome s, T_Γ usually has many redundant paths that can be deleted in advance, i.e., prior to taking into account their weights.
5) In the module corresponding to a zero-syndrome segment s_t = 0, there exists a zero-weight branch connecting the source state 0 with the sink state 0.

We shall also use the following notation. Let P(σ, σ′) be the collection of all (parallel) subpaths of T_Γ starting at state σ and terminating at state σ′. The sum of the weights of the error symbols along a subpath p of T_Γ is called the weight w(p) of p. The weight of a state σ′ is defined by wt(σ′) = min {wt(σ) + w(p)}, the minimum being taken over the preceding states σ and the subpaths p ∈ P(σ, σ′). Obviously, the least weighing error vector ê is the one associated with the path p̂ satisfying

$$
w(\hat p) = \min_{p} w(p), \tag{9}
$$

the minimum being taken over all paths p from the source state to the terminal state of T_Γ. For any matrix A, let ν(A) be the number of nonzero columns of A; the counts in (10) are defined accordingly for the submatrices of (7).

Here Λ_t is given by (8) and Φ_t is the set of positions of the nonzero columns of H^{(t)}. Let A_t be the matrix obtained by puncturing the columns of H^{(t)} at all the positions belonging to Φ_t. Then the number of parallel branches between any pair of source and sink states of section t of T_Γ is given by (11) [1].

B. Covering Subpaths

Consider two subpaths p and p′ starting at the same depth and terminating at the same depth (not necessarily at the same states), with associated error symbols e and e′, respectively. We shall write e ≤ e′ if e_i ≠ 0 implies e_i = e′_i. By p ≤ p′ we mean e ≤ e′ and p ≠ p′. Evidently, the relation ≤ represents an ordering by weight, i.e., p ≤ p′ implies w(p) ≤ w(p′). Note that p ≤ p′ does not exclude w(p) = w(p′). A subpath p′ is called a covering subpath if there exists a subpath p parallel to it (i.e., p and p′ start at the same state and terminate at the same state) such that p ≤ p′. All covering subpaths can be deleted from the error-trellis without affecting the outcome of the ML decoding.

In case that s = 0, the error-trellis resembles a code-trellis, with the same module selected for all its sections. In this case, however, any path other than the all-zero path is a covering path. Accordingly, the error-trellis reduces to a single path, and the most likely codeword is known to be z. (No similar behavior is displayed by a code-trellis.) Under practical circumstances, covering subpaths exist also if s ≠ 0. Their efficient a priori elimination requires, however, that a covering subpath and (at least) one subpath


covered by it differ in positions that are confined to a single section, i.e., a module. The elimination of covering parallel branches is addressed in Algorithms 1 and 2. If l is set to be small, parallel branches within a section are unlikely to exist. Fortunately, deletion of branches from T_Γ can then also be accomplished on the basis of sorting procedures (see Algorithms 3 and 4). We proceed to describe the four methods of simplifying error-trellises. The first three of these are aimed at reducing the worst case computational complexity of decoding, whereas the fourth method is employed to reduce the average complexity. Efficient implementation of all these methods requires knowledge of the exact structure of the modules, i.e., the building blocks that make up the trellis, as indeed guaranteed by the syndrome-based construction.

C. Elimination of Parallel Branches

Parallel branches exist in an error-trellis when l is set to be sufficiently large. Evidently, sets of parallel branches can be processed simultaneously. Furthermore, an increase of l entails a decrease of the number of trellis sections throughout T_Γ. However, the number of branches leaving or entering a state increases exponentially with l. This results in an increase in the computational complexity which tends to outweigh the gain obtained from the reduction of the number of sections. To counteract such an increase, we propose two methods to eliminate parallel branches. These methods prove to be efficient particularly for high-rate convolutional codes.

Algorithm 1—Elimination of Covering Branches: Given a set B of parallel branches, eliminate any branch b′ ∈ B satisfying b ≤ b′ for some b ∈ B.

The elimination of covering branches is a stage in the design of the trellis, rather than of the decoding. The reduced modules are used to construct the (reduced) error-trellis in the same fashion as the original modules are joined to create T_Γ, i.e., the modularity of the construction is not impaired by applying Algorithm 1.
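Algorithm 1 can be sketched as a pruning of one set of parallel branches, each branch represented by its tuple of error symbols (a toy model with names of our own choosing):

```python
def leq(e, e_prime):
    """The ordering of Section III-B: e <= e' iff e' agrees with e wherever
    e is nonzero, and the two patterns differ.  Then e' labels a covering
    branch.  Error patterns are tuples of (integer) symbols."""
    return e != e_prime and all(a == b for a, b in zip(e, e_prime) if a != 0)

def prune_covering(branches):
    """Algorithm-1-style pruning of ONE set of parallel branches: keep a
    branch only if no other branch in the set lies below it in the ordering."""
    return [b for i, b in enumerate(branches)
            if not any(leq(b2, b) for j, b2 in enumerate(branches) if j != i)]
```

Note that, consistently with the zero-syndrome discussion above, the all-zero pattern covers every other parallel branch, so a parallel set containing it collapses to the zero-weight branch alone.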
Example 1: Consider the rate 2/3 code specified by a polynomial check matrix H(D), with the corresponding scalar check matrix H of the form (1). A section of T_Γ for any coset of the code has two source states and two sink states. Set l = 6. Then, by (11), there are eight parallel branches between any pair of source and sink states of a section. Consider the module for a zero syndrome segment, and represent each branch by the set of nonzero error symbols associated with it, with ∅ standing for a branch with no nonzero error symbols. As the code is binary, all but the first members of two of the four sets of parallel branches, as well as the last four members of the other two sets, are covering branches. These 22 branches can be deleted from the module. An elimination of an identical extent is achievable for the remaining three modules that correspond to the other values of a syndrome segment.

Another method to eliminate parallel branches is based on the observation that a column of H of (1) has a bounded span length. (The span of a vector v is the set of consecutive positions starting with the first and ending with the last nonzero entries of v, and its cardinality is called the span length.) Hence, for sufficiently large values of l, H is likely to have columns whose span is confined to the boundaries of a single submatrix H^{(t)} of l rows, i.e., to a discrete interval. Such columns will be called free columns and can be treated separately from the others; the number of states of the trellis is unaffected by puncturing the free columns [1]. Denote by Ψ_t the set of the positions of the free columns of H^{(t)}. Clearly, Ψ_{t+1} is related to Ψ_t by a shift of positions. Let B_t be the matrix whose columns are the free columns of H^{(t)}, arranged in some arbitrary order which is fixed for all t [assume that the number of free columns is independent of t]. Then B_t is a check matrix of a block code, which we denote by (12).

Obviously, if B_t has more columns than rows, then there exist parallel codewords of (12), i.e., parallel branches. Furthermore, the collection of the parallel branches between any pair σ and σ′ of source and sink states of the section can be described as the concatenation of parallel branches between σ and some intermediate node with a single branch between that node and σ′ [see Example 2, Fig. 2(b)]. The labels of the parallel branches, at a given section of the error-trellis, are the members of a coset of (12).
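The notion of a free column reduces to a simple span test on the scalar check matrix; a sketch with names of our own choosing:

```python
def free_columns(H, l, t):
    """Positions of columns of the scalar check matrix H (list of rows)
    whose span, i.e. the run from first to last nonzero entry, is confined
    to the block of rows [t*l, (t+1)*l): the 'free columns' of section t."""
    free = []
    for j in range(len(H[0])):
        nz = [i for i in range(len(H)) if H[i][j]]   # rows where column j is nonzero
        if nz and t * l <= nz[0] and nz[-1] < (t + 1) * l:
            free.append(j)
    return free
```

For a banded matrix of the form (1) the test succeeds for more columns as l grows, which is why the block-decoding shortcut of Algorithm 2 pays off at larger l.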


Fig. 2. Trellis sections for Example 2: (a) a reduced section of the code-trellis, with dashed lines indicating zero-weight (log-likelihood ratio) transitions, (b) a module of T_Γ, and (c) a reduced module of T_Γ for the case s_j = (00), with a = w_{6j−3}, b = min{w_{6j−5}, w_{6j−4}}, and c = min{w_{6j−2}, w_{6j−1}}.
Algorithm 2—Elimination of Parallel Branches via Block Decoding:
Step 1) Form the check matrix B_t of (12).
Step 2) For each pair of source and sink states, perform the following procedure: identify the least weighing member of each coset of the block code (12), using the weights (4). Eliminate the branches corresponding to the remaining coset members.

The efficiency of Algorithm 2 is obviously determined by the efficiency of the soft syndrome decoder employed for decoding (12). To facilitate the explanation, a simple example will be used. The same principles apply straightforwardly to any convolutional code and any (permitted) value of l.

Example 2: Consider again the two-state trellis of Example 1, for which l = 6. The free columns of the submatrix H^{(t)} yield the check matrix B_t of a block code (12), and by (11) and (12) any set of eight parallel branches can be represented as the concatenation of eight parallel branches with a single branch, as depicted in Fig. 2(b). By identifying the least weighing member within each coset of (12), each of the four sets of eight parallel branches is reduced into a single branch. Straightforward implementation of this procedure requires 81 additions per section. In comparison, decoding of an equivalent section of length six of an ordinary code-trellis consumes 164 additions. However, with the judiciously selected code-trellis of Fig. 2(a) and employment of the Gray code [17], the complexity is reduced to 25 additions per section. Nonetheless, employing the error-trellis is preferable, since its processing can also be greatly improved. To achieve this improvement, the reduced-list method of [7] is applied to decoding jointly all the cosets of (12). This operation requires only four additions and replaces the module of Fig. 2(b) by a reduced module; such a reduced module is presented in Fig. 2(c). In some cases, the reduced module has no parallel branches. The complexity of decoding by this method totals only 9 or 11 additions per section, on average about ten additions per section.
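A brute-force stand-in for the block decoding of Algorithm 2 is shown below; the reduced-list method of [7] is far more efficient, and both the names and the toy matrix are ours:

```python
from itertools import product

def coset_leaders(B, w):
    """For a small binary block code with check matrix B (list of rows),
    find the least weighing error pattern (coset leader under the soft
    weights w) of every coset.  Each set of parallel branches of the
    module is then replaced by the single branch of its coset leader."""
    n = len(w)
    leaders = {}
    for e in product((0, 1), repeat=n):          # enumerate all patterns
        s = tuple(sum(row[i] * e[i] for i in range(n)) % 2 for row in B)
        wt = sum(wi for wi, ei in zip(w, e) if ei)
        if s not in leaders or wt < leaders[s][1]:
            leaders[s] = (e, wt)                 # keep the lightest per coset
    return leaders
```

Every syndrome value of B indexes one set of parallel branches, so the dictionary returned holds exactly the surviving branch labels.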

D. Sorting State Weights

In this subsection, elimination of nonparallel branches is addressed. This elimination requires the sorting of the weights of the states. Hence, in contrast to Algorithm 1, it is part of the decoding rather than of the design of the decoder.

Algorithm 3—Elimination via Sorting State Weights: Consider a set B of (not necessarily parallel) branches leading to the same sink state of the tth section of T_Γ.
Step 1) Sort the weights of the source states of the members of B.

Fig. 3. Butterflies for Example 3: (a) a nonreduced butterfly, (b) reduced butterfly for wt(σ^0) < wt(σ^1), (c) reduced butterfly for wt(σ^1) < wt(σ^0), and (d) reduced butterfly for wt(σ^0) = wt(σ^1).

Step 2) Eliminate any branch b ∈ B satisfying wt(σ) + w(b) ≥ wt(σ′) + w(b′) for some b′ ∈ B, where σ and σ′ are the source states of b and b′, respectively.

The procedure to save computations by applying Algorithm 3 is naturally suited to our coset-search approach. Indeed, the idea was employed already for an example in [2], without explicit formulation of the elimination rule. A similar procedure with respect to (the code-search) Viterbi decoding has been established in [18]. Simplification of the error-trellis based on Algorithm 3 is demonstrated by the following three examples.

Example 3: Consider any rate 1/2 binary convolutional code, with its scalar check matrix partitioned into submatrices of l rows. Our construction yields an error-trellis with the property that, in each section, the weight of any branch is one of w_a, w_b, and w_a + w_b, where a and b are the positions of the two associated error bits [1]. Furthermore, exactly one half of the “butterflies” of any section have the structure exhibited by Fig. 3(a). Apart from calculating the branch weight w_a + w_b, an operation performed only once per section, this type of butterfly can be processed with at most three additions, whereas six additions are required for an equivalent butterfly of a conventional code-trellis. The reduction of the complexity relies on the ordering of the source-state weights, as follows. If wt(σ^0) < wt(σ^1), then the butterfly reduces to that depicted in Fig. 3(b); Fig. 3(c) corresponds to the symmetrical case wt(σ^1) < wt(σ^0). Finally, in case that wt(σ^0) = wt(σ^1), the comparisons are determined by the branch weights alone, yielding the simplified butterfly of Fig. 3(d).

For some codes, both Algorithms 2 and 3 are applicable, as shown in the following example.

Example 4: Consider again the code of Example 1, but choose now a smaller value of l. The modules for the two possible values of a syndrome segment, namely s_i = 0 and s_i = 1, are exhibited by Fig. 4(c) and (d), respectively. By comparing the two branch weights a and b, these modules reduce to those depicted in Fig. 4(e) and (f), respectively, with γ = min{a, b}. Then, a comparison between the weights of the source states of each module specifies, as in Example 3, one of the further reduced modules of Fig. 4(g)–(j). Consequently, the decoding can be accomplished by only five additions. In contrast, the decoding of an equivalent section of a code-trellis, depicted in Fig. 4(a), requires ten additions with the aid of the apparently most
efficient method, which employs the modified trellis section of Fig. 4(b).

The following example demonstrates the application of Algorithm 3 to a four-state butterfly. Such butterflies exist in error-trellises for binary and quaternary codes of various rates, under suitable conditions on the code parameters.

Example 5: Consider the rate 2/3 code specified by a polynomial check matrix H(D) and the corresponding scalar check matrix H. The two modules, corresponding to the syndrome segment values s_i = 0 and s_i = 1, are exhibited by Fig. 5(a) and (b), respectively, where the branch weights are sums of the weights of the associated error bits. To demonstrate the simplification, consider the module of Fig. 5(b), and assume that a partial sorting of the weights of the source states yields

wt(σ^0) ≤ wt(σ^1) ≤ wt(σ^2), wt(σ^3).

Then the totals wt(σ) + w(b) of most of the competing branches are ordered without performing the additions. Accordingly, seven branches are deleted from the module, with the result displayed by Fig. 5(c). An elimination of the same extent applies for all outcomes of the sorting and for both modules. Hence the decoding of the section requires only 18 additions, including four for the sorting and two for calculating the branch weights. For comparison, decoding an equivalent section of a code-trellis requires 32 additions.
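The saving promised by Algorithm 3 can be sketched as follows: after sorting the source-state weights, the decoder may stop before performing the remaining additions, because the nonnegative branch weights cannot make a heavy source state win (a simplified model of ours, not the paper's exact rule):

```python
def survivor(branches):
    """branches: list of (source_state_weight, branch_weight) pairs leading
    to one sink state.  Processing sources in order of increasing weight
    lets the loop break as soon as a source weight alone reaches the best
    total found so far; additions are performed only while needed."""
    order = sorted(range(len(branches)), key=lambda i: branches[i][0])
    best_i, best_total = None, float("inf")
    for i in order:
        sw, bw = branches[i]
        if sw >= best_total:       # every remaining branch loses: stop early
            break
        total = sw + bw            # an addition, performed only when needed
        if total < best_total:
            best_i, best_total = i, total
    return best_i, best_total
```

The result agrees with the exhaustive minimum, while the number of additions depends on how quickly the sorted source weights exceed the running best.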


Fig. 4. Modules for the code of Example 4: (a) a section of a code-trellis, (b) the modified code-trellis section, (c) the module of T_Γ for s_i = 0, (d) the module of T_Γ for s_i = 1, (e) the module for s_i = 0 with γ = min{a, b}, (f) the module for s_i = 1 with γ = min{a, b}, (g) and (h) the reduced modules with γ = min{a, b} and wt(σ^0) ≤ wt(σ^1) for s_i = 0 and s_i = 1, respectively, and (i) and (j) the reduced modules with γ = min{a, b} and wt(σ^1) ≤ wt(σ^0) for s_i = 0 and s_i = 1, respectively.
Fig. 5. Modules for T_Γ of Example 5: (a) the module for s_i = 0, (b) the module for s_i = 1, and (c) the reduced module for s_i = 1 in case that wt(σ^0) ≤ wt(σ^1) ≤ wt(σ^2), wt(σ^3).

E. Identification of Optimal States

A state will be called optimal if it lies on the least-weighing error path (9) through the trellis T_Γ. If a state is recognized to be optimal, then evidently all the other states at the same depth, and all paths passing through them, can be deleted. Let σ and σ′ be optimal states, where σ precedes σ′. Then the portion of ê between the depths of σ and σ′ is necessarily a noncovering member of P(σ, σ′). In particular, if σ and σ′ are optimal zero states and the syndrome segments between them are all zero, then, by Property 5, these states are connected by an all-zero (zero-weight) subpath. Hence, all other subpaths between the depths of σ and σ′ are covering subpaths and can be deleted.
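As a preliminary to this procedure, the runs of all-zero syndrome segments (the zero strings defined below) can be located with a simple scan; a sketch with names of our own choosing:

```python
def zero_strings(syndrome_segments, min_len=2):
    """Locate the maximal runs [start, end] of all-zero segments in a
    segmented syndrome.  Each sufficiently long run is a candidate
    portion of the trellis for degeneration."""
    runs, start = [], None
    for t, seg in enumerate(syndrome_segments):
        if not any(seg):                        # a zero segment
            if start is None:
                start = t
        else:
            if start is not None and t - start >= min_len:
                runs.append((start, t - 1))
            start = None
    if start is not None and len(syndrome_segments) - start >= min_len:
        runs.append((start, len(syndrome_segments) - 1))
    return runs
```

The scan is linear in the syndrome length, so locating the candidate portions adds negligible cost to the decoding.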


Fig. 6. The error-trellis of Example 6: (a) a portion of T_Γ and (b) the degenerated portion of T_Γ, obtained by applying Algorithm 4.

The decoder will attempt to identify optimal states on the basis of the following observation. Due to the short span length of the columns of H, the syndrome has considerably more zero than nonzero segments under low to moderate noise level. Also, due to the structure of H, a long string of zero syndrome segments indicates that the corresponding long portion of z is likely to be error-free.

The procedure of identifying optimal zero states of T_Γ and subsequent deletion of all but the all-zero path connecting those states is called trellis degeneration. This procedure can be applied simultaneously to several portions of the trellis. Remarkably, ML decoding aided by trellis degeneration has an average computational complexity which decreases with increasing signal-to-noise ratio, unlike the Viterbi algorithm, whose computational complexity is fixed. We proceed to describe the trellis degeneration in more detail.

Given some syndrome s, a discrete interval [t_1, t_2] will be called a zero string if s_t = 0 for all t ∈ [t_1, t_2]. For t ∈ [t_1, t_2], any two consecutive zero states 0_{t−1} and 0_t are connected by a zero-weight branch, whereby wt(0_t) is constant throughout the zero string. Consequently, if

$$
\operatorname{wt}(0_t) \le \operatorname{wt}(\sigma_t) \quad \text{for all states } \sigma_t \text{ at depth } t \tag{13}
$$

at some depth t within the zero string, then (13) is guaranteed to be satisfied for all subsequent depths of the string. Assume now that the error-trellis is also decoded backward from the terminal state, and denote by wt′(σ) the weight of the least weighing subpath connecting state σ and the terminal state. Assume that

$$
\operatorname{wt}'(0_{t'}) \le \operatorname{wt}'(\sigma_{t'}) \quad \text{for all states } \sigma_{t'} \text{ at depth } t'. \tag{14}
$$

If (13) is satisfied at a depth t and (14) at a depth t′ of the zero string, with t ≤ t′, then, obviously, both 0_t and 0_{t′} are optimal states.

Algorithm 4—Trellis Degeneration Based on Optimal States:
Step 1) Given a zero string [t_1, t_2], decode forward the error-trellis starting with the t_1th section, computing wt(σ) for all states σ at each depth.

Stop the forward decoding at the depth t which is the smallest integer for which (13) is satisfied.
Step 2) Starting with the t_2th section, decode backward the error-trellis until the depth t′ which is the largest integer for which (14) is satisfied.
Step 3) If t ≤ t′, then delete all the subpaths between depths t and t′ except for the all-zero (i.e., error-free) subpath.

Note that if the zero string starts at the beginning of the trellis, then Step 1 is redundant and the first sections of the trellis are eliminated, whereas if it extends to the end of the trellis, then Step 2 is redundant and the last sections are deleted.

Example 6: Consider the code of Example 4, and assume a received sequence whose syndrome contains a zero string of length ten. The corresponding portion of T_Γ is depicted in Fig. 6(a). Assume, for instance, that the sorted state weights satisfy (13) shortly after the beginning of the string and (14) shortly before its end. Then the corresponding zero states are optimal states. Hence the sections of T_Γ between them can be replaced with a single error-free path, as illustrated by Fig. 6(b).

IV. SIMULATION RESULTS

The computational complexities of decoding convolutional codes based on error-trellises and on the Viterbi algorithm (code-trellis decoding) were compared using computer simulation. BPSK signaling over an additive white Gaussian noise channel and a truncation length of 300 are assumed throughout this section. The complexity is stated in terms of the average number of real additions required to decode one information bit. Fig. 7


Fig. 7. Average computational complexity versus E_b/N_0 for the code-trellis and error-trellis decoders of Example 2.

Fig. 8. Average computational complexity versus E_b/N_0 for code-trellis and error-trellis decoders for the GSM half-rate vocoder convolutional code.

Fig. 9. The effect of increasing q on the proportional computational complexity P.

displays the complexity of the two decoders as a function of E_b/N_0 for the rate 2/3 two-state code of Example 2, where E_b/N_0 is the ratio of information bit energy to the power spectral density of the noise and A1–A4 stand for Algorithms 1–4. A similar comparison for the rate 1/2 64-state GSM half-rate vocoder convolutional code is depicted in Fig. 8. Algorithms 1–3 affect the worst case complexity; hence their performance is essentially independent of the signal-to-noise ratio. The moderate reduction in the average complexity with increasing E_b/N_0 is explained by the increasing frequency of receiving an all-zero syndrome. The application of Algorithm 4, on the other hand, results in a sharp reduction of complexity with increasing signal-to-noise ratio. The reason for this reduction is the following. As the SNR increases, the zero strings become less numerous and longer. Also, both the forward and

backward decoding of a given zero string tend to conclude [i.e., to fulfill conditions (13) and (14)] after a few steps. Therefore, larger portions of the error-trellis become degenerated. The simulation results demonstrate the significant improvement in complexity that is achieved by the proposed decoder. Note, for example, that at E_b/N_0 of 6–7 dB, the ML error-trellis decoder is five to ten times more efficient, in terms of the average complexity, than the Viterbi decoder. These gains are typical for binary convolutional codes. Results of computer simulations of decoding nonbinary codes are exhibited by Fig. 9, where P is the ratio of the average complexity of the error-trellis decoder to the complexity of the Viterbi decoder. These results indicate that the reduction in the worst case complexity due to Algorithm 3 becomes less noticeable with increasing alphabet size q. For large q, zero-weight branches are scarce; hence covering branches are more difficult to identify. The efficiency of Algorithm 4 has no noticeable relation to q. The effect of the design parameter l on the complexity of Algorithms 1–4 for the rate 2/3 binary code of Example 2 is presented in Fig. 10, where P is the proportional complexity as defined earlier. The results of Fig. 10 suggest that, although the branch count increases exponentially with l, it may be advantageous to consider larger values of l when Algorithms 1 and 2 are incorporated in the decoder.

V. CONCLUSION

A syndrome decoder of a q-ary convolutional code C employs the symbol-by-symbol detected version z of the received word, together with the associated weights, and selects an error sequence ê. The decoding scheme consists of the following steps. Given a scalar check matrix


Fig. 10. The effect of the design parameter l on the proportional computational complexity P.

of the code, the syndrome vector corresponding to z is calculated. Then an error-trellis diagram representing the coset determined by the syndrome is constructed. This is carried out by adjoining building blocks, called modules, where each module is selected on the basis of a segment of the syndrome obtained at the output of the syndrome calculator. The number of distinct modules is determined by the design parameter l, which is only required to be a multiple of the number of syndrome symbols per trellis section. The main part of the decoding is a search through the trellis for the least weighing error vector e. Finally, the most likely codeword is obtained by subtracting e from z.

In this paper we presented ways to perform the search through the error-trellis efficiently. It was shown that the relative irregularity of the structure of error-trellises is in fact an advantage over (conventional) code-trellises in the implementation of soft ML decoding. Four elimination algorithms were devised to expedite the search through the error-trellis: Algorithm 1 for the design-stage elimination of parallel branches, Algorithm 2 for the elimination of parallel branches via block decoding, Algorithm 3 for the elimination of branches via sorting of the weights of states, and Algorithm 4 for the elimination of trellis sections corresponding to zero syndrome segments. Decoders comprising one or more of these algorithms were shown in several examples to significantly outperform the best known versions of the Viterbi decoder. The reduction of the complexity was achieved without sacrificing the optimality of the decoding.
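The front end of the scheme summarized above can be illustrated with a small sketch. The code below is a toy illustration only, not the construction of this paper: it assumes a hypothetical systematic rate-1/2 binary convolutional code with generator G(D) = [1, 1+D] and the matching check H(D) = [1+D, 1], computes the syndrome of the hard-decision word section by section, and locates the maximal zero-syndrome runs that Algorithm 4 would allow the decoder to treat as degenerate sections. All names (syndrome, zero_runs) are illustrative.

```python
# Toy sketch of the syndrome-decoder front end for a hypothetical
# rate-1/2 binary code with G(D) = [1, 1+D], H(D) = [1+D, 1].
# Each syndrome bit is s_t = z1_t + z1_{t-1} + z2_t (mod 2), so a
# codeword yields the all-zero syndrome: (1+D)*1 + 1*(1+D) = 0 over GF(2).

def syndrome(z1, z2):
    """Section-by-section syndrome of the hard-decision word (z1, z2)."""
    s, prev = [], 0
    for a, b in zip(z1, z2):
        s.append((a ^ prev ^ b) & 1)  # s_t = z1_t + z1_{t-1} + z2_t mod 2
        prev = a
    return s

def zero_runs(s):
    """Maximal runs [start, end) of zero syndrome segments -- the trellis
    sections Algorithm 4 can degenerate when its stopping conditions hold."""
    runs, start = [], None
    for i, v in enumerate(s):
        if v == 0 and start is None:
            start = i
        elif v != 0 and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(s)))
    return runs

# Encode u = 1,0,1,1 with G(D) = [1, 1+D]:
u = [1, 0, 1, 1]
z1 = u[:]                                        # systematic stream
z2 = [u[i] ^ (u[i - 1] if i else 0) for i in range(len(u))]
assert syndrome(z1, z2) == [0, 0, 0, 0]          # codeword -> all-zero syndrome

# Flip one received bit: the syndrome is nonzero only near the error.
z2_err = z2[:]
z2_err[2] ^= 1
s = syndrome(z1, z2_err)
print(s)             # -> [0, 0, 1, 0]
print(zero_runs(s))  # -> [(0, 2), (3, 4)]: sections a degenerate trellis skips
```

In the full decoder the search for the least-weight error sequence then needs to visit only the sections adjacent to nonzero syndrome segments; the sketch stops at identifying them.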

ACKNOWLEDGMENT

The authors wish to thank the anonymous reviewers for their valuable comments and suggestions.

REFERENCES

[1] M. Ariel and J. Snyders, "Error-trellises for convolutional codes—Part I: Construction," IEEE Trans. Commun., vol. 46, pp. 1592–1601, Dec. 1998.
[2] ——, "Soft syndrome decoding of binary convolutional codes," IEEE Trans. Commun., vol. 43, pp. 288–297, Feb./Apr. 1995.
[3] J. P. M. Schalkwijk and A. J. Vinck, "Syndrome decoding of convolutional codes," IEEE Trans. Commun., vol. COM-23, pp. 789–792, July 1975.
[4] Y. Berger and Y. Be'ery, "Soft trellis-based decoder for linear block codes," IEEE Trans. Inform. Theory, vol. 40, pp. 203–209, May 1994.
[5] H. Miyakawa and T. Kaneko, "Decoding algorithm of error-correcting codes by use of analog weights," Electron. Commun. Japan, vol. 58-A, no. 1, pp. 18–27, Jan. 1975.
[6] J. Snyders and Y. Be'ery, "Maximum likelihood soft decoding of linear block codes and decoders for the Golay codes," IEEE Trans. Inform. Theory, vol. 35, pp. 963–975, Sept. 1989.
[7] J. Snyders, "Reduced lists of error patterns for maximum likelihood soft decoding," IEEE Trans. Inform. Theory, vol. 37, pp. 1194–1200, July 1991.
[8] J. Snyders and Y. Be'ery, "An approach to maximum likelihood decoding of q-ary block codes," in Proc. Conf. Information Science Systems, Baltimore, MD, 1989, pp. 206–208.
[9] N. J. C. Lous, P. A. H. Bours, and H. C. A. van Tilborg, "On maximum likelihood soft-decision decoding of binary linear block codes," IEEE Trans. Inform. Theory, vol. 39, pp. 197–203, Jan. 1993.
[10] M. Ran and J. Snyders, "Constrained designs for maximum likelihood soft decoding of RM(2, m) and the extended Golay codes," IEEE Trans. Commun., vol. 43, pp. 812–820, 1995.
[11] Y. S. Han, C. R. P. Hartmann, and C. C. Chen, "Efficient priority-first search maximum-likelihood soft-decision decoding of linear block codes," IEEE Trans. Inform. Theory, vol. 39, pp. 1514–1523, Sept. 1993.
[12] M. P. C. Fossorier and S. Lin, "Complementary reliability-based decodings of binary block codes," IEEE Trans. Inform. Theory, vol. 43, pp. 1667–1672, Sept. 1997.
[13] M. P. C. Fossorier, S. Lin, and J. Snyders, "Reliability-based syndrome decoding of linear block codes," IEEE Trans. Inform. Theory, vol. 44, pp. 388–398, Jan. 1998.
[14] G. D. Forney, Jr., "Coset codes—Part II: Binary lattices and related codes," IEEE Trans. Inform. Theory, vol. IT-34, pp. 1152–1187, Sept. 1988.
[15] V. Sidorenko and V. Zyablov, "Decoding of convolutional codes using a syndrome trellis," IEEE Trans. Inform. Theory, vol. 40, pp. 1663–1666, Sept. 1994.
[16] M. P. C. Fossorier and S. Lin, "Coset codes viewed as terminated convolutional codes," IEEE Trans. Commun., vol. 44, pp. 1096–1106, Sept. 1996.
[17] J. H. Conway and N. J. A. Sloane, "Soft decoding techniques for codes and lattices, including the Golay code and the Leech lattice," IEEE Trans. Inform. Theory, vol. IT-32, pp. 41–50, Jan. 1986.
[18] M. P. C. Fossorier and S. Lin, "Differential trellis decoding of convolutional codes," preprint.

Meir Ariel received the B.Sc. and M.Sc. degrees (with honors) in electrical engineering from Tel Aviv University, Tel Aviv, Israel, in 1988 and 1994, respectively. He is currently working toward the Ph.D. degree at Tel Aviv University. During 1989–1994, he was with Z. M. M., Givatayim, Israel, and with the Department of Electrical Engineering–Systems at Tel Aviv University. Since 1995, he has been a Consultant to Motorola Inc. His research interests include decoding techniques for error-correcting codes, combined modulation and coding, and data compression.

Jakov Snyders received the B.Sc., M.Sc., and D.Sc. degrees in electrical engineering from the Technion, Israel Institute of Technology, in 1966, 1969, and 1972, respectively. Since 1974, he has been with Tel Aviv University, Tel Aviv, Israel. He held visiting positions at the University of Toronto, the University of California–Los Angeles, McGill University, and the Rensselaer Polytechnic Institute. His research interests include error control coding, spread spectrum communication, multiple user information theory, and stochastic processes.