[Lin, Costello] Error Control Coding, 2E, Solutions Manual

by midoriinu

on

Report

Category:

Documents

Download: 11

Comment: 0

54

views

Comments

Description

Solutions manual for "Error control coding", (Lin, Costello), second edition
Download [Lin, Costello] Error Control Coding, 2E, Solutions Manual

Transcript

Chapter 2

2.3 Since m is not a prime, it can be factored as the product of two integers a and b, m = a·b with 1 < a, b < m. Clearly both a and b are in the set {1, 2, ..., m−1}. It follows from the definition of modulo-m multiplication that the modulo-m product of a and b is 0. Since 0 is not an element of the set {1, 2, ..., m−1}, the set is not closed under modulo-m multiplication and hence cannot be a group.

2.5 It follows from Problem 2.3 that, if m is not a prime, the set {1, 2, ..., m−1} cannot be a group under modulo-m multiplication. Consequently, the set {0, 1, 2, ..., m−1} cannot be a field under modulo-m addition and multiplication.

2.7 First we note that the set of sums of the unit element contains the zero element 0. For any 1 ≤ ℓ < λ,

    ∑_{i=1}^{ℓ} 1 + ∑_{i=1}^{λ−ℓ} 1 = ∑_{i=1}^{λ} 1 = 0.

Hence every sum has an inverse with respect to the addition operation of the field GF(q). Since the sums are elements of GF(q), they satisfy the associative and commutative laws with respect to the addition of GF(q). Therefore the sums form a commutative group under the addition of GF(q).

Next we note that the sums contain the unit element 1 of GF(q). For each nonzero sum ∑_{i=1}^{ℓ} 1 with 1 ≤ ℓ < λ, we want to show that it has a multiplicative inverse with respect to the multiplication of GF(q). Since λ is prime, ℓ and λ are relatively prime and there exist two integers a and b such that

    a·ℓ + b·λ = 1,                                                  (1)

where a and λ are also relatively prime. Dividing a by λ, we obtain

    a = kλ + r  with 0 ≤ r < λ.                                     (2)

Since a and λ are relatively prime, r ≠ 0; hence 1 ≤ r < λ. Combining (1) and (2), we have

    ℓ·r = −(b + kℓ)·λ + 1.

Consider

    (∑_{i=1}^{ℓ} 1)·(∑_{i=1}^{r} 1) = ∑_{i=1}^{ℓ·r} 1
        = ∑_{i=1}^{−(b+kℓ)·λ} 1 + 1
        = (∑_{i=1}^{λ} 1)·(∑_{i=1}^{−(b+kℓ)} 1) + 1
        = 0 + 1 = 1.

Hence every nonzero sum has an inverse with respect to the multiplication of GF(q). Since the nonzero sums are elements of GF(q), they obey the associative and commutative laws with respect to the multiplication of GF(q), and the sums satisfy the distributive law. As a result, the sums form a field, a subfield of GF(q).

2.8 Consider the finite field GF(q). Let n be the maximum order of the nonzero elements of GF(q) and let α be an element of order n. It follows from Theorem 2.9 that n divides q − 1, i.e. q − 1 = k·n; thus n ≤ q − 1. Let β be any other nonzero element of GF(q) and let e be the order of β. Suppose that e does not divide n. Let (n, e) be the greatest common divisor of n and e. Then e/(n, e) and n are relatively prime. The element β^{(n,e)} has order e/(n, e), and the element α·β^{(n,e)} has order n·e/(n, e), which is greater than n. This contradicts the fact that n is the maximum order of nonzero elements in GF(q). Hence e must divide n, and the order of each nonzero element of GF(q) is a factor of n. This implies that each nonzero element of GF(q) is a root of the polynomial X^n − 1. Consequently, q − 1 ≤ n. Since n ≤ q − 1, we must have n = q − 1. Thus the maximum order of the nonzero elements of GF(q) is q − 1, and the elements of order q − 1 are primitive elements.

2.11 (a) Suppose that f(X) is irreducible but its reciprocal f*(X) is not. Then f*(X) = a(X)·b(X), where the degrees k and m of a(X) and b(X) are nonzero and k + m = n. Since the reciprocal of f*(X) is f(X),

    f(X) = X^n f*(1/X) = X^k a(1/X) · X^m b(1/X).

This says that f(X) is not irreducible, contradicting the hypothesis. Hence f*(X) must be irreducible. Similarly, if f*(X) is irreducible, f(X) is also irreducible. Consequently, f*(X) is irreducible if and only if f(X) is irreducible.

(b) Suppose that f(X) is primitive but f*(X) is not. Then there exists a positive integer k less than 2^n − 1 such that f*(X) divides X^k + 1. Let X^k + 1 = f*(X)·q(X). Taking the reciprocals of both sides of this equality, we have

    X^k + 1 = X^k f*(1/X) q(1/X) = X^n f*(1/X) · X^{k−n} q(1/X) = f(X) · X^{k−n} q(1/X).

This implies that f(X) divides X^k + 1 with k < 2^n − 1, contradicting the hypothesis that f(X) is primitive. Hence f*(X) must also be primitive. Similarly, if f*(X) is primitive, f(X) must also be primitive. Consequently, f*(X) is primitive if and only if f(X) is primitive.

2.15 We only need to show that β, β^2, ..., β^{2^{e−1}} are distinct. Suppose that β^{2^i} = β^{2^j} for 0 ≤ i, j < e with i < j. Then

    (β^{2^{j−i}−1})^{2^i} = 1.

Since the order of β is a factor of 2^m − 1, it must be odd. For (β^{2^{j−i}−1})^{2^i} = 1 to hold, we must then have β^{2^{j−i}−1} = 1. Since both i and j are less than e, j − i < e. This contradicts the fact that e is the smallest nonnegative integer such that β^{2^e − 1} = 1. Hence β^{2^i} ≠ β^{2^j} for 0 ≤ i < j < e.

2.16 Let n′ be the order of β^{2^i}. Then (β^{2^i})^{n′} = 1, hence

    (β^{n′})^{2^i} = 1.                                             (1)

Since the order n of β is odd, n and 2^i are relatively prime. From (1), we see that n divides n′, say

    n′ = k·n.                                                       (2)

Now consider (β^{2^i})^n = (β^n)^{2^i} = 1. This implies that n′ (the order of β^{2^i}) divides n, say

    n = ℓ·n′.                                                       (3)

From (2) and (3), we conclude that n′ = n.

2.20 Note that c·v = c·(0 + v) = c·0 + c·v. Adding −(c·v) to both sides of this equality, we have

    c·v + [−(c·v)] = c·0 + c·v + [−(c·v)]
    0 = c·0 + 0.

Since 0 is the additive identity of the vector space, we then have c·0 = 0.

2.21 Note that 0·v = 0. Then, for any c in F,

    (−c + c)·v = 0
    (−c)·v + c·v = 0.

Hence (−c)·v is the additive inverse of c·v, i.e.

    −(c·v) = (−c)·v.                                                (1)

Since c·0 = 0 (Problem 2.20),

    c·(−v + v) = 0
    c·(−v) + c·v = 0.

Hence c·(−v) is the additive inverse of c·v, i.e.

    −(c·v) = c·(−v).                                                (2)

From (1) and (2), we obtain −(c·v) = (−c)·v = c·(−v).

2.22 By Theorem 2.22, S is a subspace if (i) for any u and v in S, u + v is in S, and (ii) for any c in F and u in S, c·u is in S. The first condition is given; we only have to show that the second condition is implied by the first for F = GF(2). Let u be any element of S. It follows from the given condition that u + u = 0 is also in S. For any c in GF(2), c·u = 0 for c = 0 and c·u = u for c = 1. In either case, c·u is in S. Hence S is a subspace.

2.24 If the elements of GF(2^m) are represented by m-tuples over GF(2), the proof that GF(2^m) is a vector space over GF(2) is straightforward.

2.27 Let u and v be any two elements of S1 ∩ S2. Then u and v are both in S1 and both in S2. Since S1 and S2 are subspaces, u + v ∈ S1 and u + v ∈ S2; hence u + v is in S1 ∩ S2. Now let x be any vector in S1 ∩ S2. Then x ∈ S1 and x ∈ S2. Again, since S1 and S2 are subspaces, for any c in the field F, c·x is in S1 and also in S2; hence c·x is in the intersection S1 ∩ S2. It follows from Theorem 2.22 that S1 ∩ S2 is a subspace.
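The closure argument of Problems 2.3 and 2.5 is easy to check numerically. The sketch below (illustrative, not part of the manual) verifies that {1, ..., m−1} is closed under modulo-m multiplication exactly when m is prime:

```python
# Check of Problems 2.3 / 2.5: the set {1, ..., m-1} is closed under
# modulo-m multiplication (a prerequisite for being a group) iff m is prime.

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def closed_under_mod_mult(m):
    # Closure fails exactly when some product a*b is a multiple of m:
    # the modulo-m product is then 0, which lies outside {1, ..., m-1}.
    elems = range(1, m)
    return all((a * b) % m != 0 for a in elems for b in elems)

results = {m: closed_under_mod_mult(m) for m in range(2, 20)}
```

For composite m = a·b, the witnessing pair is exactly the factorization used in the proof of 2.3.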
Chapter 3

3.1 The generator and parity-check matrices are:

    G = [ 0 1 1 1 1 0 0 0 ]        H = [ 1 0 0 0 0 1 1 1 ]
        [ 1 1 1 0 0 1 0 0 ]            [ 0 1 0 0 1 1 1 0 ]
        [ 1 1 0 1 0 0 1 0 ]            [ 0 0 1 0 1 1 0 1 ]
        [ 1 0 1 1 0 0 0 1 ]            [ 0 0 0 1 1 0 1 1 ]

From the parity-check matrix we see that each column contains an odd number of ones and no two columns are alike. Thus no two columns sum to zero, and any three columns sum to a 4-tuple with an odd number of ones, hence not to zero. However, the first, second, third and sixth columns sum to zero. Therefore the minimum distance of the code is 4.

3.4 (a) The matrix H1 is an (n − k + 1) × (n + 1) matrix. First we note that the n − k rows of H are linearly independent; it is clear that the first n − k rows of H1 are also linearly independent. The last row of H1 has a "1" at its first position, but the other rows of H1 have a "0" there, so a linear combination including the last row of H1 can never yield the zero vector. Thus all the rows of H1 are linearly independent, and the row space of H1 has dimension n − k + 1. The dimension of its null space C1 is then

    dim(C1) = (n + 1) − (n − k + 1) = k.

Hence C1 is an (n + 1, k) linear code.

(b) Note that the last row of H1 is the all-one vector. The inner product of an odd-weight vector with the all-one vector is "1". Hence, for any odd-weight vector v, v·H1^T ≠ 0, and v cannot be a code word in C1. Therefore C1 consists of only even-weight code words.

(c) Let v be a code word in C. Then v·H^T = 0. Extend v by adding a digit v∞ to its left. This results in a vector of n + 1 digits,

    v1 = (v∞, v) = (v∞, v0, v1, ..., v_{n−1}).

For v1 to be a vector in C1, we must require v1·H1^T = 0. First note that the inner product of v1 with any of the first n − k rows of H1 is 0. The inner product of v1 with the last row of H1 is v∞ + v0 + v1 + ... + v_{n−1}. For this sum to be zero, we must set v∞ = 1 if v has odd weight and v∞ = 0 if v has even weight. Therefore every vector v1 formed in this way is a code word in C1, and there are 2^k such code words. Since the dimension of C1 is k, these 2^k code words are all the code words of C1.

3.5 Let Ce be the set of code words in C with even weight and Co the set of code words in C with odd weight. Let x be any odd-weight code vector from Co. Adding x to each vector in Co, we obtain a set Ce′ of even-weight code vectors. The number of vectors in Ce′ equals the number in Co, i.e. |Ce′| = |Co|, and Ce′ ⊆ Ce. Thus

    |Co| ≤ |Ce|.                                                    (1)

Now adding x to each vector in Ce, we obtain a set Co′ of odd-weight code words. The number of vectors in Co′ equals the number in Ce, and Co′ ⊆ Co. Hence

    |Ce| ≤ |Co|.                                                    (2)

From (1) and (2), we conclude that |Co| = |Ce|.

3.6 (a) From the given condition on G we see that, for any digit position, there is a row of G with a nonzero component at that position. This row is a code word in C. Hence, in the code array, each column contains at least one nonzero entry, so no column of the code array contains only zeros.

(b) Consider the ℓ-th column of the code array. From part (a), this column contains at least one "1". Let S0 be the set of code words with a "0" at the ℓ-th position and S1 the set of code words with a "1" at the ℓ-th position. Let x be a code word from S1. Adding x to each vector in S0, we obtain a set S1′ of code words with a "1" at the ℓ-th position. Clearly,

    |S1′| = |S0|                                                    (1)
    S1′ ⊆ S1.                                                       (2)

Adding x to each vector in S1, we obtain a set S0′ of code words with a "0" at the ℓ-th position. We see that

    |S0′| = |S1|                                                    (3)
    S0′ ⊆ S0.                                                       (4)

From (1) and (2), |S0| ≤ |S1|; from (3) and (4), |S1| ≤ |S0|. Hence |S0| = |S1|. This implies that the ℓ-th column of the code array consists of 2^{k−1} zeros and 2^{k−1} ones.

(c) Let S0 be the set of code words with a "0" at the ℓ-th position. From part (b), S0 consists of 2^{k−1} code words. Let x and y be any two code words in S0. The sum x + y also has a zero at the ℓ-th position and hence is a code word in S0. Therefore S0 is a subspace of the vector space of all n-tuples over GF(2). Since S0 is a subset of C, it is a subspace of C, and its dimension is k − 1.

3.7 Let x, y and z be any three n-tuples over GF(2). Note that

    d(x, y) = w(x + y),  d(y, z) = w(y + z),  d(x, z) = w(x + z).

It is easy to see that

    w(u) + w(v) ≥ w(u + v).                                         (1)

Let u = x + y and v = y + z. It follows from (1) that

    w(x + y) + w(y + z) ≥ w(x + y + y + z) = w(x + z).

From the above inequality, we have d(x, y) + d(y, z) ≥ d(x, z).

3.8 From the given condition, λ ≤ ⌊(d_min − 1)/2⌋. It follows from Theorem 3.5 that all error patterns of λ or fewer errors can be used as coset leaders in a standard array; hence they are correctable. In order to show that any error pattern of ℓ or fewer errors is detectable, we need to show that no error pattern x of ℓ or fewer errors can be in the same coset as an error pattern y of λ or fewer errors. Suppose x and y are in the same coset. Then x + y is a nonzero code word with weight

    w(x + y) ≤ w(x) + w(y) ≤ ℓ + λ < d_min.

This is impossible, since the minimum weight of the code is d_min. Hence x and y are in different cosets. As a result, when x occurs, it will not be mistaken for y. Therefore x is detectable.

3.11 In a systematic linear code, every nonzero code vector has at least one nonzero component in its information section (the rightmost k positions). Hence a nonzero vector that consists of only zeros in its rightmost k positions cannot be a code word in any systematic code in Γ. Now consider a nonzero vector v = (v0, v1, ..., v_{n−1}) with at least one nonzero component in its k rightmost positions, say v_{n−k+i} = 1 for some 0 ≤ i < k. Consider a k × n matrix G of the systematic form [P I_k] in which the i-th row is replaced by v: for j ≠ i the j-th row is (p_{j0}, p_{j1}, ..., p_{j,n−k−1}) followed by the j-th row of the identity matrix I_k. By elementary row operations, G can be put into systematic form G1, and the code generated by G1 contains v as a code word. Since each entry p_{jl} of the remaining k − 1 rows has two choices, 0 or 1, there are 2^{(k−1)(n−k)} such matrices G with v as the i-th row. Each can be put into systematic form G1, and each G1 generates a systematic code containing v as a code word. Hence v is contained in 2^{(k−1)(n−k)} codes in Γ.

3.13 The generator matrix of the code is

    G = [P1 I_k P2 I_k] = [G1 G2],

where G1 = [P1 I_k] and G2 = [P2 I_k]. Hence a nonzero code word in C is simply a cascade (v1, v2) of a nonzero code word v1 in C1 and a nonzero code word v2 in C2. Since w(v1) ≥ d1 and w(v2) ≥ d2, we have w[(v1, v2)] ≥ d1 + d2.

3.15 It follows from Theorem 3.5 that all the vectors of weight t or less can be used as coset leaders. There are

    C(n, 0) + C(n, 1) + ... + C(n, t)

such vectors. Since there are 2^{n−k} cosets, we must have

    2^{n−k} ≥ C(n, 0) + C(n, 1) + ... + C(n, t).

Taking logarithms of both sides of this inequality, we obtain the Hamming bound on t,

    n − k ≥ log2{1 + C(n, 1) + ... + C(n, t)}.

3.16 Arrange the 2^k code words as a 2^k × n array. From Problem 3.6(b), each column of this code array contains 2^{k−1} zeros and 2^{k−1} ones. Thus the total number of ones in the array is n·2^{k−1}. Note that each nonzero code word has weight at least d_min. Hence

    (2^k − 1)·d_min ≤ n·2^{k−1},

which implies that

    d_min ≤ n·2^{k−1} / (2^k − 1).

3.17 The number of nonzero vectors of length n and weight d − 1 or less is

    ∑_{i=1}^{d−1} C(n, i).

From the result of Problem 3.11, each of these vectors is contained in at most 2^{(k−1)(n−k)} linear systematic codes. Therefore at most

    M = 2^{(k−1)(n−k)} · ∑_{i=1}^{d−1} C(n, i)

linear systematic codes contain nonzero code words of weight d − 1 or less. The total number of linear systematic codes is N = 2^{k(n−k)}. If M < N, there exists at least one code with minimum weight at least d. M < N implies that

    2^{(k−1)(n−k)} · ∑_{i=1}^{d−1} C(n, i) < 2^{k(n−k)},

i.e.

    ∑_{i=1}^{d−1} C(n, i) < 2^{n−k}.

3.18 Let d_min be the smallest positive integer such that

    ∑_{i=1}^{d_min−1} C(n, i) < 2^{n−k} ≤ ∑_{i=1}^{d_min} C(n, i).

From Problem 3.17, the first inequality guarantees the existence of a systematic linear code with minimum distance at least d_min.
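The counting argument of Problem 3.5 can be checked by brute force. The sketch below uses a small hypothetical (5, 3) binary code (the generator matrix is ours, chosen so one row has odd weight) and counts odd- and even-weight codewords:

```python
from itertools import product

# Problem 3.5 by enumeration: a linear code containing at least one
# odd-weight codeword has equally many odd- and even-weight codewords.
# Illustrative (5, 3) generator matrix; the first row has odd weight 3.
G = [[1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1]]

def codewords(G):
    for msg in product([0, 1], repeat=len(G)):
        cw = [0] * len(G[0])
        for m, row in zip(msg, G):
            if m:
                cw = [(c + r) % 2 for c, r in zip(cw, row)]
        yield tuple(cw)

odd = [c for c in codewords(G) if sum(c) % 2]
even = [c for c in codewords(G) if sum(c) % 2 == 0]
```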
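Problem 3.6(b) can likewise be verified by enumeration. Using the (8, 4) generator matrix of Problem 3.1 (which has no all-zero column), every column of the 16 × 8 code array should contain exactly 2^{k−1} = 8 ones:

```python
from itertools import product

# Problem 3.6(b) by enumeration: with no zero column in G, each column of
# the 2^k x n code array holds exactly 2^(k-1) ones.
# G is the (8, 4) generator matrix from Problem 3.1.
G = [[0, 1, 1, 1, 1, 0, 0, 0],
     [1, 1, 1, 0, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 0, 1]]

ones_per_column = [0] * 8
for msg in product([0, 1], repeat=4):
    cw = [sum(m * row[j] for m, row in zip(msg, G)) % 2 for j in range(8)]
    for j, bit in enumerate(cw):
        ones_per_column[j] += bit
```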
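The bounds of Problems 3.15 and 3.18 can be evaluated directly. A small sketch (the helper names are ours):

```python
from math import comb

def hamming_t(n, k):
    # Largest t with 2^(n-k) >= sum_{i=0}^{t} C(n, i)   (Problem 3.15).
    budget, total, t = 2 ** (n - k), 1, 0
    while total + comb(n, t + 1) <= budget:
        t += 1
        total += comb(n, t)
    return t

def guaranteed_dmin(n, k):
    # Smallest d with sum_{i=1}^{d-1} C(n,i) < 2^(n-k) <= sum_{i=1}^{d} C(n,i)
    # (Problem 3.18); a systematic (n, k) code with at least this minimum
    # distance is guaranteed to exist.
    target, total, d = 2 ** (n - k), 0, 0
    while total < target:
        d += 1
        total += comb(n, d)
    return d
```

For the (23, 12) Golay parameters the Hamming bound gives t = 3 (the code is perfect), while the existence bound of Problem 3.18 only guarantees d_min = 4; the two bounds bracket what enumeration can promise.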
Chapter 4

4.1 A parity-check matrix for the (15, 11) Hamming code is

    H = [ 1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 ]
        [ 0 1 0 0 1 1 0 0 0 1 1 1 0 1 1 ]
        [ 0 0 1 0 0 1 1 0 1 0 1 1 1 0 1 ]
        [ 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 ]

Let r = (r0, r1, ..., r14) be the received vector. The syndrome of r is (s0, s1, s2, s3) with

    s0 = r0 + r4 + r7 + r8 + r10 + r12 + r13 + r14,
    s1 = r1 + r4 + r5 + r9 + r10 + r11 + r13 + r14,
    s2 = r2 + r5 + r6 + r8 + r10 + r11 + r12 + r14,
    s3 = r3 + r6 + r7 + r9 + r11 + r12 + r13 + r14.

Set up the decoding table as Table 4.1: the syndrome of a single error at position i is the i-th column of H.

Table 4.1: Decoding Table

    (s0 s1 s2 s3) : error digit
    1 0 0 0 : e0      0 1 1 0 : e5      1 1 1 0 : e10
    0 1 0 0 : e1      0 0 1 1 : e6      0 1 1 1 : e11
    0 0 1 0 : e2      1 0 0 1 : e7      1 0 1 1 : e12
    0 0 0 1 : e3      1 0 1 0 : e8      1 1 0 1 : e13
    1 1 0 0 : e4      0 1 0 1 : e9      1 1 1 1 : e14

From the decoding table, we find that

    e0 = s0·s̄1·s̄2·s̄3,  e1 = s̄0·s1·s̄2·s̄3,  e2 = s̄0·s̄1·s2·s̄3,  e3 = s̄0·s̄1·s̄2·s3,
    e4 = s0·s1·s̄2·s̄3,  e5 = s̄0·s1·s2·s̄3,  ...,  e13 = s0·s1·s̄2·s3,  e14 = s0·s1·s2·s3.

[Figure: decoding circuit — a 15-stage buffer register for r0, ..., r14; modulo-2 adders forming s0, s1, s2, s3; fifteen AND gates forming e0, ..., e14; and output adders ri + ei producing the decoded bits.]

4.3 From (4.3), the probability of an undetected error for a Hamming code is

    Pu(E) = 2^{−m}{1 + (2^m − 1)(1 − 2p)^{2^{m−1}}} − (1 − p)^{2^m−1}
          = 2^{−m} + (1 − 2^{−m})(1 − 2p)^{2^{m−1}} − (1 − p)^{2^m−1}.     (1)

Note that

    (1 − p)^2 ≥ 1 − 2p,                                             (2)

and

    1 − 2^{−m} ≥ 0.                                                 (3)

Using (2) and (3) in (1), we obtain the following inequality:

    Pu(E) ≤ 2^{−m} + (1 − 2^{−m})[(1 − p)^2]^{2^{m−1}} − (1 − p)^{2^m−1}
          = 2^{−m} + (1 − p)^{2^m−1}{(1 − 2^{−m})(1 − p) − 1}
          = 2^{−m} − (1 − p)^{2^m−1}{1 − (1 − p)(1 − 2^{−m})}.             (4)

Note that 0 ≤ 1 − p ≤ 1 and 0 ≤ 1 − 2^{−m} < 1. Clearly 0 ≤ (1 − p)·(1 − 2^{−m}) < 1, and

    1 − (1 − p)·(1 − 2^{−m}) ≥ 0.                                   (5)

Since (1 − p)^{2^m−1} ≥ 0, it follows from (4) and (5) that Pu(E) ≤ 2^{−m}.

4.6 The generator matrix for the first-order RM code RM(1, 3) is

    G = [ v0 ]   [ 1 1 1 1 1 1 1 1 ]
        [ v3 ] = [ 0 0 0 0 1 1 1 1 ]
        [ v2 ]   [ 0 0 1 1 0 0 1 1 ]
        [ v1 ]   [ 0 1 0 1 0 1 0 1 ]                                (1)

The code generated by this matrix is an (8, 4) code with minimum distance 4. Let (a0, a3, a2, a1) be the message to be encoded. Then its corresponding code word is

    b = (b0, b1, b2, b3, b4, b5, b6, b7) = a0·v0 + a3·v3 + a2·v2 + a1·v1.   (2)

From (1) and (2), we find that

    b0 = a0,            b1 = a0 + a1,
    b2 = a0 + a2,       b3 = a0 + a2 + a1,
    b4 = a0 + a3,       b5 = a0 + a3 + a1,
    b6 = a0 + a3 + a2,  b7 = a0 + a3 + a2 + a1.                     (3)

From (3), we find that

    a1 = b0 + b1 = b2 + b3 = b4 + b5 = b6 + b7,
    a2 = b0 + b2 = b1 + b3 = b4 + b6 = b5 + b7,
    a3 = b0 + b4 = b1 + b5 = b2 + b6 = b3 + b7,
    a0 = b0 = b1 + a1 = b2 + a2 = b3 + a2 + a1 = b4 + a3
       = b5 + a3 + a1 = b6 + a3 + a2 = b7 + a3 + a2 + a1.

Let r = (r0, r1, r2, r3, r4, r5, r6, r7) be the received vector. The check sums for decoding a1, a2 and a3 are:

    (a1)                  (a2)                  (a3)
    A1^(0) = r0 + r1      B1^(0) = r0 + r2      C1^(0) = r0 + r4
    A2^(0) = r2 + r3      B2^(0) = r1 + r3      C2^(0) = r1 + r5
    A3^(0) = r4 + r5      B3^(0) = r4 + r6      C3^(0) = r2 + r6
    A4^(0) = r6 + r7      B4^(0) = r5 + r7      C4^(0) = r3 + r7

After decoding a1, a2 and a3, we form

    r^(1) = r − a1·v1 − a2·v2 − a3·v3 = (r0^(1), r1^(1), ..., r7^(1)).

Then a0 is equal to the value taken by the majority of the bits of r^(1). For decoding (01000101), the four check sums for decoding a1, a2 and a3 are:

    (1) A1^(0) = 1, A2^(0) = 0, A3^(0) = 1, A4^(0) = 1;
    (2) B1^(0) = 0, B2^(0) = 1, B3^(0) = 0, B4^(0) = 0;
    (3) C1^(0) = 0, C2^(0) = 0, C3^(0) = 0, C4^(0) = 1.

Based on these check sums, a1, a2 and a3 are decoded as 1, 0 and 0, respectively. To decode a0, we form

    r^(1) = (01000101) − a1·v1 − a2·v2 − a3·v3 = (00010000).

From the bits of r^(1), we decode a0 as 0. Therefore the decoded message is (0001).

4.14 RM(1, 3) = {0, 1, X1, X2, X3, 1 + X1, 1 + X2, 1 + X3, X1 + X2, X1 + X3, X2 + X3, 1 + X1 + X2, 1 + X1 + X3, 1 + X2 + X3, X1 + X2 + X3, 1 + X1 + X2 + X3}.

4.15 The RM(r, m − 1) and RM(r − 1, m − 1) codes are given as follows (from (4.38)):

    RM(r, m − 1) = {v(f) : f(X1, X2, ..., X_{m−1}) ∈ P(r, m − 1)},
    RM(r − 1, m − 1) = {v(g) : g(X1, X2, ..., X_{m−1}) ∈ P(r − 1, m − 1)}.

Then

    RM(r, m) = {v(h) : h = f(X1, ..., X_{m−1}) + X_m·g(X1, ..., X_{m−1})
                with f ∈ P(r, m − 1) and g ∈ P(r − 1, m − 1)}.
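The syndrome decoder of Problem 4.1 is straightforward to simulate. The sketch below builds the decoding table directly from the columns of H, mirroring the AND-gate logic of Table 4.1:

```python
# Syndrome decoder for the (15, 11) Hamming code of Problem 4.1,
# using the parity-check matrix H given above.
H = [[1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1],
     [0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1],
     [0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1]]

def syndrome(r):
    return tuple(sum(h * x for h, x in zip(row, r)) % 2 for row in H)

# Decoding table: the syndrome of a single error at position i is the
# i-th column of H, exactly as Table 4.1 tabulates.
table = {tuple(row[i] for row in H): i for i in range(15)}

def correct(r):
    s = syndrome(r)
    if any(s):
        r = list(r)
        r[table[s]] ^= 1   # flip the single erroneous digit
    return r

# Example: corrupt the all-zero codeword at position 7 and decode.
r = [0] * 15
r[7] = 1
decoded = correct(r)
```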
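The Reed (majority-logic) decoding steps of Problem 4.6 can be traced in a few lines. The sketch below reproduces the worked example r = (01000101):

```python
# Majority-logic decoder for RM(1,3) following Problem 4.6.
v3 = [0, 0, 0, 0, 1, 1, 1, 1]
v2 = [0, 0, 1, 1, 0, 0, 1, 1]
v1 = [0, 1, 0, 1, 0, 1, 0, 1]

def majority(bits):
    return int(sum(bits) > len(bits) // 2)

def decode(r):
    # Check sums A, B, C (the pairings from the solution above).
    a1 = majority([r[0] ^ r[1], r[2] ^ r[3], r[4] ^ r[5], r[6] ^ r[7]])
    a2 = majority([r[0] ^ r[2], r[1] ^ r[3], r[4] ^ r[6], r[5] ^ r[7]])
    a3 = majority([r[0] ^ r[4], r[1] ^ r[5], r[2] ^ r[6], r[3] ^ r[7]])
    # r(1) = r - a1*v1 - a2*v2 - a3*v3; a0 is the majority bit of r(1).
    r1 = [b ^ (a1 & x) ^ (a2 & y) ^ (a3 & z)
          for b, x, y, z in zip(r, v1, v2, v3)]
    return (majority(r1), a3, a2, a1)   # message order (a0, a3, a2, a1)

msg = decode([0, 1, 0, 0, 0, 1, 0, 1])
```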
Chapter 5

5.6 (a) A polynomial over GF(2) with an odd number of terms is not divisible by X + 1; hence it cannot be divisible by g(X) if g(X) has X + 1 as a factor. Therefore the code contains no code vectors of odd weight.

(b) The polynomial X^n + 1 can be factored as follows:

    X^n + 1 = (X + 1)(X^{n−1} + X^{n−2} + ... + X + 1).

Since g(X) divides X^n + 1 and since g(X) does not have X + 1 as a factor, g(X) must divide the polynomial X^{n−1} + X^{n−2} + ... + X + 1. Therefore 1 + X + ... + X^{n−2} + X^{n−1} is a code polynomial; the corresponding code vector consists of all 1's.

(c) First, we note that no X^i is divisible by g(X); hence there is no code word of weight one. Now suppose that there is a code word v(X) of weight 2. This code word must be of the form

    v(X) = X^i + X^j = X^i(1 + X^{j−i})  with 0 ≤ i < j < n.

Since v(X) is a code word, it must be divisible by g(X). Since g(X) and X^i are relatively prime, g(X) must divide X^{j−i} + 1. However, j − i < n. This contradicts the fact that n is the smallest integer such that g(X) divides X^n + 1. Hence our hypothesis that there exists a code vector of weight 2 is invalid, and the code has minimum weight at least 3.

5.7 (a) Note that X^n + 1 = g(X)h(X). Then

    X^n(X^{−n} + 1) = X^n g(X^{−1}) h(X^{−1}),
    1 + X^n = [X^{n−k} g(X^{−1})]·[X^k h(X^{−1})] = g*(X)·h*(X),

where h*(X) is the reciprocal of h(X). We see that g*(X) is a factor of X^n + 1. Therefore g*(X) generates an (n, k) cyclic code.

(b) Let C and C* be the two (n, k) cyclic codes generated by g(X) and g*(X), respectively. Let v(X) = v0 + v1·X + ... + v_{n−1}·X^{n−1} be a code polynomial in C. Then v(X) must be a multiple of g(X), i.e. v(X) = a(X)g(X). Replacing X by X^{−1} and multiplying both sides of this equality by X^{n−1}, we obtain

    X^{n−1} v(X^{−1}) = [X^{k−1} a(X^{−1})]·[X^{n−k} g(X^{−1})].

Note that X^{n−1}v(X^{−1}), X^{k−1}a(X^{−1}) and X^{n−k}g(X^{−1}) are simply the reciprocals of v(X), a(X) and g(X), respectively. Thus,

    v*(X) = a*(X)·g*(X).                                            (1)

From (1), we see that the reciprocal v*(X) of a code polynomial in C is a code polynomial in C*. Similarly, the reciprocal of a code polynomial in C* is a code polynomial in C. Since v*(X) and v(X) have the same weight, C* and C have the same weight distribution.

5.8 Let C1 be the cyclic code generated by (X + 1)g(X). We know that C1 is a subcode of C and that C1 consists of exactly the even-weight code vectors of C. Thus the weight enumerator A1(z) of C1 consists of only the even-power terms of A(z) = ∑_{i=0}^{n} Ai·z^i. Hence

    A1(z) = ∑_{j=0}^{⌊n/2⌋} A_{2j}·z^{2j}.                          (1)

Consider the sum

    A(z) + A(−z) = ∑_{i=0}^{n} Ai·z^i + ∑_{i=0}^{n} Ai·(−z)^i = ∑_{i=0}^{n} Ai·[z^i + (−z)^i].

We see that z^i + (−z)^i = 0 if i is odd and z^i + (−z)^i = 2z^i if i is even. Hence

    A(z) + A(−z) = ∑_{j=0}^{⌊n/2⌋} 2·A_{2j}·z^{2j}.                 (2)

From (1) and (2), we obtain A1(z) = ½[A(z) + A(−z)].

5.10 Let e1(X) = X^i + X^{i+1} and e2(X) = X^j + X^{j+1} be two different double-adjacent-error patterns with i < j. Suppose that e1(X) and e2(X) are in the same coset. Then e1(X) + e2(X) would be a code polynomial divisible by g(X) = (X + 1)p(X). Note that

    e1(X) + e2(X) = X^i(X + 1) + X^j(X + 1) = (X + 1)·X^i·(X^{j−i} + 1).

Since g(X) divides e1(X) + e2(X), p(X) must divide X^i(X^{j−i} + 1). However, p(X) and X^i are relatively prime. Therefore p(X) must divide X^{j−i} + 1. This is not possible, since j − i < 2^m − 1 and p(X) is a primitive polynomial of degree m (the smallest integer n such that p(X) divides X^n + 1 is 2^m − 1). Thus e1(X) and e2(X) cannot be in the same coset.

5.12 Note that e^{(i)}(X) is the remainder resulting from dividing X^i·e(X) by X^n + 1. Thus

    X^i·e(X) = a(X)(X^n + 1) + e^{(i)}(X).                          (1)

Note that g(X) divides X^n + 1, and that g(X) and X^i are relatively prime. From (1), we see that if e(X) is not divisible by g(X), then e^{(i)}(X) is not divisible by g(X). Therefore, if e(X) is detectable, e^{(i)}(X) is also detectable.

5.14 Suppose that ℓ does not divide n. Then

    n = k·ℓ + r,  0 < r < ℓ.

Note that

    v^{(n)}(X) = v^{(k·ℓ+r)}(X) = v(X).                             (1)

Since v^{(ℓ)}(X) = v(X),

    v^{(k·ℓ)}(X) = v(X).                                            (2)

From (1) and (2), we have v^{(r)}(X) = v(X). This is not possible, since 0 < r < ℓ and ℓ is the smallest positive integer such that v^{(ℓ)}(X) = v(X). Therefore our hypothesis that ℓ does not divide n is invalid; hence ℓ must divide n.

5.17 Let n be the order of β. Then β^n = 1, and β is a root of X^n + 1. It follows from Theorem 2.14 that φ(X) is a factor of X^n + 1. Hence φ(X) generates a cyclic code of length n.

5.18 Let n1 be the order of β1 and n2 the order of β2, and let n be the least common multiple of n1 and n2, i.e. n = LCM(n1, n2). Clearly β1 and β2 are roots of X^n + 1, so φ1(X) and φ2(X) are factors of X^n + 1. Since φ1(X) and φ2(X) are relatively prime, g(X) = φ1(X)·φ2(X) divides X^n + 1. Hence g(X) = φ1(X)·φ2(X) generates a cyclic code of length n = LCM(n1, n2).

5.19 Since every code polynomial v(X) is a multiple of the generator polynomial p(X), every root of p(X) is a root of v(X). Thus v(X) has α and its conjugates as roots. Conversely, suppose v(X) is a binary polynomial of degree 2^m − 2 or less that has α as a root. It follows from Theorem 2.14 that v(X) is divisible by the minimal polynomial p(X) of α. Hence v(X) is a code polynomial in the Hamming code generated by p(X).

5.20 Let v(X) be a code polynomial in both C1 and C2. Then v(X) is divisible by both g1(X) and g2(X), hence by their least common multiple, i.e. v(X) is a multiple of g(X) = LCM(g1(X), g2(X)). Conversely, any polynomial of degree n − 1 or less that is a multiple of g(X) is divisible by both g1(X) and g2(X), and hence is in both C1 and C2. Also, g(X) is a factor of X^n + 1. Thus the code polynomials common to C1 and C2 form a cyclic code of length n whose generator polynomial is g(X) = LCM(g1(X), g2(X)). The code C3 generated by g(X) has minimum distance d3 ≥ max(d1, d2).

5.21 See Problem 4.3.

5.22 (a) First, we note that X^{2^m−1} + 1 = p*(X)·h*(X). Since the roots of X^{2^m−1} + 1 are the 2^m − 1 nonzero elements of GF(2^m), which are all distinct, p*(X) and h*(X) are relatively prime. Since every code polynomial v(X) in Cd is a polynomial of degree 2^m − 2 or less, v(X) cannot be divisible by p(X) (otherwise v(X) would be divisible by p*(X)h*(X) = X^{2^m−1} + 1 and would have degree at least 2^m − 1). Suppose that v^{(i)}(X) = v(X) for some 0 < i < 2^m − 1. It follows from (5.1) that

    X^i·v(X) = a(X)(X^{2^m−1} + 1) + v^{(i)}(X) = a(X)(X^{2^m−1} + 1) + v(X).

Rearranging this equality, we have

    (X^i + 1)·v(X) = a(X)(X^{2^m−1} + 1).

Since p(X) divides X^{2^m−1} + 1, it must divide (X^i + 1)v(X). However, p(X) and v(X) are relatively prime. Hence p(X) must divide X^i + 1. This is not possible, since 0 < i < 2^m − 1 and p(X) is a primitive polynomial (the smallest positive integer n such that p(X) divides X^n + 1 is n = 2^m − 1). Therefore our hypothesis is invalid, and v^{(i)}(X) ≠ v(X) for 0 < i < 2^m − 1.

(b) From part (a), a code polynomial v(X) and its 2^m − 2 cyclic shifts form all the 2^m − 1 nonzero code polynomials of Cd. These 2^m − 1 nonzero code polynomials have the same weight, say w. The total number of nonzero components in the code words of Cd is thus w·(2^m − 1). Now arrange the 2^m code words of Cd as a 2^m × (2^m − 1) array. It follows from Problem 3.6(b) that every column of this array contains exactly 2^{m−1} nonzero components, so the total number of nonzero components in the array is 2^{m−1}·(2^m − 1). Equating w·(2^m − 1) to 2^{m−1}·(2^m − 1), we have w = 2^{m−1}.

5.25 (a) Any error pattern of double errors must be of the form e(X) = X^i + X^j with j > i. If the two errors are not confined to n − k = 10 consecutive positions (including end-around), we must have

    j − i + 1 > 10  and  15 − (j − i) + 1 > 10.

Simplifying these inequalities, we obtain j − i > 9 and j − i < 6, which is impossible. Therefore any double errors are confined to 10 consecutive positions and can be trapped.

(b) An error pattern of triple errors must be of the form e(X) = X^i + X^j + X^k with 0 ≤ i < j < k ≤ 14. If these three errors cannot be trapped, we must have

    k − i > 9,  j − i < 6,  k − j < 6.

If we fix i, the only solutions for j and k are j = 5 + i and k = 10 + i. Hence, for three errors not confined to 10 consecutive positions, the error pattern must be of the form

    e(X) = X^i + X^{5+i} + X^{10+i}  for 0 ≤ i < 5.

Therefore only 5 error patterns of triple errors cannot be trapped.

5.26 (b) Consider a double-error pattern e(X) = X^i + X^j with 0 ≤ i < j < 23. If these two errors are not confined to 11 consecutive positions, we must have

    j − i + 1 > 11  and  23 − (j − i − 1) > 11.

From these inequalities, we obtain 10 < j − i < 13. For a fixed i, j has two possible solutions, j = 11 + i and j = 12 + i. Hence a double-error pattern that cannot be trapped must have one of the two forms

    e1(X) = X^i + X^{11+i},  e2(X) = X^i + X^{12+i}.

There are a total of 23 double-error patterns that cannot be trapped.

5.27 The coset leader weight distribution is

    α0 = 1, α1 = C(23, 1), α2 = C(23, 2), α3 = C(23, 3), α4 = α5 = ... = α23 = 0.

The probability of correct decoding is

    P(C) = (1 − p)^23 + C(23, 1)·p·(1 − p)^22 + C(23, 2)·p^2·(1 − p)^21 + C(23, 3)·p^3·(1 − p)^20.

The probability of a decoding error is P(E) = 1 − P(C).

5.29 (a) Consider two single-error patterns, e1(X) = X^i and e2(X) = X^j with j > i. Suppose that these two error patterns are in the same coset. Then X^i + X^j would be divisible by g(X) = (X^3 + 1)p(X). This implies that X^{j−i} + 1 would be divisible by p(X), which is impossible since j − i < n and n is the smallest positive integer such that p(X) divides X^n + 1. Therefore no two single-error patterns can be in the same coset, and all single-error patterns can be used as coset leaders.

Now consider a single-error pattern e1(X) = X^i and a double-adjacent-error pattern e2(X) = X^j + X^{j+1}. Suppose that e1(X) and e2(X) are in the same coset. Then X^i + X^j + X^{j+1} would be divisible by g(X) = (X^3 + 1)p(X). This is not possible, since g(X) has X + 1 as a factor but X^i + X^j + X^{j+1}, having an odd number of terms, does not. Hence no single-error pattern and double-adjacent-error pattern can be in the same coset.

Consider two double-adjacent-error patterns X^i + X^{i+1} and X^j + X^{j+1} with j > i. Suppose that these two error patterns are in the same coset. Then X^i + X^{i+1} + X^j + X^{j+1} would be divisible by (X^3 + 1)p(X). Note that

    X^i + X^{i+1} + X^j + X^{j+1} = X^i(X + 1)(X^{j−i} + 1).

For X^i(X + 1)(X^{j−i} + 1) to be divisible by p(X), X^{j−i} + 1 must be divisible by p(X). This is again not possible, since j − i < n. Hence no two double-adjacent-error patterns can be in the same coset.

Consider a single-error pattern X^i and a triple-adjacent-error pattern X^j + X^{j+1} + X^{j+2}. If these two error patterns were in the same coset, then X^i + X^j + X^{j+1} + X^{j+2} would be divisible by (X^3 + 1)p(X). But X^i + X^j + X^{j+1} + X^{j+2} = X^i + X^j(1 + X + X^2) is not divisible by X^3 + 1 = (X + 1)(X^2 + X + 1): since X^2 + X + 1 divides X^j(1 + X + X^2) but not X^i, it does not divide the sum. Therefore no single-error pattern and triple-adjacent-error pattern can be in the same coset.

Now consider a double-adjacent-error pattern X^i + X^{i+1} and a triple-adjacent-error pattern X^j + X^{j+1} + X^{j+2}. Suppose that these two error patterns are in the same coset. Then

    X^i + X^{i+1} + X^j + X^{j+1} + X^{j+2} = X^i(X + 1) + X^j(X^2 + X + 1)

would be divisible by (X^3 + 1)p(X). This is not possible, since X^i + X^{i+1} + X^j + X^{j+1} + X^{j+2} does not have X + 1 as a factor while X^3 + 1 does. Hence a double-adjacent-error pattern and a triple-adjacent-error pattern cannot be in the same coset.

Consider two triple-adjacent-error patterns, X^i + X^{i+1} + X^{i+2} and X^j + X^{j+1} + X^{j+2}. If they were in the same coset, their sum X^i(X^2 + X + 1)(1 + X^{j−i}) would be divisible by (X^3 + 1)p(X), hence by p(X). The degree of p(X) is 3 or greater, so p(X) and X^2 + X + 1 are relatively prime. As a result, p(X) would have to divide X^{j−i} + 1. Again this is not possible. Hence no two triple-adjacent-error patterns can be in the same coset.

Summarizing the above results, we see that all the single-, double-adjacent-, and triple-adjacent-error patterns can be used as coset leaders.
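Problem 5.12's claim — that every cyclic shift of a detectable error pattern remains detectable — can be spot-checked with polynomial arithmetic over GF(2), representing polynomials as Python integers (bit i = coefficient of X^i). The choice of the (7, 4) Hamming generator g(X) = 1 + X + X^3 here is ours, for illustration:

```python
# Problem 5.12 spot-check: if g(X) does not divide e(X), it does not
# divide any cyclic shift e^(i)(X) of e(X) either.

def polymod(a, b):
    """Remainder of a(X) divided by b(X) over GF(2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def cyclic_shift(e, i, n):
    """e^(i)(X) = X^i * e(X) mod (X^n + 1)."""
    return polymod(e << i, (1 << n) | 1)

g, n = 0b1011, 7     # g(X) = 1 + X + X^3, code length 7
e = 0b11             # e(X) = 1 + X, not divisible by g(X), so detectable
detectable = [polymod(cyclic_shift(e, i, n), g) != 0 for i in range(n)]
```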
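The conclusion of Problem 5.22(b), w = 2^{m−1}, can be checked on the simplex construction of the maximal-length code (generator columns = all nonzero m-tuples); treating this as equivalent to the cyclic code Cd is a standard fact we assume here, shown for m = 4:

```python
from itertools import product

# Problem 5.22(b) check: every nonzero codeword of the maximal-length
# (simplex) code of length 2^m - 1 has weight exactly 2^(m-1).
m = 4
cols = [c for c in product([0, 1], repeat=m) if any(c)]   # nonzero m-tuples

def weight(msg):
    # Weight of the codeword obtained by dotting msg with every column.
    return sum(sum(a * x for a, x in zip(msg, c)) % 2 for c in cols)

weights = {weight(msg) for msg in product([0, 1], repeat=m) if any(msg)}
```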
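The error-trapping counts of Problem 5.25 can be confirmed by brute force over all triple-error patterns of length 15 with n − k = 10:

```python
from itertools import combinations

# Problem 5.25 by brute force: count the triple-error patterns of length 15
# that no cyclic shift confines to n - k = 10 consecutive positions.
n, span = 15, 10

def trappable(positions):
    # Trappable iff some cyclic shift moves all errors into 0 .. span-1.
    return any(all((p + s) % n < span for p in positions)
               for s in range(n))

untrapped = [t for t in combinations(range(n), 3) if not trappable(t)]
```

The survivors should be exactly the five patterns i, 5 + i, 10 + i for 0 ≤ i < 5 derived above, and every double-error pattern should be trappable.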
Chapter 6

6.1 (a) The elements β, β^2, and β^4 have the same minimal polynomial φ1(X). From Table 2.9, we find that

φ1(X) = 1 + X^3 + X^4.

The minimal polynomial of β^3 = α^21 = α^6 is

φ3(X) = 1 + X + X^2 + X^3 + X^4.

Thus

g0(X) = LCM(φ1(X), φ3(X)) = (1 + X^3 + X^4)(1 + X + X^2 + X^3 + X^4) = 1 + X + X^2 + X^4 + X^8.

(b)

H = [ 1  β    β^2   β^3   β^4    β^5    β^6    β^7    β^8    β^9    β^10   β^11   β^12   β^13   β^14
      1  β^3  β^6   β^9   β^12   β^15   β^18   β^21   β^24   β^27   β^30   β^33   β^36   β^39   β^42 ].

In binary form,

H = [ 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1
      0 1 0 0 0 1 1 1 1 0 1 0 1 1 0
      0 0 0 1 1 1 1 0 1 0 1 1 0 0 1
      0 1 1 1 1 0 1 0 1 1 0 0 1 0 0
      1 0 1 0 0 1 0 1 0 0 1 0 1 0 0
      0 0 1 0 1 0 0 1 0 1 0 0 1 0 1
      0 1 1 0 0 0 1 1 0 0 0 1 1 0 0
      0 1 1 1 1 0 1 1 1 1 0 1 1 1 1 ].
(c) The reciprocal of g(X) in Example 6.1 is

X^8 g(X^−1) = X^8 (1 + X^−4 + X^−6 + X^−7 + X^−8) = X^8 + X^4 + X^2 + X + 1 = g0(X).

6.2 The table for GF(2^5) with p(X) = 1 + X^2 + X^5 is given in Table P.6.2(a). The minimal polynomials of the elements of GF(2^5) are given in Table P.6.2(b). The generator polynomials of all the binary BCH codes of length 31 are given in Table P.6.2(c).

Table P.6.2(a) Galois Field GF(2^5) with p(α) = 1 + α^2 + α^5 = 0

0                  (0 0 0 0 0)
1                  (1 0 0 0 0)
α                  (0 1 0 0 0)
α^2                (0 0 1 0 0)
α^3                (0 0 0 1 0)
α^4                (0 0 0 0 1)
α^5 = 1 + α^2      (1 0 1 0 0)
Table P.6.2(a) Continued

α^6  = α + α^3                       (0 1 0 1 0)
α^7  = α^2 + α^4                     (0 0 1 0 1)
α^8  = 1 + α^2 + α^3                 (1 0 1 1 0)
α^9  = α + α^3 + α^4                 (0 1 0 1 1)
α^10 = 1 + α^4                       (1 0 0 0 1)
α^11 = 1 + α + α^2                   (1 1 1 0 0)
α^12 = α + α^2 + α^3                 (0 1 1 1 0)
α^13 = α^2 + α^3 + α^4               (0 0 1 1 1)
α^14 = 1 + α^2 + α^3 + α^4           (1 0 1 1 1)
α^15 = 1 + α + α^2 + α^3 + α^4       (1 1 1 1 1)
α^16 = 1 + α + α^3 + α^4             (1 1 0 1 1)
α^17 = 1 + α + α^4                   (1 1 0 0 1)
α^18 = 1 + α                         (1 1 0 0 0)
α^19 = α + α^2                       (0 1 1 0 0)
α^20 = α^2 + α^3                     (0 0 1 1 0)
α^21 = α^3 + α^4                     (0 0 0 1 1)
α^22 = 1 + α^2 + α^4                 (1 0 1 0 1)
α^23 = 1 + α + α^2 + α^3             (1 1 1 1 0)
α^24 = α + α^2 + α^3 + α^4           (0 1 1 1 1)
α^25 = 1 + α^3 + α^4                 (1 0 0 1 1)
α^26 = 1 + α + α^2 + α^4             (1 1 1 0 1)
α^27 = 1 + α + α^3                   (1 1 0 1 0)
α^28 = α + α^2 + α^4                 (0 1 1 0 1)
α^29 = 1 + α^3                       (1 0 0 1 0)
α^30 = α + α^4                       (0 1 0 0 1)
Table P.6.2(b)

Conjugate Roots                        φi(X)
1                                      1 + X
α, α^2, α^4, α^8, α^16                 1 + X^2 + X^5
α^3, α^6, α^12, α^24, α^17             1 + X^2 + X^3 + X^4 + X^5
α^5, α^10, α^20, α^9, α^18             1 + X + X^2 + X^4 + X^5
α^7, α^14, α^28, α^25, α^19            1 + X + X^2 + X^3 + X^5
α^11, α^22, α^13, α^26, α^21           1 + X + X^3 + X^4 + X^5
α^15, α^30, α^29, α^27, α^23           1 + X^3 + X^5

Table P.6.2(c)

n    k    t    g(X)
31   26   1    g1(X) = 1 + X^2 + X^5
     21   2    g2(X) = φ1(X)φ3(X)
     16   3    g3(X) = φ1(X)φ3(X)φ5(X)
     11   5    g4(X) = φ1(X)φ3(X)φ5(X)φ7(X)
      6   7    g5(X) = φ1(X)φ3(X)φ5(X)φ7(X)φ11(X)

6.3 (a) Use the table for GF(2^5) constructed in Problem 6.2. The syndrome components of r1(X) = X^7 + X^30 are:

S1 = r1(α)   = α^7 + α^30  = α^19
S2 = r1(α^2) = α^14 + α^29 = α^7
S3 = r1(α^3) = α^21 + α^28 = α^12
S4 = r1(α^4) = α^28 + α^27 = α^14

The iterative procedure for finding the error-location polynomial is shown in Table P.6.3(a).

Table P.6.3(a)

µ     σ^(µ)(X)                    dµ      ℓµ    2µ − ℓµ
−1/2  1                           1       0     −1
0     1                           α^19    0     0
1     1 + α^19 X                  α^25    1     1   (ρ = −1/2)
2     1 + α^19 X + α^6 X^2        –       2     2   (ρ = 0)

Hence σ(X) = 1 + α^19 X + α^6 X^2. Substituting the nonzero elements of GF(2^5) into σ(X), we find that σ(X) has α and α^24 as roots. Hence the error-location numbers are α^−1 = α^30 and α^−24 = α^7. As a result, the error polynomial is

e(X) = X^7 + X^30.

The decoder decodes r1(X) into r1(X) + e(X) = 0.

(b) Now we consider the decoding of r2(X) = 1 + X^17 + X^28. The syndrome components of r2(X) are:

S1 = r2(α) = α^2,  S2 = S1^2 = α^4,  S4 = S2^2 = α^8,  S3 = r2(α^3) = α^21.

The error-location polynomial σ(X) is found by filling Table P.6.3(b):
Table P.6.3(b)

µ     σ^(µ)(X)                    dµ      ℓµ    2µ − ℓµ
−1/2  1                           1       0     −1
0     1                           α^2     0     0
1     1 + α^2 X                   α^30    1     1   (ρ = −1/2)
2     1 + α^2 X + α^28 X^2        –       2     2   (ρ = 0)

The estimated error-location polynomial is

σ(X) = 1 + α^2 X + α^28 X^2.

This polynomial does not have roots in GF(2^5); hence r2(X) cannot be decoded and must contain more than two errors.

6.4 Let n = (2t + 1)λ. Then

X^n + 1 = (X^λ + 1)(X^(2tλ) + X^((2t−1)λ) + · · · + X^λ + 1).

The roots of X^λ + 1 are 1, α^(2t+1), α^(2(2t+1)), . . . , α^((λ−1)(2t+1)). Hence, α, α^2, . . . , α^(2t) are roots of the polynomial

u(X) = 1 + X^λ + X^(2λ) + · · · + X^((2t−1)λ) + X^(2tλ).

This implies that u(X) is a code polynomial of weight 2t + 1. Thus the code has minimum distance exactly 2t + 1.

6.5 Consider the Galois field GF(2^(2m)). Note that 2^(2m) − 1 = (2^m − 1)(2^m + 1). Let α be a primitive element in GF(2^(2m)). Then β = α^(2^m − 1) is an element of order 2^m + 1. The elements 1, β, β^2, β^3, β^4, . . . , β^(2^m) are all the roots of X^(2^m + 1) + 1. Let ψi(X) be the minimal
  • polynomial of βi. Then a t-error-correcting non-primitive BCH code of length n = 2m + 1 is generated by g(X) = LCM {ψ1(X), ψ2(X), · · · , ψ2t(X)} . 6.10 Use Tables 6.2 and 6.3. The minimal polynomial for β2 = α6 and β4 = α12 is ψ2(X) = 1 +X +X 2 +X4 +X6. The minimal polynomial for β3 = α9 is ψ3(X) = 1 +X 2 +X3. The minimal polynomial for β5 = α15 is ψ5(X) = 1 +X 2 +X4 +X5 +X6. Hence g(X) = ψ2(X)ψ3(X)ψ5(X) The orders of β2, β3 and β5 are 21,7 and 21 respectively. Thus the length is n = LCM(21, 7, 21), and the code is a double-error-correcting (21,6) BCH code. 6.11 (a) Let u(X) be a code polynomial and u∗(X) = Xn−1u(X−1) be the reciprocal of u(X). A cyclic code is said to be reversible if u(X) is a code polynomial then u∗(X) is also a code polynomial. Consider u∗(βi) = β(n−1)iu(β−i) Since u(β−i) = 0 for−t ≤ i ≤ t, we see that u∗(βi) has β−t, · · · , β−1, β0, β1, · · · , βt as roots 36
and is a multiple of the generator polynomial g(X). Therefore u*(X) is a code polynomial.

(b) If t is odd, t + 1 is even. Hence β^(t+1) is the conjugate of β^((t+1)/2), and β^(−(t+1)) is the conjugate of β^(−(t+1)/2). Thus β^(t+1) and β^(−(t+1)) are also roots of the generator polynomial. It follows from the BCH bound that the code has minimum distance at least 2t + 4 (since the generator polynomial has 2t + 3 consecutive powers of β as roots).
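The root searches used in Problem 6.3 (substituting every nonzero field element into σ(X)) can be checked by brute force. A Chien-search-style sketch over GF(2^5), built from p(X) = 1 + X^2 + X^5 as in Problem 6.2 (an illustration, not the book's procedure):

```python
# Brute-force root search for the error-location polynomials of Problem 6.3.

def build_gf32():
    exp, log = [0] * 62, [0] * 32
    x = 1
    for i in range(31):
        exp[i] = exp[i + 31] = x
        log[x] = i
        x <<= 1
        if x & 32:
            x ^= 0b100101          # alpha^5 = 1 + alpha^2
    return exp, log

EXP, LOG = build_gf32()

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def roots(coeffs):
    """Exponents j with sigma(alpha^j) = 0; coeffs[i] multiplies X^i."""
    out = []
    for j in range(31):
        s, xp = 0, 1
        for c in coeffs:
            s ^= gf_mul(c, xp)
            xp = gf_mul(xp, EXP[j])
        if s == 0:
            out.append(j)
    return out

# 6.3(a): sigma(X) = 1 + alpha^19 X + alpha^6 X^2 has roots alpha and alpha^24,
# so the error-location numbers are their inverses alpha^30 and alpha^7.
assert roots([1, EXP[19], EXP[6]]) == [1, 24]
# 6.3(b): sigma(X) = 1 + alpha^2 X + alpha^28 X^2 has no roots in GF(2^5).
assert roots([1, EXP[2], EXP[28]]) == []
```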
  • Chapter 7 7.2 The generator polynomial of the double-error-correcting RS code over GF(25) is g(X) = (X + α)(X + α2)(X + α3)(X + α4) = α10 + α29X + α19X2 + α24X3 +X4. The generator polynomial of the triple-error-correcting RS code over GF(25) is g(X) = (X + α)(X + α2)(X + α3)(X + α4)(X + α5)(X + α6) = α21 + α24X + α16X2 + α24X3 + α9X4 + α10X5 +X6. 7.4 The syndrome components of the received polynomial are: S1 = r(α) = α 7 + α2 + α = α13, S2 = r(α 2) = α10 + α10 + α14 = α14, S3 = r(α 3) = α13 + α3 + α12 = α9, S4 = r(α 4) = α + α11 + α10 = α7, S5 = r(α 5) = α4 + α4 + α8 = α8, S6 = r(α 6) = α7 + α12 + α6 = α3. The iterative procedure for finding the error location polynomial is shown in Table P.7.4. The error location polynomial is σ(X) = 1 + α9X3. The roots of this polynomial are α2, α7, and α12. Hence the error location numbers are α3, α8, and α13. From the syndrome components of the received polynomial and the coefficients of the error 1
  • Table P.7.4 µ σµ(X) dµ lµ µ− lµ −1 1 1 0 −1 0 1 α13 0 0 1 1 + α13X α10 1 0 (take ρ = −1) 2 1 + αX α7 1 1 (take ρ = 0) 3 1 + α13X + α10X2 α9 2 1 (take ρ = 1) 4 1 + α14X + α12X2 α8 2 2 (take ρ = 2) 5 1 + α9X3 0 3 2 (take ρ = 3) 6 1 + α9X3 − − − location polynomial, we find the error value evaluator, Z0(X) = S1 + (S2 + σ1S1)X + (S3 + σ1S2 + σ2S1)X 2 = α13 + (α14 + 0α13)X + (α9 + 0α14 + 0α13)X2 = α13 + α14X + α9X2. The error values at the positions X3, X8, and X13 are: e3 = −Z0(α−3) σ′(α−3) = α13 + α11 + α3 α3(1 + α8α−3)(1 + α13α−3) = α7 α3 = α4, e8 = −Z0(α−8) σ′(α−8) = α13 + α6 + α8 α8(1 + α3α−8)(1 + α13α−8) = α2 α8 = α9, e13 = −Z0(α−13) σ′(α−13) = α13 + α+ α13 α13(1 + α3α−13)(1 + α8α−13) = α α13 = α3. Consequently, the error pattern is e(X) = α4X3 + α9X8 + α3X13. and the decoded codeword is the all-zero codeword. 2
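The generator polynomials quoted in Problem 7.2 can be verified by multiplying out the factors over GF(2^5) (field generated by p(X) = 1 + X^2 + X^5, as in Problem 6.2). A minimal sketch for the double-error-correcting case:

```python
# Multiply out g(X) = (X + alpha)(X + alpha^2)(X + alpha^3)(X + alpha^4)
# over GF(2^5) and compare with the coefficients quoted in Problem 7.2.

def build_gf32():
    exp, log = [0] * 62, [0] * 32
    x = 1
    for i in range(31):
        exp[i] = exp[i + 31] = x
        log[x] = i
        x <<= 1
        if x & 32:
            x ^= 0b100101              # alpha^5 = 1 + alpha^2
    return exp, log

EXP, LOG = build_gf32()

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):                    # coefficient of X^i at index i
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= gf_mul(pi, qj)
    return out

g = [1]
for i in range(1, 5):
    g = poly_mul(g, [EXP[i], 1])       # factor (X + alpha^i)
# Expected: alpha^10 + alpha^29 X + alpha^19 X^2 + alpha^24 X^3 + X^4
assert g == [EXP[10], EXP[29], EXP[19], EXP[24], 1]
```

Extending the loop to `range(1, 7)` checks the triple-error-correcting generator the same way.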
  • 7.5 The syndrome polynomial is S(X) = α13 + α14X + α9X2 + α7X3 + α8X4 + α3X5 Table P.7.5 displays the steps of Euclidean algorithm for finding the error location and error value polynomials. Table P.7.5 i Z (i) 0 (X) qi(X) σi(X) −1 X6 − 0 0 α13 + α14X + α9X2 + α7X3 + α8X4 + α3X5 − 1 1 1 + α8X + α5X3 + α2X4 α2 + α12X α2 + α12X 2 α + α13X + α12X3 α12 + αX α3 + αX + α13X2 3 α7 + α8X + α3X2 α8 + α5X α9 + α3X3 The error location and error value polynomials are: σ(X) = α9 + α3X3 = α9(1 + α9X3) Z0(X) = α 7 + α8X + α3X2 = α9(α13 + α14X + α9X2) From these polynomials, we find that the error location numbers are α3, α8, and α13, and error values are e3 = −Z0(α−3) σ′(α−3) = α7 + α5 + α12 α9α3(1 + α8α−3)(1 + α13α−3) = α α12 = α4, e8 = −Z0(α−8) σ′(α−8) = α7 + 1 + α2 α9α8(1 + α3α−8)(1 + α13α−8) = α11 α2 = α9, e13 = −Z0(α−13) σ′(α−13) = α7 + α10 + α7 α9α13(1 + α3α−13)(1 + α8α−13) = α10 α7 = α3. 3
  • Hence the error pattern is e(X) = α4X3 + α9X8 + α3X13. and the received polynomial is decoded into the all-zero codeword. 7.6 From the received polynomial, r(X) = α2 + α21X12 + α7X20, we compute the syndrome, S1 = r(α 1) = α2 + α33 + α27 = α27, S2 = r(α 2) = α2 + α45 + α47 = α, S3 = r(α 3) = α2 + α57 + α67 = α28, S4 = r(α 4) = α2 + α69 + α87 = α29, S5 = r(α 5) = α2 + α81 + α107 = α15, S6 = r(α 6) = α2 + α93 + α127 = α8. Therefore, the syndrome polynomial is S(X) = α27 + αX + α28X2 + α29X3 + α15X4 + α8X5 Using the Euclidean algorithm, we find σ(X) = α23X3 + α9X + α22, Z0(X) = α 26X2 + α6X + α18, as shown in the following table: The roots of σ(X) are: 1 = α0, α11 and α19. From these roots, we find the error location numbers: β1 = (α0)−1 = α0, β2 = (α11)−1 = α20, and 4
  • i Z (i) 0 (X) qi(X) σi(X) -1 X6 - 0 0 S(X) - 1 1 α5X4 + α9X3 + α22X2 + α11X + α26 α23X + α30 α23X + α30 2 α8X3 + α4X + α6 α3X + α5 α24X2 + α30X + α10 3 α26X2 + α6X + α18 α28X + α α23X3 + α9X + α22 β3 = (α19)−1 = α12. Hence the error pattern is e(X) = e0 + e12X 12 + e20X 20. The error location polynomial and its derivative are: σ(X) = α22(1 +X)(1 + α12X)(1 + α20X), σ′(X) = α22(1 + α12X)(1 + α20X) + α3(1 +X)(1 + α20X) + α11(1 +X)(1 + α12X). The error values at the 3 error locations are given by: e0 = −Z0(α0) σ′(α0) = α26 + α6 + α8 α22(1 + α12)(1 + α20) = α2, e12 = −Z0(α−12) σ′(α−12) = α2 + α25 + α18 α3(1 + α19)(1 + α8) = α21, e20 = −Z0(α−20) σ′(α−20) = α17 + α17 + α18 α11(1 + α11)(1 + α23) = α7. Hence, the error pattern is e(X) = α2 + α21X12 + α7X20 and the decoded codeword is v(X) = r(X)− e(X) = 0. 5
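Problems 7.5 and 7.6 both run the Euclidean algorithm on the pair (X^6, S(X)). A compact extended-Euclidean sketch over GF(2^4) (field generated by p(X) = 1 + X + X^4) reproduces the iterates of Table P.7.5 for Problem 7.5; this is an illustration under those assumptions, not the book's statement of the algorithm:

```python
# Euclidean algorithm for RS decoding (Problem 7.5): iterate starting
# from X^6 and S(X), stopping when deg Z0 < t = 3.

def build_gf16():
    exp, log = [0] * 30, [0] * 16
    x = 1
    for i in range(15):
        exp[i] = exp[i + 15] = x
        log[x] = i
        x <<= 1
        if x & 16:
            x ^= 0b10011               # alpha^4 = 1 + alpha
    return exp, log

EXP, LOG = build_gf16()
mul = lambda a, b: 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]
div = lambda a, b: 0 if a == 0 else EXP[LOG[a] - LOG[b] + 15]

def deg(p):
    return max((i for i, c in enumerate(p) if c), default=-1)

def poly_divmod(a, b):
    a, db = a[:], deg(b)
    q = [0] * (max(deg(a) - db, 0) + 1)
    while deg(a) >= db:
        sh, c = deg(a) - db, div(a[deg(a)], b[db])
        q[sh] ^= c
        for i in range(db + 1):
            a[i + sh] ^= mul(c, b[i])
    return q, a

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= mul(pi, qj)
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a ^ b for a, b in zip(p, q)]

S = [EXP[13], EXP[14], EXP[9], EXP[7], EXP[8], EXP[3]]   # S1..S6
r_prev, r = [0] * 6 + [1], S[:]        # X^6 and S(X)
s_prev, s = [0], [1]                   # sigma_{i-1}, sigma_i
while deg(r) >= 3:
    q, rem = poly_divmod(r_prev, r)
    s_prev, s = s, poly_add(s_prev, poly_mul(q, s))
    r_prev, r = r, rem

trim = lambda p: p[:deg(p) + 1]
assert trim(s) == [EXP[9], 0, 0, EXP[3]]    # sigma(X) = a^9 + a^3 X^3
assert trim(r) == [EXP[7], EXP[8], EXP[3]]  # Z0(X) = a^7 + a^8 X + a^3 X^2
```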
  • 7.9 Let g(X) be the generator polynomial of a t-symbol correcting RS code C over GF(q) with α, α2, . . . , α2t as roots, where α is a primitive element of GF(q). Since g(X) divides Xq−1 − 1, then Xq−1 − 1 = g(X)h(X). The polynomial h(X) has α2t+1, . . . , αq−1 as roots and is called the parity polynomial. The dual code Cd of C is generated by the reciprocal of h(X), h∗(X) = Xq−1−2th(X−1). We see that h∗(X) has α−(2t+1) = αq−2t−2, α−(2t+2) = αq−2t−3, . . . , α−(q−2) = α, and α−(q−1) = 1 as roots. Thus h∗(X) has the following consecutive powers of α as roots: 1, α, α2, . . . , αq−2t−2. Hence Cd is a (q − 1, 2t, q − 2t) RS code with minimum distance q − 2t. 7.10 The generator polynomial grs(X) of the RS code C has α, α2, . . . , αd−1 as roots. Note that GF(2m) has GF(2) as a subfield. Consider those polynomial v(X) over GF(2) with degree 2m−2 or less that has α, α2, . . . , αd−1 (also their conjugates) as roots. These polynomials over GF(2) form a primitive BCH code Cbch with designed distance d. Since these polynomials are also code polynomials in the RS code Crs, hence Cbch is a subcode of Crs. 7.11 Suppose c(X) = ∑2m−2 i=0 ciX i is a minimum weight code polynomial in the (2m − 1, k) RS code C. The minimum weight is increased to d+ 1 provided c∞ = −c(1) = − 2m−2∑ i=0 ci 6= 0. We know that c(X) is divisible by g(X). Thus c(X) = a(X)g(X) with a(X) 6= 0. Consider c(1) = a(1)g(1). Since 1 is not a root of g(X), g(1) 6= 0. If a(1) 6= 0, then c∞ = −c(1) 6= 0 and the vector (c∞, c0, c1, . . . , c2m−2) has weight d+1. Next we show that a(1) is not equal to 0. If a(1) = 0, 6
then a(X) has X − 1 as a factor, c(X) is a multiple of (X − 1)g(X), and c(X) must have weight at least d + 1. This contradicts the hypothesis that c(X) is a minimum-weight code polynomial. Consequently, the extended RS code has minimum distance d + 1.

7.12 To prove the minimum distance of the doubly extended RS code, we need to show that no 2t or fewer columns of H1 sum to zero over GF(2^m) and that there are 2t + 1 columns of H1 that sum to zero. Suppose there are δ columns of H1 that sum to zero, with δ ≤ 2t. There are 4 cases to be considered:

(1) All δ columns are from the same submatrix H.
(2) The δ columns consist of the first column of H1 and δ − 1 columns from H.
(3) The δ columns consist of the second column of H1 and δ − 1 columns from H.
(4) The δ columns consist of the first two columns of H1 and δ − 2 columns from H.

The first case leads to a δ × δ Vandermonde determinant. The second and third cases lead to a (δ − 1) × (δ − 1) Vandermonde determinant. The fourth case leads to a (δ − 2) × (δ − 2) Vandermonde determinant. The derivations are exactly the same as in the book. Since Vandermonde determinants are nonzero, no δ ≤ 2t columns of H1 can sum to zero. Hence the minimum distance of the extended RS code is at least 2t + 1. However, H generates an RS code with minimum distance exactly 2t + 1, so there are 2t + 1 columns of H (which are also columns of H1) that sum to zero. Therefore the minimum distance of the extended RS code is exactly 2t + 1.

7.13 Consider

v(X) = Σ_{i=0}^{2^m−2} a(α^i) X^i = Σ_{i=0}^{2^m−2} ( Σ_{j=0}^{k−1} a_j α^(ij) ) X^i.

Let α be a primitive element in GF(2^m). Replacing X by α^q, we have

v(α^q) = Σ_{i=0}^{2^m−2} Σ_{j=0}^{k−1} a_j α^(ij) α^(iq) = Σ_{j=0}^{k−1} a_j ( Σ_{i=0}^{2^m−2} α^(i(j+q)) ).
We factor 1 + X^(2^m −1) as follows:

1 + X^(2^m −1) = (1 + X)(1 + X + X^2 + · · · + X^(2^m −2)).

Since the polynomial 1 + X + X^2 + · · · + X^(2^m −2) has α, α^2, . . . , α^(2^m −2) as roots, for 1 ≤ l ≤ 2^m − 2,

Σ_{i=0}^{2^m−2} α^(li) = 1 + α^l + α^(2l) + · · · + α^((2^m −2)l) = 0.

Therefore, Σ_{i=0}^{2^m−2} α^(i(j+q)) = 0 whenever 1 ≤ j + q ≤ 2^m − 2, which holds for 0 ≤ j < k and 1 ≤ q ≤ 2^m − k − 1. This implies that v(α^q) = 0 for 1 ≤ q ≤ 2^m − k − 1. Hence v(X) has α, α^2, . . . , α^(2^m −k−1) as roots. The set {v(X)} is a set of polynomials over GF(2^m) with 2^m − k − 1 consecutive powers of α as roots, and hence it forms a (2^m − 1, k, 2^m − k) cyclic RS code over GF(2^m).
  • Chapter 8 8.2 The order of the perfect difference set {0, 2, 3} is q = 2. (a) The length of the code n = 22 + 2 + 1 = 7. (b) Let z(X) = 1 +X2 +X3. Then the parity-check polynomial is h(X) = GCD{1 +X2 +X3, X7 + 1} = 1 +X2 +X3. (c) The generator polynomial is g(X) = X7 + 1 h(X) = 1 +X2 +X3 +X4. (d) From g(X), we find that the generator matrix in systematic form is G =  1 0 1 1 1 0 0 1 1 1 0 0 1 0 0 1 1 1 0 0 1  . The parity-check matrix in systematic form is H =  1 0 0 0 1 1 0 0 1 0 0 0 1 1 0 0 1 0 1 1 1 0 0 0 1 1 0 1  . The check-sums orthogonal on the highest order error digit e6 are: A1 = s0 + s2, A2 = s1, A3 = s3. Based on the above check-sum, a type-1 decoder can be implemented. 8.4 (a) Since all the columns of H are distinct and have odd weights, no two or three columns can sum to zero, hence the minimum weight of the code is at least 4. However, the first, 1
  • the second, the third and the 6th columns sum to zero. Therefore the minimum weight, hence the minimum distance, of the code is 4. (b) The syndrome of the error vector e is s = (s0, s1, s2, s3, s4) = eH T with s0 = e0 + e5 + e6 + e7 + e8 + e9 + e10, s1 = e1 + e5 + e6 + e8, s2 = e2 + e5 + e7 + e9, s3 = e3 + e6 + e7 + e10, s4 = e4 + e8 + e9 + e10. (c) The check-sums orthogonal on e10 are: A1,10 = s0 + s1 + s2 = e0 + e1 + e2 + e5 + e10, A2,10 = s3 = e3 + e6 + e7 + e10, A3,10 = s4 = e4 + e8 + e9 + e10. The check-sums orthogonal on e9 are: A1,9 = s0 + s1 + s3, A2,9 = s2, A3,9 = s4. The check-sums orthogonal on e8 are: A1,8 = s0 + s2 + s3, A2,8 = s1, A3,8 = s4. 2
  • The check-sums orthogonal on e7 are: A1,7 = s0 + s1 + s4, A2,7 = s2, A3,7 = s3. The check-sums orthogonal on e6 are: A1,6 = s0 + s2 + s4, A2,6 = s1, A3,6 = s3. The check-sums orthogonal on e5 are: A1,5 = s0 + s3 + s4, A2,5 = s1, A3,5 = s2. (d) Yes, the code is completely orthogonalizable, since there are 3 check-sums orthogonal on each message bit and the minimum distance of the code is 4. 8.5 For m = 6, the binary radix-2 form of 43 is 43 = 1 + 2 + 23 + 25. The nonzero-proper descendants of 43 are: 1, 2, 8, 32, 3, 9, 33, 10, 34, 40, 11, 35, 41, 42. 8.6 α∞ α0 α1 α2 α3 α4 α5 α6 α7 α8 α9 α10 α11 α12 α13 α14 u = ( 1 1 1 0 0 1 0 1 0 1 1 0 0 0 0 1 ) Applying the permutation Z = α3Y + α11 to the above vector, the component at the location αi is permute to the location α3αi+α11. For example, the 1-component at the location Y = α8 is permuted to the location α3α8 + α11 = α11 + α11 = α∞. Performing this permutation to 3
  • the each component of the above vector, we obtain the following vector α∞ α0 α1 α2 α3 α4 α5 α6 α7 α8 α9 α10 α11 α12 α13 α14 v = ( 1 1 0 1 0 0 1 0 0 1 1 0 1 0 1 0 ) 8.7 For J = 9 and L = 7, X63 + 1 can be factor as follows: X63 + 1 = (1 +X9)(1 +X9 +X18 +X27 +X36 +X45 +X54). Then pi(X) = 1+X9+X18+X27+X36+X45+X54. Let α be a primitive element in GF(26) whose minimal polynomial is φ1(X) = 1+X+X6. Because α63 = 1, the polynomial 1+X9 has α0, α7, α14, α21, α28, α35, α42, α49, and α56 as all it roots. Therefore, the polynomial pi(X) has αh as a root when h is not a multiple of 7 and 0 < h < 63. From the conditions (Theorem 8.2) on the roots of H(X), we can find H(X) as: H(X) = LCM{minimal polynomials φi(X) of the roots of H(X)}. As the result, H(X) = φ1(X)φ3(X)φ5(X)φ9(X)φ11(X)φ13(X)φ27(X), where φ1 = 1 + X + X6, φ3 = 1 + X + X2 + X4 + X6, φ5 = 1 + X + X2 + X5 + X6, φ9 = 1 +X 2 +X3, φ11 = 1 +X 2 +X3 +X5 +X6, φ13 = 1 +X +X 3 +X4 +X6, and φ27 = 1 +X +X 3 . Then, we can find G(X), G(X) = X63 + 1 H(X) = (1 +X9)pi(X) H(X) = (1 +X9)(1 +X2 +X4 +X5 +X6)(1 +X +X4 +X5 +X6)(1 +X5 +X6). For type-1 DTI code of length 63 and J = 9, the generator polynomial is: g1(X) = X27G(X−1) 1 +X = (1 +X9)(1 +X +X2 +X4 +X6)(1 +X +X2 +X5 +X6)(1 +X +X6) 1 +X = (1 +X +X2 +X3 +X4 +X5 +X6 +X7 +X8)(1 +X +X2 +X4 +X6) (1 +X +X2 +X5 +X6)(1 +X +X6). Represent the polynomials pi(X), Xpi(X), X2pi(X), X3pi(X), X4pi(X), X5pi(X), X6pi(X), X7pi(X), and X8pi(X) by 63-tuple location vectors. Add an overall parity-check digit and apply the affine permutation, Y = αX + α62, to each of these location vectors. Then, remove the overall parity-check digits of all location vectors. By removing one vector with odd weight, 4
  • we can obtain the polynomials orthogonal on the digit position X62. They are: X11 +X16 +X18 +X24 +X48 +X58 +X59 +X62, X1 +X7 +X31 +X41 +X42 +X45 +X57 +X62, X23 +X33 +X34 +X37 +X49 +X54 +X56 +X62, X2 +X14 +X19 +X21 +X27 +X51 +X61 +X62, X0 +X3 +X15 +X20 +X22 +X28 +X52 +X62, X9 +X10 +X13 +X25 +X30 +X32 +X38 +X62, X4 +X6 +X12 +X36 +X46 +X47 +X50 +X62, X5 +X29 +X39 +X40 +X43 +X55 +X60 +X62. 8.8 For J = 7 and L = 9, X63 + 1 = (1 +X7)(1 +X7 +X14 +X21 +X28 +X35 +X42 +X49 +X56) and pi(X) = 1+X7+X14+X21+X28+X35+X42+X49+X56. Let α be a primitive element in GF(26) whose minimal polynomial is φ1(X) = 1+X+X6. Because α63 = 1, the polynomial 1 +X7 has α0, α9, α18, α27, α36, α45, and α54 as all it roots. Therefore, the polynomial pi(X) has αh as a root when h is not a multiple of 9 and 0 < h < 63. From the conditions (Theorem 8.2) on the roots of H(X), we can find H(X) as: H(X) = LCM{minimal polynomials φi(X) of the roots of H(X)}. As the result, H(X) = φ1(X)φ3(X)φ5(X)φ7(X)φ21(X), where φ1 = 1 + X + X6, φ3 = 1 + X + X2 + X4 + X6, φ5 = 1 + X + X2 + X5 + X6, φ7 = 1 +X 3 +X6, and φ21 = 1 +X +X2. Then, we can find G(X), G(X) = X63 + 1 H(X) = (1 +X7)pi(X) H(X) = (1 +X7)(1 +X2 +X3 +X5 +X6)(1 +X +X3 +X4 +X6) (1 +X2 +X4 +X5 +X6)(1 +X +X4 +X5 +X6)(1 +X5 +X6). 5
  • For type-1 DTI code of length 63 and J = 7, the generator polynomial is: g1(X) = X37G(X−1) 1 +X = (1 +X7)(1 +X +X3 +X4 +X6)(1 +X2 +X3 +X5 +X6) 1 +X (1 +X +X2 +X4 +X6)(1 +X +X2 +X5 +X6)(1 +X +X6) = (1 +X +X2 +X3 +X4 +X5 +X6)(1 +X +X3 +X4 +X6)(1 +X2 +X3 +X5 +X6) (1 +X +X2 +X4 +X6)(1 +X +X2 +X5 +X6)(1 +X +X6). Represent the polynomials pi(X), Xpi(X), X2pi(X), X3pi(X), X4pi(X), X5pi(X), and X6pi(X) by 63-tuple location vectors. Add an overall parity-check digit and apply the affine permutation, Y = αX + α62, to each of these location vectors. By removing one vector with odd weight, we can obtain the polynomials orthogonal on the digit position X62. They are: X11 +X14 +X32 +X36 +X43 +X44 +X45 +X52 +X56 +X62, X3 +X8 +X13 +X19 +X31 +X33 +X46 +X48 +X60 +X62, X2 +X10 +X23 +X24 +X26 +X28 +X29 +X42 +X50 +X62, X1 +X6 +X9 +X15 +X16 +X35 +X54 +X55 +X61 +X62, X0 +X4 +X7 +X17 +X27 +X30 +X34 +X39 +X58 +X62, X5 +X21 +X22 +X38 +X47 +X49 +X53 +X57 +X59 +X62. 8.9 The generator polynomial of the maximum-length sequence code of length n = 2m − 1 is g(X) = (Xn + 1)/p(X) = (X + 1)(1 +X +X2 + . . .+Xn−1)/p(X), where p(X) is a primitive polynomial of degree m over GF(2). Since p(X) and (X + 1) are relatively prime, g(X) has 1 as a root. Since the all-one vector 1 + X + X2 + . . . + Xn−1 does not have 1 as a root, it is not divisible by g(X). Therefore, the all-one vector is not a codeword in a maximum-length sequence code. 8.17 There are five 1-flats that pass through the point α7. The 1-flats passing through α7 can be represented by α7 + βa1, where a1 is linearly independent of α7 and β ∈ GF (22). They are 6
  • five 1-flats passing through α7 which are: L1 = {α7, α9, α13, α6}, L2 = {α7, α14, α10, α8}, L3 = {α7, α12, 0, α2}, L4 = {α7, α4, α11, α5}, L5 = {α7, α3, 1, α}. 8.18 (a) There are twenty one 1-flats that pass through the point α63. They are: L1 = {α63, 0, α42, α21} L2 = {α63, α6, α50, α39} L3 = {α63, α12, α15, α37} L4 = {α63, α32, α4, α9} L5 = {α63, α24, α11, α30} L6 = {α63, α62, α7, α17} L7 = {α63, α1, α18, α8} L8 = {α63, α26, α41, α38} L9 = {α63, α48, α60, α22} L10 = {α63, α45, α46, α53} L11 = {α63, α61, α34, α14} L12 = {α63, α25, α3, α51} L13 = {α63, α2, α16, α36} L14 = {α63, α35, α31, α40} L15 = {α63, α52, α13, α19} L16 = {α63, α23, α54, α58} L17 = {α63, α33, α44, α57} L18 = {α63, α47, α49, α20} 7
  • L19 = {α63, α27, α43, α29} L20 = {α63, α56, α55, α10} L21 = {α63, α59, α28, α5} (b) There are five 2-flats that intersect on the 1-flat, {α63 + ηα}, where η ∈ GF (22). They are: F1 = {1, α6, α50, α39, 0, α, α22, α43, α42, α29, α18, α48, α21, α60, α27, α8} F2 = {1, α6, α50, α39, α12, α26, α10, α46, α15, α53, α41, α56, α37, α55, α45, α38} F3 = {1, α6, α50, α39, α32, α35, α20, α57, α4, α33, α31, α47, α9, α49, α44, α40} F4 = {1, α6, α50, α39, α24, α16, α34, α17, α11, α62, α36, α14, α30, α61, α7, α2} F5 = {1, α6, α50, α39, α25, α5, α54, α52, α3, α13, α59, α58, α51, α23, α19, α28} 8.19 The 1-flats that pass through the point α21 are: L1 = {α21, α42, α11, α50, α22, α44, α25, α37} L2 = {α21, α60, α35, α31, α47, α54, α32, α52} L3 = {α21, α58, α9, α26, α6, α5, α28, α34} L4 = {α21, α30, α57, 0, α3, α48, α39, α12} L5 = {α21, α51, α61, α27, α19, α14, α62, α2} L6 = {α21, α38, α40, α33, α46, α17, α18, α7} L7 = {α21, α29, α16, α53, α23, α0, α4, α1} L8 = {α21, α59, α15, α45, α56, α8, α55, α13} L9 = {α21, α43, α41, α20, α10, α36, α24, α49} 8.20 a. The radix-23 expansion of 47 is expressed as follows: 47 = 7 + 5 · 23. 8
  • Hence, the 23-weight of 47 W23(47) = 7 + 5 = 12. b. W23(47 (0)) = W23(47) = 7 + 5 = 12, W23(47 (1)) = W23(31) = 7 + 3 = 10, W23(47 (2)) = W23(62) = 6 + 7 = 13. Hence, max 0≤l
Chapter 11 Convolutional Codes

11.1 (a) The encoder diagram is shown below. [Encoder circuit: input u; output sequences v^(0), v^(1), v^(2).]

(b) The generator matrix is given by

G = [ 111 101 011
          111 101 011
              111 101 011
                  . . . ].

(c) The codeword corresponding to u = (11101) is given by

v = u · G = (111, 010, 001, 110, 100, 101, 011).

11.2 (a) The generator sequences of the convolutional encoder in Figure 11.3 on page 460 are given in (11.21).
(b) The generator matrix is given by

G = [ 1111 0000 0000
      0101 0110 0000
      0011 0100 0011
           1111 0000 0000
           0101 0110 0000
           0011 0100 0011
                . . . ].

(c) The codeword corresponding to u = (110, 011, 101) is given by

v = u · G = (1010, 0000, 1110, 0111, 0011).

11.3 (a) The generator matrix is given by

G(D) = [ 1 + D    1 + D^2    1 + D + D^2 ].

(b) The output sequences corresponding to u(D) = 1 + D^2 + D^3 + D^4 are

V(D) = [ v^(0)(D), v^(1)(D), v^(2)(D) ]
     = [ 1 + D + D^2 + D^5,  1 + D^3 + D^5 + D^6,  1 + D + D^4 + D^6 ],

and the corresponding codeword is

v(D) = v^(0)(D^3) + D v^(1)(D^3) + D^2 v^(2)(D^3)
     = 1 + D + D^2 + D^3 + D^5 + D^6 + D^10 + D^14 + D^15 + D^16 + D^19 + D^20.

11.4 (a) The generator matrix is given by

G(D) = [ 1 + D    D    1 + D
         D        1    1     ]

and the composite generator polynomials are

g1(D) = g1^(0)(D^3) + D g1^(1)(D^3) + D^2 g1^(2)(D^3) = 1 + D^2 + D^3 + D^4 + D^5

and

g2(D) = g2^(0)(D^3) + D g2^(1)(D^3) + D^2 g2^(2)(D^3) = D + D^2 + D^3.

(b) The codeword corresponding to the set of input sequences U(D) = [ 1 + D + D^3, 1 + D^2 + D^3 ] is

v(D) = u^(1)(D^3) g1(D) + u^(2)(D^3) g2(D) = 1 + D + D^3 + D^4 + D^6 + D^10 + D^13 + D^14.
11.5 (a) The generator matrix is given by

G = [ 111 001 010 010 001 011
          111 001 010 010 001 011
              111 001 010 010 001 011
                  . . . ].

(b) The parity sequences corresponding to u = (1101) are given by

v^(1)(D) = u(D) g^(1)(D) = (1 + D + D^3)(1 + D^2 + D^3 + D^5) = 1 + D + D^2 + D^3 + D^4 + D^8

and

v^(2)(D) = u(D) g^(2)(D) = (1 + D + D^3)(1 + D + D^4 + D^5) = 1 + D^2 + D^3 + D^6 + D^7 + D^8.

Hence,

v^(1) = (111110001)
v^(2) = (101100111).

11.6 (a) The controller canonical form encoder realization, requiring 6 delay elements, is shown below. [Encoder circuit: inputs u^(1), u^(2); outputs v^(0), v^(1), v^(2).]
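The parity sequences in Problem 11.5(b) are ordinary GF(2) polynomial products, and the two 9-tuples are easy to spot-check; a sketch:

```python
# GF(2) polynomial multiplication reproducing Problem 11.5(b).
# Polynomials are bit-lists: index i holds the coefficient of D^i.

def gf2_poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

u  = [1, 1, 0, 1]                    # u(D)    = 1 + D + D^3
g1 = [1, 0, 1, 1, 0, 1]              # g(1)(D) = 1 + D^2 + D^3 + D^5
g2 = [1, 1, 0, 0, 1, 1]              # g(2)(D) = 1 + D + D^4 + D^5
assert gf2_poly_mul(u, g1) == [1, 1, 1, 1, 1, 0, 0, 0, 1]   # v(1)
assert gf2_poly_mul(u, g2) == [1, 0, 1, 1, 0, 0, 1, 1, 1]   # v(2)
```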
(b) The observer canonical form encoder realization, requiring only 3 delay elements, is shown below. [Encoder circuit: inputs u^(1), u^(2); outputs v^(0), v^(1), v^(2).]

11.14 (a) The GCD of the generator polynomials is 1.

(b) Since the GCD is 1, the inverse transfer function matrix G^−1(D) must satisfy

G(D) G^−1(D) = [ 1 + D^2    1 + D + D^2 ] G^−1(D) = I.

By inspection,

G^−1(D) = [ 1 + D
            D     ].

11.15 (a) The GCD of the generator polynomials is 1 + D^2, and a feedforward inverse does not exist.

(b) The encoder state diagram is shown below. [State diagram: 8 states S0–S7 with input/output branch labels.]

(c) The cycles S2S5S2 and S7S7 both have zero output weight.

(d) The infinite-weight information sequence

u(D) = 1/(1 + D^2) = 1 + D^2 + D^4 + D^6 + D^8 + · · ·
results in the output sequences

v^(0)(D) = u(D)(1 + D^2) = 1
v^(1)(D) = u(D)(1 + D + D^2 + D^3) = 1 + D,

and hence a codeword of finite weight.

(e) This is a catastrophic encoder realization.

11.16 For a systematic (n, k, ν) encoder, the generator matrix G(D) is a k × n matrix of the form

G(D) = [ I_k | P(D) ] =
[ 1 0 · · · 0   g1^(k)(D)   · · ·   g1^(n−1)(D)
  0 1 · · · 0   g2^(k)(D)   · · ·   g2^(n−1)(D)
  .                 .                   .
  0 0 · · · 1   gk^(k)(D)   · · ·   gk^(n−1)(D) ].

The transfer function matrix of a feedforward inverse G^−1(D) with delay l = 0 must be such that

G(D) G^−1(D) = I_k.

A matrix satisfying this condition is given by

G^−1(D) = [ I_k
            0_((n−k)×k) ].
11.19 (a) The encoder state diagram is shown below. [State diagram: states S0–S3 with branch labels 0/000, 1/001, 1/111, 0/011, 1/100, 0/110, 1/010, 0/101.]

(b) The modified state diagram is shown below. [Modified state diagram with branch gains X, X^2, and X^3.]

(c) The WEF is given by

A(X) = Σ_i F_i Δ_i / Δ.
There are 3 cycles in the graph:

Cycle 1: S1S2S1      C1 = X^3
Cycle 2: S1S3S2S1    C2 = X^4
Cycle 3: S3S3        C3 = X.

There is one pair of nontouching cycles:

Cycle pair 1: (Cycle 1, Cycle 3)    C1C3 = X^4.

There are no other sets of nontouching cycles. Therefore,

Δ = 1 − Σ_i C_i + Σ_{i′,j′} C_{i′} C_{j′} = 1 − (X + X^3 + X^4) + X^4 = 1 − X − X^3.

There are 2 forward paths:

Forward path 1: S0S1S2S0      F1 = X^7
Forward path 2: S0S1S3S2S0    F2 = X^8.

Only cycle 3 does not touch forward path 1, and hence Δ1 = 1 − X. Forward path 2 touches all the cycles, and hence Δ2 = 1. Finally, the WEF is given by

A(X) = [X^7 (1 − X) + X^8] / (1 − X − X^3) = X^7 / (1 − X − X^3).

Carrying out the division,

A(X) = X^7 + X^8 + X^9 + 2X^10 + · · · ,

indicating that there is one codeword of weight 7, one codeword of weight 8, one codeword of weight 9, 2 codewords of weight 10, and so on.

(d) The augmented state diagram is shown below.

(e) The IOWEF is given by

A(W, X, L) = Σ_i F_i Δ_i / Δ.

There are 3 cycles in the graph:

Cycle 1: S1S2S1      C1 = W X^3 L^2
Cycle 2: S1S3S2S1    C2 = W^2 X^4 L^3
Cycle 3: S3S3        C3 = W X L.
[Augmented state diagram: states S0–S3 with branch gains W X^3 L, X^2 L, W X L, etc.]

There is one pair of nontouching cycles:

Cycle pair 1: (Cycle 1, Cycle 3)    C1C3 = W^2 X^4 L^3.

There are no other sets of nontouching cycles. Therefore,

Δ = 1 − (W X^3 L^2 + W^2 X^4 L^3 + W X L) + W^2 X^4 L^3.

There are 2 forward paths:

Forward path 1: S0S1S2S0      F1 = W X^7 L^3
Forward path 2: S0S1S3S2S0    F2 = W^2 X^8 L^4.

Only cycle 3 does not touch forward path 1, and hence Δ1 = 1 − W X L. Forward path 2 touches all the cycles, and hence Δ2 = 1. Finally, the IOWEF is given by

A(W, X, L) = [W X^7 L^3 (1 − W X L) + W^2 X^8 L^4] / [1 − (W X^3 L^2 + W^2 X^4 L^3 + W X L) + W^2 X^4 L^3]
           = W X^7 L^3 / (1 − W X L − W X^3 L^2).

Carrying out the division,

A(W, X, L) = W X^7 L^3 + W^2 X^8 L^4 + W^3 X^9 L^5 + · · · ,

indicating that there is one codeword of weight 7 with information weight 1 and length 3 branches, one codeword of weight 8 with information weight 2 and length 4 branches, and one codeword of weight 9 with information weight 3 and length 5 branches.
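Carrying out the division X^7/(1 − X − X^3) can be mechanized with integer power-series long division (the coefficients obey a_n = a_{n−1} + a_{n−3}); a sketch:

```python
# Expand A(X) = X^7 / (1 - X - X^3) as an integer power series to count
# codewords by weight (Problem 11.19(c)).

def series_div(num, den, n):
    """First n coefficients of num/den; lists are indexed by power."""
    out = []
    num = num + [0] * n
    for i in range(n):
        c = num[i] // den[0]
        out.append(c)
        for j, d in enumerate(den):      # subtract c * X^i * den
            if i + j < len(num):
                num[i + j] -= c * d
    return out

coeffs = series_div([0] * 7 + [1], [1, -1, 0, -1], 12)
# One codeword each of weight 7, 8, 9; two of weight 10; ...
assert coeffs[7:12] == [1, 1, 1, 2, 3]
```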
11.20 Using the state variable method described on pp. 505–506, the WEF is given by

A(X) = −X^4 (−1 − 2X^4 + X^3 + 9X^20 − 42X^18 + 78X^16 + 38X^12 + 3X^17 + 5X^13 + 9X^11 − 9X^15 − 2X^7 − 74X^14 − 14X^10 − 9X^9 + X^6 + 6X^8) / (1 − X + 2X^4 − X^3 − X^2 + 3X^24 + X^21 − 3X^20 + 32X^18 − 8X^22 − X^19 − 45X^16 − 8X^12 − 5X^17 − 6X^11 + 9X^15 + 2X^7 + 27X^14 + 3X^10 − 2X^9 − 4X^6 + X^8).

Performing the division results in

A(X) = X^4 + X^5 + 2X^6 + 3X^7 + 6X^8 + 9X^9 + · · · ,

which indicates that there is one codeword of weight 4, one codeword of weight 5, two codewords of weight 6, and so on.

11.28 (a) From Problem 11.19(c), the WEF of the code is

A(X) = X^7 + X^8 + X^9 + 2X^10 + · · · ,

and the free distance of the code is therefore d_free = 7, the lowest power of X in A(X).

(b) The complete CDF is shown below. [Plot of d_l versus l: d_min = 5, d_free = 7.]

(c) The minimum distance is d_min = d_l|_{l=m=2} = 5.
11.29 (a) By examining the encoder state diagram in Problem 11.15 and considering only paths that begin and end in state S0 (see page 507), we find that the free distance of the code is d_free = 6. This corresponds to the path S0S1S2S4S0 and the input sequence u = (1000).

(b) The complete CDF is shown below. [Plot of d_l versus l: d_min = 3, d_free = 6.]

(c) The minimum distance is d_min = d_l|_{l=m=3} = 3.

11.31 By definition, the free distance d_free is the minimum weight of any path that has diverged from and remerged with the all-zero state. Assume that [v]_j represents the shortest remerged path through the state diagram with weight d_free. Letting [d_l]_re be the minimum weight of all remerged paths of length l, it follows that [d_l]_re = d_free for all l ≥ j. Also, for a noncatastrophic encoder, any path that remains unmerged must accumulate weight. Letting [d_l]_un be the minimum weight of all unmerged paths of length l, it follows that

lim_{l→∞} [d_l]_un → ∞.

Therefore

lim_{l→∞} d_l = min { lim_{l→∞} [d_l]_re, lim_{l→∞} [d_l]_un } = d_free.    Q.E.D.
Chapter 12 Optimum Decoding of Convolutional Codes

12.1 (Note: The problem should read "for the (3,2,1) encoder in Example 11.2" rather than "for the (3,2,2) code in Table 12.1(d)".) The state diagram of the encoder is given by:

[State diagram: 4 states S0–S3, with 16 branches labeled by input/output pairs 00/000, 01/111, 11/101, 01/011, 00/100, 10/101, 00/011, 11/110, 00/111, 10/010, 10/001, 01/100, 11/001, 10/110, 11/010, 01/000.]
From the state diagram, we can draw a trellis diagram containing h + m + 1 = 3 + 1 + 1 = 5 levels, as shown below. [Trellis diagram: 5 levels, states S0–S3, branches labeled with the input/output pairs of the state diagram.]

Hence, for u = (11, 01, 10),

v^(0) = (1001),  v^(1) = (1001),  v^(2) = (0011),

and

v = (110, 000, 001, 111),

agreeing with (11.16) in Example 11.2. The path through the trellis corresponding to this codeword is shown highlighted in the figure.
12.2 Note that

Σ_{l=0}^{N−1} c2 [log P(r_l|v_l) + c1] = Σ_{l=0}^{N−1} [c2 log P(r_l|v_l) + c2 c1] = c2 Σ_{l=0}^{N−1} log P(r_l|v_l) + N c2 c1.

Since

max_v { c2 Σ_{l=0}^{N−1} log P(r_l|v_l) + N c2 c1 } = c2 max_v { Σ_{l=0}^{N−1} log P(r_l|v_l) } + N c2 c1

if c2 is positive, any path that maximizes Σ_{l=0}^{N−1} log P(r_l|v_l) also maximizes Σ_{l=0}^{N−1} c2 [log P(r_l|v_l) + c1].

12.3 The integer metric table becomes:

      0_1  0_2  1_2  1_1
  0    6    5    3    0
  1    0    3    5    6

The received sequence is

r = (1_1 1_2 0_1,  1_1 1_1 0_2,  1_1 1_1 0_1,  1_1 1_1 1_1,  0_1 1_2 0_1,  1_2 0_2 1_1,  1_2 0_1 1_1).

The decoded sequence is shown in the figure below, and the final survivor is

v̂ = (111, 010, 110, 011, 000, 000, 000),

which yields a decoded information sequence of

û = (11000).

This result agrees with Example 12.1.
• 4

[Figure: decoding trellis for Problem 12.3, with branch labels and cumulative survivor metrics at each state.]
• 5

12.4 For the given channel transition probabilities, the resulting metric table is:

       0_1      0_2      0_3      0_4      1_4      1_3      1_2      1_1
0    −0.363   −0.706   −0.777   −0.955   −1.237   −1.638   −2.097   −2.699
1    −2.699   −2.097   −1.638   −1.237   −0.955   −0.777   −0.706   −0.363

To construct an integer metric table, choose c_1 = 2.699 and c_2 = 4.28. Then the integer metric table becomes:

       0_1   0_2   0_3   0_4   1_4   1_3   1_2   1_1
0      10     9     8     7     6     5     3     0
1       0     3     5     6     7     8     9    10

12.5 (a) Referring to the state diagram of Figure 11.13(a), the trellis diagram for an information sequence of length h = 4 is shown in the figure below.

(b) After Viterbi decoding, the final survivor is

v̂ = (11, 10, 01, 00, 11, 00).

This corresponds to the information sequence û = (1110).
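The integer table above can be reproduced by computing c_2·(M + c_1) for each entry of the real-valued table and rounding to the nearest integer; a sketch, assuming the metric values listed in Problem 12.4:

```python
# Row of real-valued metrics for input bit 0 (Problem 12.4).
row0 = [-0.363, -0.706, -0.777, -0.955, -1.237, -1.638, -2.097, -2.699]
c1, c2 = 2.699, 4.28

# Shift so the smallest metric maps to 0, scale by c2, round to integers.
int_row0 = [round(c2 * (m + c1)) for m in row0]
print(int_row0)  # → [10, 9, 8, 7, 6, 5, 3, 0]
# The row for input bit 1 is the mirror image of this one.
```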
• 6

[Figure: decoding trellis for Problem 12.5, states S0-S7, with branch labels and cumulative survivor metrics at each state.]
• 7

12.6 Combining the soft-decision outputs yields the following transition probabilities:

        0       1
0     0.909   0.091
1     0.091   0.909

For hard-decision decoding, the metric is simply Hamming distance. For the received sequence

r = (11, 10, 00, 01, 10, 01, 00),

the decoding trellis is as shown in the figure below, and the final survivor is

v̂ = (11, 10, 01, 01, 00, 11, 00),

which corresponds to the information sequence û = (1110). This result matches the result obtained using soft decisions in Problem 12.5.
• 8

[Figure: hard-decision decoding trellis for Problem 12.6, with branch labels and cumulative Hamming-distance metrics at each state.]
• 9

12.9 Proof: For d even,

P_d = (1/2) C(d, d/2) p^{d/2}(1−p)^{d/2} + Σ_{e=d/2+1}^{d} C(d, e) p^e (1−p)^{d−e}
    < Σ_{e=d/2}^{d} C(d, e) p^e (1−p)^{d−e}
    < Σ_{e=d/2}^{d} C(d, e) p^{d/2} (1−p)^{d/2}
    = p^{d/2}(1−p)^{d/2} Σ_{e=d/2}^{d} C(d, e)
    < 2^d p^{d/2}(1−p)^{d/2},

and thus (12.21) is an upper bound on P_d for d even. Q.E.D.

12.10 The event error probability is bounded by (12.25):

P(E) < Σ_{d=d_free}^{∞} A_d P_d < A(X)|_{X=2√(p(1−p))}.

From Example 11.12,

A(X) = (X^6 + X^7 − X^8)/(1 − 2X − X^3) = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + · · · ,

which yields

(a) P(E) < 1.2118 × 10^{−4} for p = 0.01,
(b) P(E) < 7.7391 × 10^{−8} for p = 0.001.

The bit error probability is bounded by (12.29):

P_b(E) < Σ_{d=d_free}^{∞} B_d P_d < B(X)|_{X=2√(p(1−p))} = (1/k) ∂A(W,X)/∂W |_{X=2√(p(1−p)), W=1}.

From Example 11.12,

A(W,X) = (W X^7 + W^2(X^6 − X^8))/(1 − W(2X + X^3))
       = W X^7 + W^2(X^6 + X^8 + X^10) + W^3(2X^7 + 3X^9 + 3X^11 + X^13) + · · · .

Hence,

∂A(W,X)/∂W = [X^7 + 2W(X^6 − 3X^8 − X^10) − 3W^2(2X^7 − X^9 − X^11)]/(1 − 2WX − WX^3)^2 + · · ·

and

∂A(W,X)/∂W |_{W=1} = (2X^6 − X^7 − 2X^8 + X^9 + X^11)/(1 − 2X − X^3)^2 = 2X^6 + 7X^7 + 18X^8 + · · · .
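The inequality proved in Problem 12.9 can be spot-checked numerically; a sketch comparing the exact P_d for an even d against the bound 2^d p^{d/2}(1−p)^{d/2} (d = 6 and p = 0.01 are arbitrary illustrative choices):

```python
from math import comb

def pairwise_error_prob(d, p):
    """Exact P_d for even d: half the tie term plus all outvoting terms."""
    assert d % 2 == 0
    tie = 0.5 * comb(d, d // 2) * p ** (d // 2) * (1 - p) ** (d // 2)
    rest = sum(comb(d, e) * p ** e * (1 - p) ** (d - e)
               for e in range(d // 2 + 1, d + 1))
    return tie + rest

d, p = 6, 0.01
bound = 2 ** d * p ** (d / 2) * (1 - p) ** (d / 2)
assert pairwise_error_prob(d, p) < bound  # (12.21) holds for this case
```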
• 10

This yields

(a) P_b(E) < 3.0435 × 10^{−4} for p = 0.01,
(b) P_b(E) < 1.6139 × 10^{−7} for p = 0.001.

12.11 The event error probability is given by (12.26),

P(E) ≈ A_{d_free} [2√(p(1−p))]^{d_free} ≈ A_{d_free} 2^{d_free} p^{d_free/2},

and the bit error probability by (12.30),

P_b(E) ≈ B_{d_free} [2√(p(1−p))]^{d_free} ≈ B_{d_free} 2^{d_free} p^{d_free/2}.

From Problem 12.10, d_free = 6, A_{d_free} = 1, B_{d_free} = 2.

(a) For p = 0.01,

P(E) ≈ 1 × 2^6 × (0.01)^{6/2} = 6.4 × 10^{−5},
P_b(E) ≈ 2 × 2^6 × (0.01)^{6/2} = 1.28 × 10^{−4}.

(b) For p = 0.001,

P(E) ≈ 1 × 2^6 × (0.001)^{6/2} = 6.4 × 10^{−8},
P_b(E) ≈ 2 × 2^6 × (0.001)^{6/2} = 1.28 × 10^{−7}.

12.12 The (3,1,2) encoder of (12.1) has d_free = 7 and B_{d_free} = 1. Thus, expression (12.36) becomes

P_b(E) ≈ B_{d_free} 2^{d_free/2} e^{−(R d_free/2)(E_b/N_0)} = 2^{7/2} e^{−(7/6)(E_b/N_0)}

and (12.37) remains

P_b(E) ≈ (1/2) e^{−E_b/N_0}.

These expressions are plotted versus E_b/N_0 in the figure below.
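The estimates in Problem 12.11 reduce to a one-line computation; a sketch using the stated d_free = 6, A_{d_free} = 1, B_{d_free} = 2:

```python
def error_estimate(coeff, dfree, p):
    """Approximation (12.26)/(12.30): coeff * 2^dfree * p^(dfree/2)."""
    return coeff * 2 ** dfree * p ** (dfree / 2)

dfree, A, B = 6, 1, 2
print(error_estimate(A, dfree, 0.01))    # event error,  p = 0.01:  ≈ 6.4e-05
print(error_estimate(B, dfree, 0.01))    # bit error,    p = 0.01:  ≈ 1.28e-04
print(error_estimate(A, dfree, 0.001))   # event error,  p = 0.001: ≈ 6.4e-08
print(error_estimate(B, dfree, 0.001))   # bit error,    p = 0.001: ≈ 1.28e-07
```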
• 11

[Figure: P_b(E) versus E_b/N_0 (dB) for the coded approximation (12.36) and the uncoded expression (12.37).]
• 12

Equating the above expressions and solving for E_b/N_0 yields

2^{7/2} e^{−(7/6)(E_b/N_0)} = (1/2) e^{−E_b/N_0}
1 = 2^{9/2} e^{−(1/6)(E_b/N_0)}
e^{−(1/6)(E_b/N_0)} = 2^{−9/2}
−(1/6)(E_b/N_0) = ln(2^{−9/2})
E_b/N_0 = −6 ln(2^{−9/2}) = 18.71,

which is E_b/N_0 = 12.72 dB, the coding threshold. The coding gain as a function of P_b(E) is plotted below. [Figure: required SNR, coded and uncoded, and the resulting coding gain versus P_b(E).]

Note that in this example, a short constraint length code (ν = 2) with hard-decision decoding, the approximate expressions for P_b(E) indicate that a positive coding gain is achieved only at very small values of P_b(E), and the asymptotic coding gain is only 0.7 dB.

12.13 The (3,1,2) encoder of Problem 12.1 has d_free = 7 and B_{d_free} = 1. Thus, expression (12.46) for the unquantized AWGN channel becomes

P_b(E) ≈ B_{d_free} e^{−R d_free E_b/N_0} = e^{−(7/3)(E_b/N_0)}

and (12.37) remains

P_b(E) ≈ (1/2) e^{−E_b/N_0}.

These expressions are plotted versus E_b/N_0 in the figure below.
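The hard-decision threshold derived above can be checked directly: equating 2^{7/2} e^{−(7/6)x} with (1/2) e^{−x} gives the closed form x = 27 ln 2. A sketch:

```python
import math

# Coding threshold for the (3,1,2) encoder (Problem 12.12):
# solve 2**(7/2) * exp(-(7/6)*x) == 0.5 * exp(-x) for x = Eb/N0.
x = 27 * math.log(2)            # closed form: -6 * ln(2**(-9/2))
threshold_db = 10 * math.log10(x)

# The two sides really are equal at the threshold.
assert math.isclose(2 ** 3.5 * math.exp(-(7 / 6) * x), 0.5 * math.exp(-x))

print(round(x, 2))              # → 18.71
print(round(threshold_db, 2))   # → 12.72
```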
• 13

[Figure: P_b(E) versus E_b/N_0 (dB) for the coded approximation (12.46) and the uncoded expression (12.37).]

Equating the above expressions and solving for E_b/N_0 yields

e^{−(7/3)(E_b/N_0)} = (1/2) e^{−E_b/N_0}
1 = (1/2) e^{(4/3)(E_b/N_0)}
e^{(4/3)(E_b/N_0)} = 2
(4/3)(E_b/N_0) = ln 2
E_b/N_0 = (3/4) ln 2 = 0.5199,

which is E_b/N_0 = −2.84 dB, the coding threshold. (Note: If the slightly tighter bound on Q(x) from (1.5) is used to form the approximate expression for P_b(E), the coding threshold actually moves to −∞ dB. But this is just an artifact of the bounds, which are not tight for small values of E_b/N_0.)

The coding gain as a function of P_b(E) is plotted below. Note that in this example, a short constraint length code (ν = 2) with soft-decision decoding, the approximate expressions for P_b(E) indicate that a coding gain above 3.0 dB is achieved at moderate values of P_b(E), and the asymptotic coding gain is 3.7 dB.
• 14

[Figure: required SNR, coded and uncoded, and the resulting coding gain versus P_b(E) for Problem 12.13.]
• 15

12.14 The IOWEF of the (3,1,2) encoder of (12.1) is

A(W,X) = W X^7 / (1 − WX − WX^3)

and thus (12.39b) becomes

P_b(E) < B(X)|_{X=D_0} = (1/k) ∂A(W,X)/∂W |_{X=D_0, W=1} = X^7/(1 − WX − WX^3)^2 |_{X=D_0, W=1}.

For the DMC of Problem 12.4, D_0 = 0.42275 and the above expression becomes

P_b(E) < 9.5874 × 10^{−3}.

If the DMC is converted to a BSC, then the resulting crossover probability is p = 0.091. Using (12.29) yields

P_b(E) < B(X)|_{X=2√(p(1−p))} = X^7/(1 − WX − WX^3)^2 |_{X=2√(p(1−p)), W=1} = 3.7096 × 10^{−1},

about a factor of 40 larger than in the soft-decision case.

12.16 For the optimum (2,1,7) encoder in Table 12.1(c), d_free = 10, A_{d_free} = 1, and B_{d_free} = 2.

(a) From Table 12.1(c), γ = 6.99 dB.
(b) Using (12.26) yields P(E) ≈ A_{d_free} 2^{d_free} p^{d_free/2} = 1.02 × 10^{−7}.
(c) Using (12.30) yields P_b(E) ≈ B_{d_free} 2^{d_free} p^{d_free/2} = 2.04 × 10^{−7}.
(d) For this encoder, G^{−1} = [D^2  1 + D + D^2]^T and the amplification factor is A = 4.

For the quick-look-in (2,1,7) encoder in Table 12.2, d_free = 9, A_{d_free} = 1, and B_{d_free} = 1.

(a) From Table 12.2, γ = 6.53 dB.
(b) Using (12.26) yields P(E) ≈ A_{d_free} 2^{d_free} p^{d_free/2} = 5.12 × 10^{−7}.
(c) Using (12.30) yields P_b(E) ≈ B_{d_free} 2^{d_free} p^{d_free/2} = 5.12 × 10^{−7}.
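The soft-decision asymptotic coding gains quoted in Problems 12.16 and 12.17 all follow from γ = 10 log10(R · d_free); a sketch:

```python
import math

def asymptotic_gain_db(R, dfree):
    """Soft-decision asymptotic coding gain, gamma = 10*log10(R*dfree)."""
    return 10 * math.log10(R * dfree)

print(round(asymptotic_gain_db(1 / 2, 10), 2))  # optimum (2,1,7): → 6.99
print(round(asymptotic_gain_db(1 / 2, 9), 2))   # quick-look-in:   → 6.53
print(round(asymptotic_gain_db(1 / 2, 6), 2))   # Problem 12.17:   → 4.77
```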
• 16

(d) For this encoder, G^{−1} = [1  1]^T and the amplification factor is A = 2.

12.17 The generator matrix of a rate R = 1/2 systematic feedforward encoder is of the form

G = [ 1   g^(1)(D) ].

Letting g^(1)(D) = 1 + D + D^2 + D^5 + D^7 achieves d_free = 6 with B_{d_free} = 1 and A_{d_free} = 1.

(a) The soft-decision asymptotic coding gain is γ = 4.77 dB.
(b) Using (12.26) yields P(E) ≈ A_{d_free} 2^{d_free} p^{d_free/2} = 6.4 × 10^{−5}.
(c) Using (12.30) yields P_b(E) ≈ B_{d_free} 2^{d_free} p^{d_free/2} = 6.4 × 10^{−5}.
(d) For this encoder (and all systematic encoders), G^{−1} = [1  0]^T and the amplification factor is A = 1.

12.18 The generator polynomial of the (15,7) BCH code is

g(X) = 1 + X^4 + X^6 + X^7 + X^8

and d_g = 5. The generator polynomial of the dual code is

h(X) = (X^15 + 1)/(X^8 + X^7 + X^6 + X^4 + 1) = X^7 + X^6 + X^4 + 1

and hence d_h ≥ 4.

(a) The rate R = 1/2 code with composite generator polynomial g(D) = 1 + D^4 + D^6 + D^7 + D^8 has generator matrix

G(D) = [ 1 + D^2 + D^3 + D^4   D^3 ]

and d_free ≥ min(5, 8) = 5.

(b) The rate R = 1/4 code with composite generator polynomial g(D) = g(D^2) + D h(D^2) = 1 + D + D^8 + D^9 + D^12 + D^13 + D^14 + D^15 + D^16 has generator matrix

G(D) = [ 1 + D^2 + D^3 + D^4   1 + D^2 + D^3   D^3   D^3 ]

and d_free ≥ min(d_g + d_h, 3d_g, 3d_h) = min(9, 15, 12) = 9.
• 17

The generator polynomial of the (31,16) BCH code is

g(X) = 1 + X + X^2 + X^3 + X^5 + X^7 + X^8 + X^9 + X^10 + X^11 + X^15

and d_g = 7. The generator polynomial of the dual code is

h(X) = (X^31 + 1)/g(X) = X^16 + X^12 + X^11 + X^10 + X^9 + X^4 + X + 1

and hence d_h ≥ 6.

(a) The rate R = 1/2 code with composite generator polynomial g(D) = 1 + D + D^2 + D^3 + D^5 + D^7 + D^8 + D^9 + D^10 + D^11 + D^15 has generator matrix

G(D) = [ 1 + D + D^4 + D^5   1 + D + D^2 + D^3 + D^4 + D^5 + D^7 ]

and d_free ≥ min(7, 12) = 7.

(b) The rate R = 1/4 code with composite generator polynomial g(D) = g(D^2) + D h(D^2) = 1 + D + D^2 + D^3 + D^4 + D^6 + D^9 + D^10 + D^14 + D^16 + D^18 + D^19 + D^20 + D^21 + D^22 + D^23 has generator matrix

G(D) = [ 1 + D + D^4 + D^5   1 + D^2 + D^5 + D^6 + D^8   1 + D + D^2 + D^3 + D^4 + D^5 + D^7   1 + D^5 ]

and d_free ≥ min(d_g + d_h, 3d_g, 3d_h) = min(13, 21, 18) = 13.

12.20 (a) The augmented state diagram is shown below. [Figure: augmented state diagram with states S0, S1, S2, S3 and branch gains WX^3L, X^2L, WXL, and X^2L.]

The generating function is given by

A(W,X,L) = Σ_i F_i Δ_i / Δ.

There are 3 cycles in the graph:
• 18

Cycle 1: S1S2S1, C1 = W X^3 L^2.
Cycle 2: S1S3S2S1, C2 = W^2 X^4 L^3.
Cycle 3: S3S3, C3 = W X L.

There is one pair of nontouching cycles:

Cycle pair 1: (cycle 1, cycle 3), C1C3 = W^2 X^4 L^3.

There are no other sets of nontouching cycles. Therefore,

Δ = 1 − Σ_i C_i + Σ_{i',j'} C_{i'} C_{j'} = 1 − (W X^3 L^2 + W^2 X^4 L^3 + W X L) + W^2 X^4 L^3.

There are 2 forward paths:

Forward path 1: S0S1S2S0, F1 = W X^7 L^3.
Forward path 2: S0S1S3S2S0, F2 = W^2 X^8 L^4.

Only cycle 3 does not touch forward path 1, and hence Δ_1 = 1 − W X L. Forward path 2 touches all the cycles, and hence Δ_2 = 1.

Finally, the WEF A(W,X,L) is given by

A(W,X,L) = [W X^7 L^3 (1 − W X L) + W^2 X^8 L^4] / [1 − (W X^3 L^2 + W^2 X^4 L^3 + W X L) + W^2 X^4 L^3]
         = W X^7 L^3 / (1 − W X L − W X^3 L^2),

and the generating WEFs A_i(W,X,L) are given by:

A_1(W,X,L) = W X^3 L (1 − W X L)/Δ = W X^3 L (1 − W X L)/(1 − W X L − W X^3 L^2)
           = W X^3 L + W^2 X^6 L^3 + W^3 X^7 L^4 + (W^3 X^9 + W^4 X^8) L^5 + (2W^4 X^10 + W^5 X^9) L^6 + · · ·

A_2(W,X,L) = [W X^5 L^2 (1 − W X L) + W^2 X^6 L^3]/Δ = W X^5 L^2/(1 − W X L − W X^3 L^2)
           = W X^5 L^2 + W^2 X^6 L^3 + (W^2 X^8 + W^3 X^7) L^4 + (2W^3 X^9 + W^4 X^8) L^5 + (W^3 X^11 + 3W^4 X^10 + W^5 X^9) L^6 + · · ·

A_3(W,X,L) = W^2 X^4 L^2/Δ = W^2 X^4 L^2/(1 − W X L − W X^3 L^2)
           = W^2 X^4 L^2 + W^3 X^5 L^3 + (W^3 X^7 + W^4 X^6) L^4 + (2W^4 X^8 + W^5 X^7) L^5 + (W^4 X^10 + 3W^5 X^9 + W^6 X^8) L^6 + · · ·

(b) This code has d_free = 7, so τ_min is the minimum value of τ for which d(τ) = d_free + 1 = 8. Examining the series expansions of A_1(W,X,L), A_2(W,X,L), and A_3(W,X,L) above yields τ_min = 5.
• 19

(c) A table of d(τ) and A_{d(τ)} is given below.

τ    d(τ)    A_{d(τ)}
0     3        1
1     4        1
2     5        1
3     6        1
4     7        1
5     8        1

(d) From part (c) and by looking at the series expansion of A_3(W,X,L), it can be seen that d(τ) = τ + 3, which grows without bound as τ → ∞.

12.21 For a BSC, the trellis diagram of Figure 12.6 in the book may be used to decode the three possible 21-bit subsequences using the Hamming metric. The results are shown in the three figures below. Since the r used in the middle figure (b) has the smallest Hamming distance (2) of the three subsequences, it is the most likely to be correctly synchronized.

[Figure (a): decoding trellis for r = (011, 100, 110, 010, 110, 010, 001), with cumulative Hamming-distance metrics at each state.]
• 20

[Figure (b): decoding trellis for r = (111, 001, 100, 101, 100, 100, 011); the best path has Hamming distance 2.]
• 21

[Figure (c): decoding trellis for r = (110, 011, 001, 011, 001, 000, 111).]
• Chapter 13 Suboptimum Decoding of Convolutional Codes

13.1 (a) Referring to the state diagram of Figure 11.13(a), the code tree for an information sequence of length h = 4 is shown below. [Figure: code tree with branches labeled by output pairs.]

(b) For u = (1001), the corresponding codeword is v = (11, 01, 11, 00, 01, 11, 11), as shown below.
• 2

13.2 Proof: A binary-input, Q-ary-output DMC is symmetric if

P(r_ℓ = j|0) = P(r_ℓ = Q − 1 − j|1)        (a-1)

for j = 0, 1, . . . , Q − 1. The output probability distribution may be computed as

P(r_ℓ = j) = Σ_{i=0}^{1} P(r_ℓ = j|i) P(i),

where i is the binary input to the channel. For equally likely input signals, P(1) = P(0) = 0.5 and

P(r_ℓ = j) = P(r_ℓ = j|0)P(0) + P(r_ℓ = j|1)P(1)
           = 0.5 [P(r_ℓ = j|0) + P(r_ℓ = j|1)]
           = 0.5 [P(r_ℓ = Q − 1 − j|0) + P(r_ℓ = Q − 1 − j|1)]        [using (a-1)]
           = P(r_ℓ = Q − 1 − j).        Q.E.D.

13.3 (a) Computing the Fano metric with p = 0.045 results in the metric table:

         0        1
0      0.434   −3.974
1     −3.974    0.434

Dividing by 0.434 results in the following integer metric table:

        0     1
0       1    −9
1      −9     1

(b) Decoding r = (11, 00, 11, 00, 01, 10, 11) using the stack algorithm results in the following eight steps:

Step 1 Step 2 Step 3 Step 4 Step 5 Step 6 Step 7 Step 8 1(-2) 11(-6) 10(-6) 100(-4) 1001(-2) 10010(0) 100100(-8) 1001000(-6) 0(-18) 10(-6) 111(-14) 111(-14) 111(-14) 0(-18) 110(-14) 110(-14) 110(-14) 0(-18) 0(-18) 0(-18) 101(-24) 1000(-22) 101(-24)

The decoded sequence is v̂ = (11, 01, 11, 00, 01, 11, 11) and û = (1001). The Viterbi algorithm requires fifteen steps to decode the same sequence.

(c) Decoding r = (11, 10, 00, 01, 10, 01, 00) using the stack algorithm results in the following sixteen steps:
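The symmetry argument in Problem 13.2 can be checked numerically. A sketch using a hypothetical binary-input, Q = 4-output DMC (the conditional probabilities below are invented for illustration):

```python
# Conditional probabilities P(r = j | 0) for a made-up symmetric DMC.
p_given_0 = [0.434, 0.287, 0.180, 0.099]
Q = len(p_given_0)

# Symmetry condition (a-1): P(r = j | 1) = P(r = Q-1-j | 0).
p_given_1 = [p_given_0[Q - 1 - j] for j in range(Q)]

# With equally likely inputs, the output distribution is symmetric in j.
p_out = [0.5 * (p_given_0[j] + p_given_1[j]) for j in range(Q)]
assert all(abs(p_out[j] - p_out[Q - 1 - j]) < 1e-12 for j in range(Q))
assert abs(sum(p_out) - 1.0) < 1e-9
```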
• 3

Step 1 Step 2 Step 3 Step 4 Step 5 Step 6 Step 7 Step 8 1(-2) 11(4) 111(-4) 1110(-2) 110(-4) 11100(-10) 1100(-12) 1101(-12) 0(-18) 10(-16) 110(-4) 110(-4) 11100(-10) 1100(-12) 1101(-12) 10(-16) 0(-18) 10(-16) 10(-16) 10(-16) 1101(-12) 10(-16) 111000(-18) 0(-18) 0(-18) 0(-18) 10(-16) 111000(-18) 0(-16) 1111(-22) 1111(-22) 0(-18) 0(-18) 11000(-20)

Step 9 Step 10 Step 11 Step 12 Step 13 Step 14 Step 15 Step 16 11010(-10) 10(-16) 101(-14) 1011(-12) 10110(-10) 101100(-18) 111000(-18) 1110000(-16) 10(-16) 111000(-18) 111000(-18) 111000(-18) 111000(-18) 111000(-18) 110100(-18) 110100(-18) 111000(-18) 110100(-18) 110100(-18) 110100(-18) 110100(-18) 110100(-18) 0(-18) 0(-18) 0(-18) 0(-18) 0(-18) 0(-18) 0(-18) 0(-18) 11000(-20) 11000(-20) 11000(-20) 11000(-20) 11000(-20) 11000(-20) 11000(-20) 11000(-20) 1010(-32) 1010(-32) 100(-34) 1010(-32) 1010(-32) 1010(-32) 100(-34) 100(-34) 100(-34) 100(-34) 100(-34) 1011000(-36) 1011000(-36)

The decoded sequence is v̂ = (11, 10, 01, 01, 00, 11, 00) and û = (1110). This agrees with the result of Problem 12.6.

13.4 (a) Computing the Fano metric using

M(r_ℓ|v_ℓ) = log_2 [P(r_ℓ|v_ℓ)/P(r_ℓ)] − R

with R = 1/2 and

P(0_1) = P(1_1) = 0.218, P(0_2) = P(1_2) = 0.1025, P(0_3) = P(1_3) = 0.095, P(0_4) = P(1_4) = 0.0845

results in:

       0_1      0_2      0_3      0_4      1_4      1_3      1_2      1_1
0    0.494    0.443    0.314   −0.106   −1.043   −2.66    −4.07    −7.268
1   −7.268   −4.07    −2.66    −1.043   −0.106    0.314    0.443    0.494

Multiplying by 3/0.106 yields the following integer metric table:

       0_1    0_2    0_3    0_4    1_4    1_3    1_2    1_1
0      14     12      9     −3    −30    −75   −115   −167
1    −167   −115    −75    −30     −3      9     12     14
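The metric tables in Problems 13.3 and 13.4 both come from M(r_ℓ|v_ℓ) = log_2[P(r_ℓ|v_ℓ)/P(r_ℓ)] − R; for the BSC of Problem 13.3 (p = 0.045, R = 1/2, so P(r_ℓ) = 1/2) this is a two-line computation:

```python
import math

def bsc_fano_metrics(p, R):
    """Fano metric of a matching and a differing bit on a BSC, P(r) = 1/2."""
    match = math.log2((1 - p) / 0.5) - R
    differ = math.log2(p / 0.5) - R
    return match, differ

match, differ = bsc_fano_metrics(p=0.045, R=0.5)
print(round(match, 3))   # → 0.434
print(round(differ, 3))  # → -3.974
```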
• 4

(b) Decoding r = (1_2 1_1, 1_2 0_1, 0_3 0_1, 0_1 1_3, 1_2 0_2, 0_3 1_1, 0_3 0_2) using the stack algorithm results in the following thirteen steps:

Step 1 Step 2 Step 3 Step 4 Step 5 Step 6 Step 7 1(26) 11(52) 110(-9) 1100(-70) 111(-106) 1110(-83) 1101(-167) 0(-282) 10(-256) 111(-106) 111(-106) 1101(-167) 11000(-173) 11100(-186) 0(-282) 10(-256) 1101(-167) 11000(-173) 11000(-173) 11100(-186) 0(-282) 10(-256) 10(-256) 10(-256) 10(-256) 0(-282) 0(-282) 0(-282) 0(-282) 1111(-348) 1111(-348)

Step 8 Step 9 Step 10 Step 11 Step 12 Step 13 11010(-143) 11000(-173) 11100(-186) 110100(-204) 111000(-247) 1110000(-226) 11000(-173) 11100(-186) 110100(-204) 111000(-247) 10(-256) 11100(-186) 110100(-204) 10(-256) 10(-256) 0(-282) 10(-256) 10(-256) 0(-282) 0(-282) 110000(-348) 0(-282) 0(-282) 110000(-331) 110000(-331) 1111(-348) 1111(-348) 1111(-348) 1111(-348) 1111(-348) 1101000(-394)

The decoded sequence is v̂ = (11, 10, 01, 01, 00, 11, 00) and û = (1110). This agrees with the result of Problem 12.5(b).

13.6 As can be seen from the solution to Problem 13.3, the final decoded path never falls below the second stack entry (part (b)) or the fourth stack entry (part (c)). Thus, for a stack size of 10 entries, the final decoded path is not affected in either case.

13.7 From Example 13.5, the integer metric table is

        0     1
0       1    −5
1      −5     1
• 5

(a) Decoding the received sequence r = (010, 010, 001, 110, 100, 101, 011) using the stack bucket algorithm with an interval of 5 results in the following decoding steps:

Interval 1 Interval 2 Interval 3 Interval 4 Interval 5 Interval 6 Interval 7
9 to 5 4 to 0 -1 to -5 -6 to -10 -11 to -15 -16 to -20 -21 to -25
Step 1 0(-3) 1(-9)
Step 2 00(-6) 01(-12) 1(-9)
Step 3 1(-9) 000(-12) 001(-15) 01(-12)
Step 4 11(-6) 000(-12) 10(-24) 001(-15) 01(-12)
Step 5 111(-3) 000(-12) 110(-21) 001(-15) 10(-24) 01(-12)
Step 6 1110(0) 000(-12) 1111(-18) 110(-21) 001(-15) 10(-24) 01(-12)
Step 7 11101(3) 11100(-15) 1111(-18) 110(-21) 000(-12) 10(-24) 001(-15) 01(-12)
Step 8 111010(6) 11100(-15) 1111(-18) 110(-21) 000(-12) 10(-24) 001(-15) 01(-12)
Step 9 1110100(6) 11100(-15) 1111(-18) 110(-21) 000(-12) 10(-24) 001(-15) 01(-12)

The decoded sequence is v̂ = (111, 010, 001, 110, 100, 101, 011) and û = (11101), which agrees with the result of Example 13.5.
• 6

(b) Decoding the received sequence r = (010, 010, 001, 110, 100, 101, 011) using the stack bucket algorithm with an interval of 9 results in the following decoding steps:

Interval 1 Interval 2 Interval 3 Interval 4
8 to 0 -1 to -9 -10 to -18 -19 to -27
Step 1 0(-3) 1(-9)
Step 2 00(-6) 01(-12) 1(-9)
Step 3 1(-9) 000(-12) 001(-15) 01(-12)
Step 4 11(-6) 000(-12) 10(-24) 001(-15) 01(-12)
Step 5 111(-3) 000(-12) 110(-21) 001(-15) 10(-24) 01(-12)
Step 6 1110(0) 1111(-18) 110(-21) 000(-12) 10(-24) 001(-15) 01(-12)
Step 7 11101(3) 11100(-15) 110(-21) 1111(-18) 10(-24) 000(-12) 001(-15) 01(-12)
Step 8 111010(6) 11100(-15) 110(-21) 1111(-18) 10(-24) 000(-12) 001(-15) 01(-12)
Step 9 1110100(9) 11100(-15) 110(-21) 1111(-18) 10(-24) 000(-12) 001(-15) 01(-12)

The decoded sequence is v̂ = (111, 010, 001, 110, 100, 101, 011) and û = (11101), which agrees with the result of Example 13.5 and part (a).
• 7

13.8 (i) Δ = 5

Step Look MF MB Node Metric T
0 - - - X 0 0
1 LFB -3 −∞ X 0 -5
2 LFB -3 - 0 -3 -5
3 LFB -6 0 X 0 -5
4 LFNB -9 −∞ X 0 -10
5 LFB -3 - 0 -3 -10
6 LFB -6 - 00 -6 -10
7 LFB -9 - 000 -9 -10
8 LFB -12 -9 00 -6 -10
9 LFNB -15 -3 0 -3 -10
10 LFNB -12 0 X 0 -10
11 LFNB -9 - 1 -9 -10
12 LFB -6 - 11 -6 -10
13 LFB -3 - 111 -3 -5
14 LFB 0 - 1110 0 0
15 LFB 3 - 11101 3 0
16 LFB 6 - 111010 6 5
17 LFB 9 - 1110100 9 Stop

The decoded sequence is v̂ = (111, 010, 001, 110, 100, 101, 011) and û = (11101), which agrees with the result of Example 13.5.

(ii) Δ = 10

Step Look MF MB Node Metric T
0 - - - X 0 0
1 LFB -3 −∞ X 0 -10
2 LFB -3 - 0 -3 -10
3 LFB -6 - 00 -6 -10
4 LFB -9 - 000 -9 -10
5 LFB -12 -6 00 -6 -10
6 LFNB -15 -3 0 -3 -10
7 LFNB -12 0 X 0 -10*
8 LFNB -9 - 1 -9 -10
9 LFB -6 - 11 -6 -10
10 LFB -3 - 111 -3 -10
11 LFB 0 - 1110 0 0
12 LFB 3 - 11101 3 0
13 LFB 6 - 111010 6 0
14 LFB 9 - 1110100 9 Stop

* Not the first visit; therefore no tightening is done.

Note: The final decoded path agrees with the results of Examples 13.7 and 13.8 and Problem 13.7. The number of computations in both cases is reduced compared to Examples 13.7 and 13.8.
• 8

13.20 (a) g^(1)(D) = 1 + D + D^3 + D^5 + D^8 + D^9 + D^10 + D^11. Therefore, with columns ordered e_0^(0), e_0^(1), e_1^(0), e_1^(1), . . . , the parity-check matrix is

H =
[ 1 1
  1 0 1 1
  0 0 1 0 1 1
  1 0 0 0 1 0 1 1
  0 0 1 0 0 0 1 0 1 1
  1 0 0 0 1 0 0 0 1 0 1 1
  0 0 1 0 0 0 1 0 0 0 1 0 1 1
  0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 1
  1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 1
  1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 1
  1 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 1
  1 0 1 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 1
  . . . ]

(b) Using the parity triangle of the code, we obtain:

s_0 = e_0^(0) + e_0^(1)
s_1 = e_0^(0) + e_1^(0) + e_1^(1)
s_2 = e_1^(0) + e_2^(0) + e_2^(1)
s_3 = e_0^(0) + e_2^(0) + e_3^(0) + e_3^(1)
s_4 = e_1^(0) + e_3^(0) + e_4^(0) + e_4^(1)
s_5 = e_0^(0) + e_2^(0) + e_4^(0) + e_5^(0) + e_5^(1)
s_6 = e_1^(0) + e_3^(0) + e_5^(0) + e_6^(0) + e_6^(1)
s_7 = e_2^(0) + e_4^(0) + e_6^(0) + e_7^(0) + e_7^(1)
s_8 = e_0^(0) + e_3^(0) + e_5^(0) + e_7^(0) + e_8^(0) + e_8^(1)
s_9 = e_0^(0) + e_1^(0) + e_4^(0) + e_6^(0) + e_8^(0) + e_9^(0) + e_9^(1)
s_10 = e_0^(0) + e_1^(0) + e_2^(0) + e_5^(0) + e_7^(0) + e_9^(0) + e_10^(0) + e_10^(1)
s_11 = e_0^(0) + e_1^(0) + e_2^(0) + e_3^(0) + e_6^(0) + e_8^(0) + e_10^(0) + e_11^(0) + e_11^(1)
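The syndrome equations above all follow from s_l = Σ_j g_j e^(0)_{l−j} + e^(1)_l, with g_j the coefficients of g^(1)(D); a sketch that regenerates which information error bits each syndrome checks:

```python
# Exponents of g1(D) = 1 + D + D^3 + D^5 + D^8 + D^9 + D^10 + D^11.
taps = {0, 1, 3, 5, 8, 9, 10, 11}

def syndrome_checks(l):
    """Indices i such that e0_i appears in s_l (e1_l is always present)."""
    return sorted(l - j for j in taps if j <= l)

assert syndrome_checks(0) == [0]
assert syndrome_checks(3) == [0, 2, 3]
assert syndrome_checks(5) == [0, 2, 4, 5]
assert syndrome_checks(11) == [0, 1, 2, 3, 6, 8, 10, 11]
```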
• 9

(c) The effects of error bits prior to time unit ℓ are removed. Thus, the modified syndrome bits are given by the following equations:

s′_ℓ = e_ℓ^(0) + e_ℓ^(1)
s′_{ℓ+1} = e_ℓ^(0) + e_{ℓ+1}^(0) + e_{ℓ+1}^(1)
s′_{ℓ+2} = e_{ℓ+1}^(0) + e_{ℓ+2}^(0) + e_{ℓ+2}^(1)
s′_{ℓ+3} = e_ℓ^(0) + e_{ℓ+2}^(0) + e_{ℓ+3}^(0) + e_{ℓ+3}^(1)
s′_{ℓ+4} = e_{ℓ+1}^(0) + e_{ℓ+3}^(0) + e_{ℓ+4}^(0) + e_{ℓ+4}^(1)
s′_{ℓ+5} = e_ℓ^(0) + e_{ℓ+2}^(0) + e_{ℓ+4}^(0) + e_{ℓ+5}^(0) + e_{ℓ+5}^(1)
s′_{ℓ+6} = e_{ℓ+1}^(0) + e_{ℓ+3}^(0) + e_{ℓ+5}^(0) + e_{ℓ+6}^(0) + e_{ℓ+6}^(1)
s′_{ℓ+7} = e_{ℓ+2}^(0) + e_{ℓ+4}^(0) + e_{ℓ+6}^(0) + e_{ℓ+7}^(0) + e_{ℓ+7}^(1)
s′_{ℓ+8} = e_ℓ^(0) + e_{ℓ+3}^(0) + e_{ℓ+5}^(0) + e_{ℓ+7}^(0) + e_{ℓ+8}^(0) + e_{ℓ+8}^(1)
s′_{ℓ+9} = e_ℓ^(0) + e_{ℓ+1}^(0) + e_{ℓ+4}^(0) + e_{ℓ+6}^(0) + e_{ℓ+8}^(0) + e_{ℓ+9}^(0) + e_{ℓ+9}^(1)
s′_{ℓ+10} = e_ℓ^(0) + e_{ℓ+1}^(0) + e_{ℓ+2}^(0) + e_{ℓ+5}^(0) + e_{ℓ+7}^(0) + e_{ℓ+9}^(0) + e_{ℓ+10}^(0) + e_{ℓ+10}^(1)
s′_{ℓ+11} = e_ℓ^(0) + e_{ℓ+1}^(0) + e_{ℓ+2}^(0) + e_{ℓ+3}^(0) + e_{ℓ+6}^(0) + e_{ℓ+8}^(0) + e_{ℓ+10}^(0) + e_{ℓ+11}^(0) + e_{ℓ+11}^(1)

13.28 (a) Using either of the methods described in Examples 13.15 and 13.16, we arrive at the conclusion that d_min = 7.

(b) From the parity triangle, we can see that this code is not self-orthogonal (for instance, s_3 and s_5 both check information error bit e_2^(0)).

(c) From the parity triangle shown below, we see that the maximum number of orthogonal parity checks that can be formed on e_0^(0) is 4: {s_0, s_1, s_2 + s_11, s_5}.

1 1 0 1 0 1 0 0 1 1 1 1
1 1 0 1 0 1 0 0 1 1 1
1 1 0 1 0 1 0 0 1 1
1 1 0 1 0 1 0 0 1
1 1 0 1 0 1 0 0
1 1 0 1 0 1 0
1 1 0 1 0 1
1 1 0 1 0
1 1 0 1
1 1 0
1 1
1

(d) Because of (c), this code is not completely orthogonalizable.
• 10

13.32 (a) According to Table 13.2(a), there is only one rate R = 1/2 self-orthogonal code with d_min = 9: the (2,1,35) code with g^(1)(D) = 1 + D^7 + D^10 + D^16 + D^18 + D^30 + D^31 + D^35 and m^(a) = 35.

(b) According to Table 13.3(a), there is only one rate R = 1/2 orthogonalizable code with d_min = 9: the (2,1,21) code with g^(1)(D) = 1 + D^11 + D^13 + D^16 + D^17 + D^19 + D^20 + D^21 and m^(b) = 21.

(c) The best rate R = 1/2 systematic code with d_min = 9 is the (2,1,15) code with g^(1)(D) = 1 + D + D^3 + D^5 + D^7 + D^8 + D^11 + D^13 + D^14 + D^15 and m^(c) = 15.*

(d) According to Table 12.1(c), the best rate R = 1/2 nonsystematic code with d_free = 9 (actually, 10 in this case) is the (2,1,6) code with g^(0)(D) = 1 + D + D^2 + D^3 + D^6, g^(1)(D) = 1 + D^2 + D^3 + D^5 + D^6, and m^(d) = 6. Thus m^(d) < m^(c) < m^(b) < m^(a).

* This generator polynomial can be found in Table 13.1 of the first edition of this text.

13.34 (a) This self-orthogonal code with g^(1)(D) = 1 + D^2 + D^7 + D^13 + D^16 + D^17 yields the parity triangle (arrows mark the rows giving checks on e_0^(0)):

→ 1
  0 1
→ 1 0 1
  0 1 0 1
  0 0 1 0 1
  0 0 0 1 0 1
  0 0 0 0 1 0 1
→ 1 0 0 0 0 1 0 1
  0 1 0 0 0 0 1 0 1
  0 0 1 0 0 0 0 1 0 1
  0 0 0 1 0 0 0 0 1 0 1
  0 0 0 0 1 0 0 0 0 1 0 1
  0 0 0 0 0 1 0 0 0 0 1 0 1
→ 1 0 0 0 0 0 1 0 0 0 0 1 0 1
  0 1 0 0 0 0 0 1 0 0 0 0 1 0 1
  0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1
→ 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1
→ 1 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1

The orthogonal check sums on e_0^(0) are {s_0, s_2, s_7, s_13, s_16, s_17}.
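Self-orthogonality of the code in Problem 13.34 can be verified mechanically: the checks {s_t : t a tap of g^(1)(D)} are orthogonal on e^(0)_0 exactly when every other information error bit appears in at most one of them, i.e., when all pairwise differences of tap positions are distinct. A sketch:

```python
taps = [0, 2, 7, 13, 16, 17]  # exponents of g1(D) = 1 + D^2 + D^7 + D^13 + D^16 + D^17

# s_t checks e0_{t-j} for every tap j <= t; every s_t contains e0_0.
checks = {t: {t - j for j in taps if j <= t} for t in taps}
assert all(0 in checks[t] for t in taps)

# Orthogonal on e0_0: any other bit index appears in at most one check sum.
others = [i for t in taps for i in checks[t] if i != 0]
assert len(others) == len(set(others)), "not self-orthogonal"
```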
• 11

(b) The block diagram of the feedback majority-logic decoder for this code is shown below. [Figure: syndrome register formed from r^(0) and r^(1), taps at the six check positions feeding a majority gate, and feedback of the error estimate ê_ℓ^(0).]

13.37 (a) This orthogonalizable code with g^(1)(D) = 1 + D^6 + D^7 + D^9 + D^10 + D^11 yields the parity triangle:

1 0 0 0 0 0 1 1 0 1 1 1
1 0 0 0 0 0 1 1 0 1 1
1 0 0 0 0 0 1 1 0 1
1 0 0 0 0 0 1 1 0
1 0 0 0 0 0 1 1
1 0 0 0 0 0 1
1 0 0 0 0 0
1 0 0 0 0
1 0 0 0
1 0 0
1 0
1

The orthogonal check sums on e_0^(0) are {s_0, s_6, s_7, s_9, s_1 + s_3 + s_10, s_4 + s_8 + s_11}.
• 12

(b) The block diagram of the feedback majority-logic decoder for this code is shown below. [Figure: syndrome register formed from r^(0) and r^(1), modulo-2 adders forming the check sums s_0, s_6, s_7, s_9, s_1 + s_3 + s_10, and s_4 + s_8 + s_11, a majority gate, and feedback of the error estimate ê_ℓ^(0).]
• Chapter 16 Turbo Coding

16.1 Let u = [u_0, u_1, . . . , u_{K−1}] be the input to encoder 1. Then v^(1) = [v^(1)_0, v^(1)_1, . . . , v^(1)_{K−1}] = u G^(1) is the output of encoder 1, where G^(1) is the K × K parity generator matrix for encoder 1, i.e., v^(1) = u G^(1) is the parity sequence produced by encoder 1. For example, for the (2,1,2) systematic feedback encoder with

G^(1)(D) = [ 1   (1 + D^2)/(1 + D + D^2) ],

the parity generator matrix is given by the K × K upper-triangular Toeplitz matrix

G^(1) =
[ 1 1 1 0 1 1 0 1 1 · · ·
    1 1 1 0 1 1 0 1 · · ·
      1 1 1 0 1 1 0 · · ·
        1 1 1 0 1 1 · · ·
          · · ·           ].

Now let u′ = [u′_0, u′_1, . . . , u′_{K−1}] be the (interleaved) input to encoder 2. Then we can write u′ = u T, where T is a K × K permutation matrix with a single one in each row and column. (If T is the identity matrix, then u′ = u.) The output of encoder 2 is then given by v^(2) = [v^(2)_0, v^(2)_1, . . . , v^(2)_{K−1}] = u′ G^(2) = u T G^(2), where G^(2) is the K × K parity generator matrix for encoder 2.
• 2

Now consider two input sequences u_1 and u_2, both of length K. The corresponding encoder outputs in each case are given by the 3-tuples

[u_1, u_1 G^(1), u′_1 G^(2)] = [u_1, u_1 G^(1), u_1 T G^(2)] ≜ y_1

and

[u_2, u_2 G^(1), u′_2 G^(2)] = [u_2, u_2 G^(1), u_2 T G^(2)] ≜ y_2.

Now consider the input sequence u = u_1 + u_2. The encoder output 3-tuple in this case is given by

[u, u G^(1), u′ G^(2)] = [u, u G^(1), u T G^(2)]
= [u_1 + u_2, u_1 G^(1) + u_2 G^(1), u_1 T G^(2) + u_2 T G^(2)]
= [u_1, u_1 G^(1), u_1 T G^(2)] + [u_2, u_2 G^(1), u_2 T G^(2)]
= y_1 + y_2.

Thus superposition holds and the turbo encoder is linear.

16.3 Define C1 to be the (7,4,3) Hamming code and C2 to be the (8,4,4) extended Hamming code.

C1 codeword list:

0000000  1101000  0111001  1010001
1011010  0110010  1100011  0001011
1110100  0011100  1001101  0100101
0101110  1000110  0010111  1111111

The CWEF for C1 is given by

A_1^{C1}(Z) = 3Z^2 + Z^3
A_2^{C1}(Z) = 3Z + 3Z^2
A_3^{C1}(Z) = 1 + 3Z
A_4^{C1}(Z) = Z^3
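The uniform-interleaver combination rule used next, A_w^{PC}(Z) = A_w^{C1}(Z)·A_w^{C2}(Z)/C(K,w) (equation (16.23), with K = 4 here), is easy to script; a sketch with polynomials stored as exponent-to-coefficient dicts:

```python
from math import comb

def poly_mul(a, b):
    """Multiply two polynomials stored as {exponent: coefficient} dicts."""
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            out[ea + eb] = out.get(ea + eb, 0) + ca * cb
    return out

def pcbc_cwef(a1, a2, K, w):
    """Uniform-interleaver CWEF: A_w^C1(Z) * A_w^C2(Z) / C(K, w)."""
    prod = poly_mul(a1, a2)
    return {e: c / comb(K, w) for e, c in prod.items()}

# w = 1 terms of the (7,4,3) Hamming and (8,4,4) extended Hamming codes:
A1_C1 = {2: 3, 3: 1}   # 3Z^2 + Z^3
A1_C2 = {3: 4}         # 4Z^3
print(pcbc_cwef(A1_C1, A1_C2, K=4, w=1))  # → {5: 3.0, 6: 1.0}, i.e. 3Z^5 + Z^6
```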
• 3

C2 codeword list:

00000000  11010001  01110010  10100011
10110100  01100101  11000110  00010111
11101000  00111001  10011010  01001011
01011100  10001101  00101110  11111111

The CWEF for C2 is given by

A_1^{C2}(Z) = 4Z^3
A_2^{C2}(Z) = 6Z^2
A_3^{C2}(Z) = 4Z
A_4^{C2}(Z) = Z^4

The CWEF for the PCBC is given by equation (16.23) as A_w^{PC}(Z) = A_w^{C1}(Z) A_w^{C2}(Z) / C(K, w):

A_1^{PC}(Z) = (3Z^2 + Z^3)(4Z^3)/4 = 3Z^5 + Z^6
A_2^{PC}(Z) = (3Z + 3Z^2)(6Z^2)/6 = 3Z^3 + 3Z^4
A_3^{PC}(Z) = (1 + 3Z)(4Z)/4 = Z + 3Z^2
A_4^{PC}(Z) = (Z^3)(Z^4)/1 = Z^7

The bit CWEFs are given by B_w^{PC}(Z) = (w/K) A_w^{PC}(Z):

B_1^{PC}(Z) = (1/4)(3Z^5 + Z^6) = 0.75Z^5 + 0.25Z^6
B_2^{PC}(Z) = (2/4)(3Z^3 + 3Z^4) = 1.5Z^3 + 1.5Z^4
B_3^{PC}(Z) = (3/4)(Z + 3Z^2) = 0.75Z + 2.25Z^2
• 4

B_4^{PC}(Z) = (4/4)(Z^7) = Z^7

The IRWEFs are given by

A^{PC}(W,Z) = W(3Z^5 + Z^6) + W^2(3Z^3 + 3Z^4) + W^3(Z + 3Z^2) + W^4 Z^7
B^{PC}(W,Z) = W(0.75Z^5 + 0.25Z^6) + W^2(1.5Z^3 + 1.5Z^4) + W^3(0.75Z + 2.25Z^2) + W^4 Z^7

To find the WEFs, set W = Z = X:

A^{PC}(X) = X^4 + 6X^5 + 6X^6 + X^7 + X^11
B^{PC}(X) = 0.75X^4 + 3.75X^5 + 2.25X^6 + 0.25X^7 + X^11

16.4 The codeword IRWEF can be found from equation (16.34) as follows:

A^{PC}(W,Z) = W(4.5Z^4 + 3Z^5 + 0.5Z^6)
+ W^2(1.29Z^2 + 2.57Z^3 + 1.29Z^4 + 3.86Z^5 + 6.43Z^6 + 3Z^7 + 3.32Z^8 + 3.86Z^9 + 1.93Z^10 + 0.43Z^11 + 0.04Z^12)
+ W^3(0.07 + 0.43Z + 0.64Z^2 + 1.29Z^3 + 5.57Z^4 + 5.57Z^5 + 7.07Z^6 + 15.43Z^7 + 14.14Z^8 + 5.14Z^9 + 0.64Z^10)
+ W^4(3.21Z^4 + 17.14Z^5 + 29.29Z^6 + 17.14Z^7 + 3.21Z^8)
+ W^5(0.64Z^2 + 5.14Z^3 + 14.14Z^4 + 15.43Z^5 + 7.07Z^6 + 5.57Z^7 + 5.57Z^8 + 1.29Z^9 + 0.64Z^10 + 0.43Z^11 + 0.07Z^12)
+ W^6(0.04 + 0.43Z + 1.93Z^2 + 3.86Z^3 + 3.32Z^4 + 3Z^5 + 6.43Z^6 + 3.86Z^7 + 1.29Z^8 + 2.57Z^9 + 1.29Z^10)
+ W^7(0.05Z^6 + 3Z^7 + 4.5Z^8) + W^8 Z^12

To find the bit IRWEF, take each term in the expression for A^{PC}(W,Z) and multiply by w/8, where w is the exponent of W:

B^{PC}(W,Z) = W(0.56Z^4 + 0.38Z^5 + 0.06Z^6)
+ W^2(0.32Z^2 + 0.64Z^3 + 0.32Z^4 + 0.97Z^5 + 1.61Z^6 + 0.75Z^7 + 0.83Z^8 + 0.97Z^9 + 0.48Z^10 + 0.11Z^11 + 0.01Z^12)
+ W^3(0.03 + 0.16Z + 0.24Z^2 + 0.48Z^3 + 2.09Z^4 + 2.09Z^5 + 2.65Z^6 + 5.79Z^7 + 5.30Z^8 + 1.93Z^9 + 0.24Z^10)
+ W^4(1.61Z^4 + 8.57Z^5 + 14.65Z^6 + 8.57Z^7 + 1.61Z^8)
+ W^5(0.40Z^2 + 3.21Z^3 + 8.84Z^4 + 9.64Z^5 + 4.42Z^6 + 3.48Z^7 + 3.48Z^8 + 0.81Z^9 + 0.40Z^10 + 0.27Z^11 + 0.04Z^12)
+ W^6(0.03 + 0.32Z + 1.45Z^2 + 2.90Z^3 + 2.49Z^4 + 2.25Z^5 + 4.82Z^6 + 2.90Z^7 + 0.97Z^8 + 1.93Z^9 + 0.97Z^10)
• 5

+ W^7(0.04Z^6 + 2.63Z^7 + 3.94Z^8) + W^8 Z^12

To find the WEFs, substitute X = W = Z into the equations for the IRWEFs, as shown below.

A^{PC}(X) = 0.07X^3 + 1.71X^4 + 7.71X^5 + 5.61X^6 + 11X^7 + 22.29X^8 + 45.21X^9 + 66.79X^10 + 44.79X^11 + 20.14X^12 + 9.71X^13 + 9.46X^14 + 5.14X^15 + 3X^16 + 0.07X^17 + 1.29X^18 + X^20

B^{PC}(X) = 0.03X^3 + 0.48X^4 + 1.45X^5 + 1.21X^6 + 3.84X^7 + 9.96X^8 + 23.71X^9 + 33.39X^10 + 21.19X^11 + 10.71X^12 + 5.82X^13 + 5.04X^14 + 0.96X^15 + 2.20X^16 + 0.04X^17 + 0.96X^18 + X^20

16.5 Consider using an h-repeated (n, k, d_min) = (7,4,3) Hamming code as the constituent code, forming an (h[2n − k], hk) = (10h, 4h) PCBC. The output consists of the original input bits u, the parity bits from the non-interleaved encoder v^(1), and the parity bits from the interleaved encoder v^(2). For the h = 4-repeated PCBC, the IRWEF of the (28,16) constituent code is

A(W,Z) = [1 + W(3Z^2 + Z^3) + W^2(3Z + 3Z^2) + W^3(1 + 3Z) + W^4 Z^3]^4 − 1
= W(12Z^2 + 4Z^3)
+ W^2(12Z + 12Z^2 + 54Z^4 + 36Z^5 + 6Z^6)
+ W^3(4 + 12Z + 108Z^3 + 144Z^4 + 36Z^5 + 108Z^6 + 108Z^7 + 36Z^8 + 4Z^9)
+ W^4(90Z^2 + 232Z^3 + 90Z^4 + 324Z^5 + 540Z^6 + 252Z^7 + 117Z^8 + 108Z^9 + 54Z^10 + 12Z^11 + Z^12)
+ W^5(36Z + 144Z^2 + 108Z^3 + 432Z^4 + 1188Z^5 + 780Z^6 + 468Z^7 + 648Z^8 + 432Z^9 + 120Z^10 + 12Z^11)
+ W^6(6 + 36Z + 54Z^2 + 324Z^3 + 1296Z^4 + 1296Z^5 + 918Z^6 + 1836Z^7 + 1620Z^8 + 556Z^9 + 66Z^10)
+ W^7(144Z^2 + 780Z^3 + 1188Z^4 + 1080Z^5 + 2808Z^6 + 3456Z^7 + 1512Z^8 + 324Z^9 + 108Z^10 + 36Z^11 + 4Z^12)
+ W^8(36Z + 252Z^2 + 540Z^3 + 783Z^4 + 2592Z^5 + 4464Z^6 + 2592Z^7 + 783Z^8 + 540Z^9 + 252Z^10 + 36Z^11) +
• 6

W^9(4 + 36Z + 108Z^2 + 324Z^3 + 1512Z^4 + 3456Z^5 + 2808Z^6 + 1080Z^7 + 1188Z^8 + 780Z^9 + 144Z^10)
+ W^10(66Z^2 + 556Z^3 + 1620Z^4 + 1836Z^5 + 918Z^6 + 1296Z^7 + 1296Z^8 + 324Z^9 + 54Z^10 + 36Z^11 + 6Z^12)
+ W^11(12Z + 120Z^2 + 432Z^3 + 648Z^4 + 468Z^5 + 780Z^6 + 1188Z^7 + 432Z^8 + 108Z^9 + 144Z^10 + 36Z^11)
+ W^12(1 + 12Z + 54Z^2 + 108Z^3 + 117Z^4 + 252Z^5 + 540Z^6 + 324Z^7 + 90Z^8 + 232Z^9 + 90Z^10)
+ W^13(4Z^3 + 36Z^4 + 108Z^5 + 108Z^6 + 36Z^7 + 144Z^8 + 108Z^9 + 12Z^11 + 4Z^12)
+ W^14(6Z^6 + 36Z^7 + 54Z^8 + 12Z^10 + 12Z^11)
+ W^15(4Z^9 + 12Z^10)
+ W^16 Z^12.

The IRWEFs of the (40,16) PCBC are found, using A_w^{PC}(Z) = C(K,w)^{−1} [A_w(Z)]^2, where K = hk = 16 is the interleaver (and input) length, to be:

A_1^{PC}(Z) = 9Z^4 + 6Z^5 + Z^6
A_2^{PC}(Z) = 1.2Z^2 + 2.4Z^3 + 1.2Z^4 + 10.8Z^5 + 18Z^6 + 8.4Z^7 + 25.5Z^8 + 32.4Z^9 + 16.2Z^10 + 3.6Z^11 + 0.3Z^12
A_3^{PC}(Z) = 0.029 + 0.171Z + 0.257Z^2 + 1.543Z^3 + 6.686Z^4 + 6.686Z^5 + 23.914Z^6 + 61.714Z^7 + 56.057Z^8 + 61.771Z^9 + 99.686Z^10 + 83.314Z^11 + 54.771Z^12 + 48.343Z^13 + 35.229Z^14 + 15.429Z^15 + 3.857Z^16 + 0.514Z^17 + 0.029Z^18
A_4^{PC}(Z) = 4.451Z^4 + 22.945Z^5 + 38.475Z^6 + 54.989Z^7 + 140.459Z^8 + 194.637Z^9 + 186.903Z^10 + 257.697Z^11 + 294.389Z^12 + 216.831Z^13 + 151.273Z^14 + 117.156Z^15 + 73.844Z^16 + 36.316Z^17 + 17.268Z^18 + 8.229Z^19 + 3.155Z^20 + 0.831Z^21 + 0.138Z^22 + 0.013Z^23 + 0.001Z^24
A_5^{PC}(Z) = 0.297Z^2 + 2.374Z^3 + 6.527Z^4 + 14.242Z^5 + 50.736Z^6 + 112.549Z^7 + 160.615Z^8 + 315.099Z^9 + 550.385Z^10 + 579.363Z^11 + 551.505Z^12 + 611.802Z^13 + 540.89Z^14 + 360.791Z^15 + 238.088Z^16 + 158.176Z^17 + 80.901Z^18 + 27.297Z^19 + 5.670Z^20 + 0.659Z^21 + 0.033Z^22
  • 7 APC6 (Z) = 0.004 + 0.054Z + 0.243Z 2 + 0.971Z3 + 5.219Z4 + 17.964Z5 + 43.615Z6 + 133.355Z7 + 345.929Z8 + 533.928Z9 + 682.391Z10 + 1030.59Z11 + 1269.74Z12 + 1130.6Z13 + 993.686Z14 + 891.674Z15 + 597.803Z16 + 255.219Z17 + 65.307Z18 + 9.165Z19 + 0.544Z20 APC7 (Z) = 1.813Z 4 + 19.636Z5 + 83.089Z6 + 189.189Z7 + 341.333Z8 + 694.221Z9 + 1194.5Z10 + 1462.3Z11 + 1702.7Z12 + 2064.99Z13 + 1874.92Z14 + 1101.01Z15 + 456.243Z16 + 169.326Z17 + 61.439Z18 + 18.05Z19 + 4.116Z20 + 0.906Z21 + 0.189Z22 + 0.025Z23 + 0.001Z24 APC8 (Z) = 0.101Z 2 + 1.41Z3 + 7.955Z4 + 25.527Z5 + 67.821Z6 + 192.185Z7 + 454.462Z8 + 795.877Z9 + 1316.39Z10 + 2201.74Z11 + 2743.06Z12 + 2201.74Z13 + 1316.39Z14 + 795.877Z15 + 454.462Z16 + 192.185Z17 + 67.821Z18 + 25.527Z19 + 7.955Z20 + 1.41Z21 + 0.101Z22 APC9 (Z) = 0.001 + 0.025Z + 0.189Z 2 + 0.906Z3 + 4.116Z4 + 18.05Z5 + 61.439Z6 + 169.326Z7 + 456.243Z8 + 1101.01Z9 + 1874.92Z10 + 2064.99Z11 + 1702.7Z12 + 1462.3Z13 + 1194.5Z14 + 694.221Z15 + 341.333Z16 + 189.189Z17 + 83.089Z18 + 19.636Z19 + 1.813Z20 APC10 (Z) = 0.544Z 4 + 9.165Z5 + 65.307Z6 + 255.219Z7 + 597.803Z8 + 891.674Z9 + 993.686Z10 + 1130.6Z11 + 1269.74Z12 + 1030.59Z13 + 682.391Z14 + 533.928Z15 + 345.929Z16 + 133.355Z17 + 43.615Z18 + 17.964Z19 + 5.219Z20 + 0.971Z21 + 0.243Z22 + 0.054Z23 + 0.004Z24 APC11 (Z) = 0.033Z 2 + 0.659Z3 + 5.67Z4 + 27.297Z5 + 80.901Z6 + 158.176Z7 + 238.088Z8 + 360.791Z9 + 540.89Z10 + 611.802Z11 + 551.505Z12 + 579.363Z13 + 550.385Z14 + 315.099Z15 + 160.615Z16 + 112.549Z17 + 50.736Z18 + 14.242Z19 + 6.527Z20 + 2.374Z21 + 0.297Z22 APC12 (Z) = 0.001 + 0.013Z + 0.138Z 2 + 0.831Z3 + 3.155Z4 + 8.229Z5 + 17.268Z6 + 36.316Z7 + 73.844Z8 + 117.156Z9 + 151.273Z10 + 216.831Z11 + 294.389Z12 + 257.697Z13 + 186.903Z14 + 194.637Z15 + 140.459Z16 + 54.989Z17 + 38.475Z18 + 22.945Z19 + 4.451Z20 APC13 (Z) = 0.029Z 6 + 0.514Z7 + 3.857Z8 + 15.429Z9 + 35.229Z10 + 48.343Z11 + 54.771Z12 + 83.314Z13 + 99.686Z14 + 61.771Z15 + 56.057Z16 + 61.714Z17 + 23.914Z18 +
  • 8 6.686Z19 + 6.686Z20 + 1.543Z21 + 0.257Z22 + 0.171Z23 + 0.029Z24 APC14 (Z) = 0.3Z 12 + 3.6Z13 + 16.2Z14 + 32.4Z15 + 25.5Z16 + 8.4Z17 + 18Z18 + 10.8Z19 + 1.2Z20 + 2.4Z21 + 1.2Z22 APC15 (Z) = Z 18 + 6Z19 + 9Z20 APC16 (Z) = Z 24. Using a 4x4 row-column (block) interleaver, the minimum distance of the PCBC is 5. This comes from a weight 1 input, resulting in wH(u) = 1, wH(v(1)) = 2, and wH(v(2)) = 2, where wH(x) is the Hamming weight of x. 16.7 On page 791 in the text, it is explained that “...any multiple error event in a codeword belonging to a terminated convolutional code can be viewed as a succession of single error events separated by sequences of 0’s.” Another way to look at it is that for convolutional codewords, we start in the zero state and end in the zero state and any multiple error event can be seen as a series of deviations from the zero state. These deviations from the zero state can be separated by any amount of time spent in the zero state. For example, the codeword could have a deviation starting and another deviation ending at the same zero state, or the codeword could stay in the zero state for several input zeros. Likewise the first input bit could take the codeword out of the zero state or the last input bit could be the one to return the codeword to the zero state. So, if there are h error events with total length λ in a block codeword length K, then there must be K−λ times that state 0 occurs or, as explained in the text, K−λ 0’s. Now, in the codeword, there are K − λ+ h places where an error event can start, since the remaining λ− h places are then determined by the error events. Since there are h events, the number of ways to choose h elements from N −λ+h places gives the multiplicity of block codewords for h-error events. 16.8 The IRWEF and WEF for the PCCC in Example 16.6 is found in the same manner as for a PCBC. 
The following combines the equations in (16.49) for the CWEF’s: APC(W,Z) = W 2(4Z8 + 4Z10 + Z12) +W 3(1.81Z4 + 6.74Z6 + 9.89Z8 + 6.74Z10 + 1.81Z12) + W 4(0.93Z4 + 5.93Z6 + 10.59Z8 + 4.67Z10 + 3.89Z12 + 0.67Z14 + 0.33Z16) + W 5(0.33Z4 + 2.22Z6 + 6.37Z8 + 9.33Z10 + 6.81Z12 + 1.78Z14 + 0.15Z16) + W 6(0.04Z4 + 1.04Z6 + 8.15Z8 + 12.44Z10 + 5.33Z12) +W 7(5.44Z8 + 3.11Z10 + 0.44Z12) +W 9Z12 APC(X) = 1.81X7 + 0.93X8 + 7.07X9 + 9.97X10 + 12.11X11 + 15.63X12 + 13.11X13 + 13.82X14 + 15.95X15 + 16.33X16 + 9.92X17 + 6X18 + 2.22X19 + 0.33X20 + 1.15X21
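The substitution X = W = Z that maps an IRWEF into a WEF is mechanical, so it is easy to check with a short script. A minimal sketch (the dictionary-based polynomial representation is an assumption of this sketch, not from the text):

```python
# Collapse an IRWEF A(W, Z) into a WEF A(X) by substituting X = W = Z:
# each term c * W^w * Z^z contributes c to the coefficient of X^(w+z).

def irwef_to_wef(irwef):
    """irwef: dict {(w, z): coeff}; returns dict {d: coeff} with d = w + z."""
    wef = {}
    for (w, z), c in irwef.items():
        wef[w + z] = wef.get(w + z, 0.0) + c
    return wef

# The W^2 part of A^PC(W, Z) above, W^2(4Z^8 + 4Z^10 + Z^12), contributes
# 4X^10 + 4X^12 + X^14 to A^PC(X):
w2_part = {(2, 8): 4.0, (2, 10): 4.0, (2, 12): 1.0}
wef = irwef_to_wef(w2_part)
print(wef)  # {10: 4.0, 12: 4.0, 14: 1.0}
```

Summing the contributions of all input weights w in this way reproduces the full A^PC(X) above.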
  • 9 16.9 The bit IRWEF is given by: BPC(W,Z) = W 2(0.89Z8 + 0.89Z10 + 0.22Z12) +W 3(0.60Z4 + 2.25Z6 + 3.30Z8 + 2.25Z10 + 0.60Z12) +W 4(0.41Z4 + 2.63Z6 + 4.71Z8 + 2.07Z10 + 1.73Z12 + 0.30Z14 + 0.15Z16) +W 5(0.19Z4 + 1.23Z6 + 3.54Z8 + 5.18Z10 + 3.79Z12 + 0.99Z14 + 0.08Z16) +W 6(0.02Z4 + 0.69Z6 + 5.43Z8 + 8.30Z10 + 3.55Z12) + W 7(4.23Z8 + 2.42Z10 + 0.35Z12) +W 9Z12 The bit WEF is given by: BPC(X) = 0.60X7 + 0.41X8 + 2.43X9 + 3.55X10 + 4.53X11 + 6.29X12 + 5.79X13 + 7.73X14 10.02X15 + 10.02X16 + 6.20X17 + 3.85X18 + 1.33X19 + 0.15X20 + 1.08X21 16.10 Redrawing the encoder block diagram and the IRWEF state diagram for the reversed generators, we see that the state diagram is essentially the same except that W and Z are switched. Thus, we can use equation (16.41) for the new IRWEF with W and Z reversed. A(W,Z,L) = L3W 2Z3 + L4W 4Z2 + L5(W 4Z3 +W 2Z4) + L6(2W 4Z3 +W 4Z4) + L7(W 2Z5 + 2W 4Z4 +W 4Z5 +W 6Z2) + L8(W 4Z5 + 3W 4Z4 + 2W 4Z5 + 2W 6Z3) + L9(W 2Z6 + 3W 4Z5 + 2W 4Z6 +W 4Z7 + 3W 6Z3 + 3W 6Z4). After dropping all terms of order larger than L9, the single-error event enumerators are obtained as follows: A (1) 2,3 = Z 3 A (1) 2,5 = Z 4 A (1) 2,7 = Z 5 A (1) 2,9 = Z 6 A (1) 4,4 = Z 2 A (1) 4,5 = Z 3 A (1) 4,6 = 2Z 3 + Z4 A (1) 4,7 = 2Z 4 + Z5 A(1)4,8 = 3Z 4 + 2Z5 + Z6 A(1)4,9 = 3Z 5 + 2Z6 + Z7 A (1) 6,7 = Z 2 A (1) 6,8 = 2Z 3 A (1) 6,9 = 3Z 3 + 2Z4 The double-error event IRWEF is A2(W,Z,L) = [A(W,Z,L)]2 = L6W 4Z6 + 2L7W 6Z5 + L8(2W 4Z7 + 2W 6Z6 +W 8Z4) + L9(6W 6Z6 + 2W 6Z7 + 2W 8Z5) + . . . Dropping all terms of order greater than L9 gives the double-error event enumerators A(2)2,`(Z): A (2) 4,6 = Z 6 A (2) 4,8 = 2Z 7 A (2) 6,7 = 2Z 5 A (2) 6,8 = 2Z 6 A (2) 6,9 = 6Z 6 + 2Z7 A (2) 8,8 = Z 4 A (2) 8,9 = 2Z 5
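Squaring A(W,Z,L) to obtain the double-error event IRWEF is a three-variable polynomial product, which can be verified numerically. A hedged sketch (only the low-order terms of A(W,Z,L) are included, and the sparse-dictionary representation is an assumption of this sketch):

```python
# Square a sparse three-variable polynomial A(W, Z, L), stored as a dict
# mapping (l, w, z) exponent triples to coefficients, truncating at l <= lmax.

def poly_square(a, lmax=9):
    sq = {}
    for (l1, w1, z1), c1 in a.items():
        for (l2, w2, z2), c2 in a.items():
            key = (l1 + l2, w1 + w2, z1 + z2)
            if key[0] <= lmax:
                sq[key] = sq.get(key, 0) + c1 * c2
    return sq

# Low-order terms of A(W, Z, L) from above:
# L^3 W^2 Z^3 + L^4 W^4 Z^2 + L^5 (W^4 Z^3 + W^2 Z^4) + ...
a = {(3, 2, 3): 1, (4, 4, 2): 1, (5, 4, 3): 1, (5, 2, 4): 1}
a2 = poly_square(a)
# Matches the expansion above:
# L^6 W^4 Z^6 + 2 L^7 W^6 Z^5 + L^8 (2W^4 Z^7 + 2W^6 Z^6 + W^8 Z^4) + ...
print(a2[(6, 4, 6)], a2[(7, 6, 5)], a2[(8, 8, 4)], a2[(8, 4, 7)], a2[(8, 6, 6)])
```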
Following the same procedure, the triple-error event IRWEF is given by A(3)(W,Z,L) = [A(W,Z,L)]3 = L9W 6Z9 + . . ., and the triple-error event enumerator is given by A(3)6,9(Z) = Z9. Before finding the CWEF’s we need to calculate hmax. Notice that there are no double-error events of weight less than 4 and that the only triple-error event has weight 6. Also notice that, for the terminated encoder, odd input weights don’t appear. So for weight 2, hmax = 1. For weights 4 and 8, hmax = 2. For weight 6, hmax = 3. Now we can use equations (16.37) and (16.39) to find the CWEF’s as follows: A2(Z) = c[3, 1]A(1)2,3(Z) + c[5, 1]A(1)2,5(Z) + c[7, 1]A(1)2,7(Z) + c[9, 1]A(1)2,9(Z) = 3Z2 + 7Z3 + 5Z4 + Z6 A4(Z) = c[4, 1]A(1)4,4(Z) + c[5, 1]A(1)4,5(Z) + c[6, 1]A(1)4,6(Z) + c[7, 1]A(1)4,7(Z) + c[8, 1]A(1)4,8(Z) + c[9, 1]A(1)4,9(Z) + c[6, 2]A(2)4,6(Z) + c[8, 2]A(2)4,8(Z) = 6Z2 + 13Z3 + 16Z4 + 10Z5 + 14Z6 + 7Z7 A6(Z) = c[7, 1]A(1)6,7(Z) + c[8, 1]A(1)6,8(Z) + c[9, 1]A(1)6,9(Z) + c[7, 2]A(2)6,7(Z) + c[8, 2]A(2)6,8(Z) + c[9, 2]A(2)6,9(Z) + c[9, 3]A(3)6,9(Z) = 3Z2 + 7Z3 + 3Z4 + 12Z5 + 12Z6 + 2Z7 + Z9 A8(Z) = c[8, 2]A(2)8,8(Z) + c[9, 2]A(2)8,9(Z) = 3Z4 + 2Z5 A quick calculation shows that the CWEF’s include a total of 127 nonzero codewords, the correct number for an (18,7) code. The IRWEF and WEF are given by: A(W,Z) = W 2(3Z2 + 7Z3 + 5Z4 + Z6) +W 4(6Z2 + 13Z3 + 16Z4 + 10Z5 + 14Z6 + 7Z7) + W 6(3Z2 + 7Z3 + 3Z4 + 12Z5 + 12Z6 + 2Z7 + Z9) +W 8(3Z4 + 2Z5) A(X) = 3X4 + 7X5 + 11X6 + 13X7 + 20X8 + 17X9 + 17X10 + 19X11 + 15X12 + 4X13 +X15 This convolutional code has minimum distance 4, one less than for the code of Example 16.6. As in the example, we must modify the concept of the uniform interleaver because we are using convolutional constituent codes. Therefore, note that for weight 2, there are 16 valid input sequences. For weight 4, there are 66. For weight 6, there are 40, and for weight 8, there are 5.
For all other weights, there are no valid input sequences.
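The uniform-interleaver step for the PCCC can also be checked numerically: the CWEF of the parallel concatenation is the square of the constituent CWEF divided by the number of valid input sequences of that weight (16 for weight 2 here). A small sketch, using the weight-2 CWEF A2(Z) = 3Z2 + 7Z3 + 5Z4 + Z6 from above (the dictionary representation is an assumption of this sketch):

```python
def square_and_scale(a, count):
    """Return [A(Z)]^2 / count for a CWEF stored as {z_exponent: coeff}."""
    sq = {}
    for z1, c1 in a.items():
        for z2, c2 in a.items():
            sq[z1 + z2] = sq.get(z1 + z2, 0.0) + c1 * c2
    return {z: c / count for z, c in sorted(sq.items())}

a2 = {2: 3.0, 3: 7.0, 4: 5.0, 6: 1.0}      # A_2(Z) = 3Z^2 + 7Z^3 + 5Z^4 + Z^6
apc2 = square_and_scale(a2, 16)             # 16 valid weight-2 input sequences
print(apc2)
# {4: 0.5625, 5: 2.625, 6: 4.9375, 7: 4.375, 8: 1.9375, 9: 0.875, 10: 0.625, 12: 0.0625}
```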
  • 11 The CWEF’s of the (27,7) PCCC can be found as follows: APC2 (Z) = (3Z 2 + 7Z3 + 5Z4 + Z6)2/16 = 0.56Z4 + 2.62Z5 + 4.94Z6 + 4.37Z7 + 1.94Z8 + 0.87Z9 + 0.62Z10 + 0.06Z12 APC4 (Z) = (6Z 2 + 13Z3 + 16Z4 + 10Z5 + 14Z6 + 7Z7)2/66 = 0.55Z4 + 2.36Z5 + 5.47Z6 + 8.12Z7 + 10.36Z8 + 11.63Z9 + 11.06Z10 + 7.65Z11 5.09Z12 + 2.97Z13 + 0.74Z14 APC6 (Z) = (3Z 2 + 7Z3 + 3Z4 + 12Z5 + 12Z6 + 2Z7 + Z9)2/40 = 0.22Z4 + 1.05Z5 + 1.67Z6 + 2.85Z7 + 6.22Z8 + 6.3Z9 + 6.1Z10 + 7.65Z11 + 5.15Z12 + 1.35Z13 + 0.7Z14 + 0.6Z15 + 0.1Z16 + 0.02Z18 APC8 (Z) = (3Z 4 + 2Z5)2/5 = 1.8Z8 + 2.4Z9 + 0.8Z10 16.11 As noted in Problem 16.10, odd input weights do not appear. For weight 2, hmax = 1 and equation (16.55) simplifies to APC2 (Z) = 2[A (1) 2 (Z)] 2 and BPC2 (Z) = 4 K [A(1)2 (Z)] 2. For weight 4 and 8, hmax = 2, so we have APC4 (Z) = 6[A (2) 4 (Z)] 2 and BPC4 (Z) = 24 K [A(2)4 (Z)] 2. APC8 (Z) = 10080 K4 [A(2)8 (Z)] 2 and BPC4 (Z) = 80640 K5 [A(2)8 (Z)] 2. For weight 6, hmax = 3, so APC6 (Z) = 20[A (3) 6 (Z)] 2 and BPC6 (Z) = 120 K [A(3)6 (Z)] 2. Including only these terms in the approximate IRWEF’s for the PCCC gives A(W,Z) = 2W 2[A2(Z)]2 + 6W 4[A4(Z)]2 + 20W 6[A6(Z)]2 + 10080W 8 K4 [A8(Z)]2 ≈ 2W 2[A2(Z)]2 + 6W 4[A4(Z)]2 + 20W 6[A6(Z)]2 B(W,Z) = 4W 2 K [A2(Z)]2 + 24W 4 K [A4(Z)]2 + 120W 6 K [A6(Z)]2 + 80640W 8 K5 [A8(Z)]2 ≈ 4W 2 K [A2(Z)]2 + 24W 4 K [A4(Z)]2 + 120W 6 K [A6(Z)]2.
  • 12 The approximate CWEF’s from the previous problem are: A2(Z) ≈ 3Z2 + 7Z3 + 5Z4 + Z6 A4(Z) ≈ 6Z2 + 13Z3 + 16Z4 + 10Z5 + 14Z6 + 7Z7 A6(Z) ≈ 3Z2 + 7Z3 + 3Z4 + 12Z5 + 12Z6 + 2Z7 + Z9 A8(Z) ≈ 3Z4 + 2Z5. Now we obtain the approximate IRWEF’s as: A(W,Z) ≈ 2W 2Z4(3 + 7Z + 5Z2 + Z4)2 ++6W 4Z4(6 + 13Z + 16Z2 + 10Z3 + 14Z4 + 7Z5)2 + 20W 6Z4(3 + 7Z + 3Z2 + 12Z3 + 12Z4 + 2Z5 + Z7)2 B(W,Z) ≈ (4W 2Z4(3 + 7Z + 5Z2 + Z4)2 + 24W 4Z4(6 + 13Z + 16Z2 + 10Z3 + 14Z4 + 7Z5)2 + 120W 6Z4(3 + 7Z + 3Z2 + 12Z3 + 12Z4 + 2Z5 + Z7)2)/K and the approximate WEF’s are given by: APC(X) ≈ 18X6 + 84X7 + 374X8 + 1076X9 + 2408X10 + 4084X11 + 5464X12 + 6888X13 + 9362X14 + 8064X15 + 6896X16 + 7296X17 + 4414X18 + 1080X19 + 560X20 + 480X21 + 80X22 + 20X24 BPC(X) ≈ 36X6/K + 168X7/K + 1180X8/K + 4024X9/K + 9868X10/K + 17960X11/K + 24496X12/K + 32112X13/K + 47404X14/K + 42336X15/K + 37344X16/K + 41424X17/K + 25896X18/K + 6480X19/K + 3360X20/K + 2880X21/K + 480X22/K + 120X24/K 16.13 First constituent encoder: G(1)ff (D) = [1 +D +D 2 1 +D2] Second constituent encoder: G(2)ff (D) = [1 1 +D +D 2] For the first constituent encoder, the IOWEF is given by AC1(W,X,L) =WX5L3 +W 2L4(1 + L)X6 +W 3L5(1 + L)2X7 + . . . , and for the second one the IRWEF can be computed as AC2(W,Z,L) =WZ3L3 +W 2Z2L4 +W 2Z4L5 + . . . , The effect of uniform interleaving is represented by APCw (X,Z) = AC1w (X) ∗AC2w (Z)( K w )
and for large K, this can be approximated as APCw (X,Z) ≈ ∑1≤h1≤hmax ∑1≤h2≤hmax c[h1]c[h2](K w)−1 A(h1)w (X)A(h2)w (Z), where A(h1)w (X) and A(h2)w (Z) are the conditional IO and IR h-error event WEF’s of the nonsystematic and systematic constituent encoders, C1 and C2, respectively. With further approximations, we obtain APCw (X,Z) ≈ (w!/(h1max!h2max!))K(h1max+h2max−w)A(h1max)w (X)A(h2max)w (Z) and BPCw (X,Z) ≈ (w/K)APCw (X,Z). Going directly to the large K case, for any input weight w, w ≥ 1, hmax = w, since the weight 1 single error event can be repeated w times. It follows that A(w)w (X) = X5w, w ≥ 1, and A(w)w (Z) = Z3w, w ≥ 1. Hence APCw (X,Z) ≈ (w!/(w!w!))K(w+w−w)X5wZ3w = (Kw/w!)X5wZ3w and BPCw (X,Z) ≈ (K(w−1)/(w − 1)!)X5wZ3w. Therefore APC(W,X) ≈ ∑1≤w≤K Ww(Kw/w!)X8w = KWX8 + (K2/2)W 2X16 + (K3/6)W 3X24 + . . . , BPC(W,X) ≈ WX8 +KW 2X16 + (K2/2)W 3X24 + . . . , the average WEF’s are
APC(X) = KX8 + (K2/2)X16 + (K3/6)X24 + . . . , BPC(X) = X8 +KX16 + (K2/2)X24 + . . . , and the free distance of the code is 8. 16.14 The encoder and augmented modified state diagram for this problem are shown below: [Figure: (a) Encoder and (b) augmented modified state diagram for G(D) = [1 (1+D+D2)/(1+D)], with states S0, S1, S2, S3 and branch gains in L, W, and Z.] Note that in order to leave from and return to state S0 (i.e., terminating a K − 2 bit input sequence), an even overall-input (including termination bits) weight is required. Examination of the state diagram reveals 6 paths that contain overall-input weight W < 6 (“· · ·” means an indefinite loop around S3):
Path {S0, S1, S2, S0}: function L3W 2Z3
Path {S0, S1, S2, S1, S2, S0}: function L5W 4Z4
Path {S0, S1, S3, · · · , S3, S2, S0}: function L4W 2Z2/(1− ZL)
Path {S0, S1, S2, S1, S3, · · · , S3, S2, S0}: function L6W 4Z3/(1− ZL)
Path {S0, S1, S3, · · · , S3, S2, S1, S2, S0}: function L6W 4Z3/(1− ZL)
Path {S0, S1, S3, · · · , S3, S2, S1, S3, · · · , S3, S2, S0}: function L7W 4Z2/(1− ZL)2
resulting in the single error event IRWEF for W < 6 of A(1)(W,Z,L) = L3W 2Z3 + L5W 4Z4 + (L4W 2Z2 + 2L6W 4Z3)/(1− ZL) + L7W 4Z2/(1− ZL)2, and for both W < 6 and L < 10,
  • 15 A(1)(W,Z,L) = L3W 2Z3 + L5W 4Z4 + [ 5∑ n=0 L4+nW 2Z2+n ] +[ 2 3∑ n=0 L6+nW 4Z3+n ] + [ 2∑ n=0 (n+ 1)L7+nW 4Z2+n ] . The IRWEF for double error events for W < 6 is then A(2)(W,Z,L) = [A(1)(W,Z,L)]2 = L6W 4Z6 + 2L7W 4Z5 1− ZL + L8W 4Z4 (1− ZL)2 , and for both W < 6 and L < 10, A(2)(W,Z,L) = L6W 4Z6 + [ 2 2∑ n=0 L7+nW 4Z5+n ] + [ 1∑ n=0 (n+ 1)L8+nW 4Z4+n ] . Thus the single- and double-error event enumerators for W < 6 and L < 10 are A (1) 2,3(Z) = Z 3 A (1) 2,4+n(Z) = Z 2+n A (1) 4,5(Z) = Z 4 A (1) 4,6+n(Z) = 2Z 3+n A (1) 4,7(Z) = Z 2 A (1) 4,8(Z) = 2Z 3 A (1) 4,9(Z) = 3Z 4 A (2) 4,6(Z) = Z 6 A (2) 4,7+n(Z) = 2Z 5+n A (2) 4,8(Z) = Z 4 A (2) 4,9(Z) = 2Z 5, and thus, up to Z7, A (1) 2 (Z) = Z 3 + Z2 1− Z = Z 3 + K−4∑ n=0 Z2+n = Z2 + 2Z3 + Z4 + Z5 + Z6 + Z7 + · · ·+ Zn + · · · A (2) 4 (Z) = Z 6 + 2Z5 1− Z + Z4 (1− Z)2 = Z 6 + 2 K−7∑ n=0 Z5+n + K−8∑ n=0 (n+ 1)Z4+n = Z4 + 4Z5 + 6Z6 + 6Z7 + · · ·+ (n− 1)Zn + · · · . The multiple turbo code, as shown in Figure 16.2, contains 3 encoders, and it is assumed that the interleavers are uniformly distributed, the block size K is large, and the code uses the same constituent encoders. Assume that a particular input sequence of weight w enters the first encoder, generating the parity weight enumerator APCw (Z). Then, by the definition of the uniform interleaver, the second and third encoders will generate any of the weights in APCw (Z) with equal probability. The codeword CWEF of this (4K,K − 2) terminated multiple turbo code is given by APCw (Z) = [ hmax∑ h1=1 c[h1, w]A(h1)w (Z) ][ hmax∑ h2=1 c[h2, w] ( K w )−1 A(h2)w (Z) ][ hmax∑ h3=1 c[h3, w] ( K w )−1 A(h3)w (Z) ] = hmax∑ h1=1 hmax∑ h2=1 hmax∑ h3=1 ( K w )−2 c[h1, w] c[h2, w] c[h3, w] A(h1)w (Z) A (h2) w (Z) A (h3) w (Z) ,
  • 16 where hmax is the maximum number of error events for the particular choice of w and c[h,w] =( K − h+ w w ) . With K À h and K À w, c[h,w] ≈ ( K h ) ≈ Khh! , and hmax = bw2 c. These approximations, along with keeping only the highest power of K, i.e., the term corresponding to h1 = h2 = h3 = hmax, result in the codeword CWEF APCw (Z) ≈ (w!)2 (hmax!)3 K(3hmax−2w) [ A(hmax)w (Z) ]3 and the bit CWEF BPCw (Z) = w K APCw (Z) ≈ w (w!)2 (hmax!)3 K(3hmax−2w−1) [ A(hmax)w (Z) ]3 . Then, for w < 6, the approximate IRWEF’s for this PCCC are APC(W,Z) ≈ 5∑ w=2 WwAPCw (Z) and B PC(W,Z) ≈ 5∑ w=2 WwBPCw (Z), and the WEF’s are APC(X) = APC(X,X) and BPC(X) = BPC(X,X). From the above,[ A (1) 2 (Z) ]3 = Z9 + 3Z8 1− Z + 3Z7 (1− Z)2 + Z6 (1− Z)3 = Z6 + 6Z7 + 15Z8 + 23Z9 + 30Z10 + 39Z11 + · · ·+ ( n2 − 3n− 10 2 ) Zn + · · · and[ A (2) 4 (Z) ]3 = Z18 + 6Z17 1− Z + 15Z16 (1− Z)2 + 20Z15 (1− Z)3 + 16Z14 (1− Z)4 + 6Z13 (1− Z)5 + Z12 (1− Z)6 = Z12 + 12Z13 + 66Z14 + 226Z15 + 561Z16 + 1128Z17 + 1995Z18 + 3258Z19 + 5028Z20 + · · ·+ (−21720− 7046n+ 3015n2 − 155n3 − 15n4 + n5 120 ) Zn + · · · . (a) Using the above, the approximate CWEF’s up to Z14 are APC2 (Z) ≈ 4K [ A (1) 2 (Z) ]3 = 4KZ 6 + 24K Z 7 + 60K Z 8 + 92K Z 9 + 120K Z 10 + 156K Z 11+ 196 K Z 12 + 240K Z 13 + 288K Z 14 + · · ·+ 2 ( n2−3n−10 K ) Zn + · · · APC4 (Z) ≈ 72K2 [ A (2) 4 (Z) ]3 ≈ 72K2Z12 + 864K2 Z13 + 4752K2 Z14 + · · · APC3 (Z) = A PC 5 (Z) = 0 BPC2 (Z) = 2 K [ APC2 (Z) ] ≈ 8K2Z6 + 48K2Z7 + 120K2 Z8 + 184K2 Z9 + 240K2 Z10 + 312K2 Z11+ 392 K2 Z 12 + 480K2 Z 13 + 576K2 Z 14 + · · ·+ 4 ( n2−3n−10 K2 ) Zn + · · · BPC4 (Z) = 4 K [ APC4 (Z) ] ≈ 288K3 Z12 + 3456K3 Z13 + 19008K3 Z14 + · · · BPC3 (Z) = B PC 5 (Z) = 0
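The low-order coefficients of the cubed series [A(1)2(Z)]3 used in part (a) can be checked by multiplying truncated series numerically. A minimal sketch (the truncation level nmax is an arbitrary choice of this sketch):

```python
# Numeric check of [A^(1)_2(Z)]^3, where
# A^(1)_2(Z) = Z^3 + Z^2/(1 - Z) = Z^2 + 2Z^3 + Z^4 + Z^5 + ... (truncated).

def poly_mul(a, b, nmax):
    """Multiply coefficient lists (index = exponent), truncating beyond nmax."""
    out = [0] * (nmax + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= nmax:
                out[i + j] += ai * bj
    return out

nmax = 11
a = [0, 0, 1, 2] + [1] * (nmax - 3)      # Z^2 + 2Z^3 + Z^4 + ... + Z^nmax
cube = poly_mul(poly_mul(a, a, nmax), a, nmax)
print(cube[6:12])  # coefficients of Z^6..Z^11: [1, 6, 15, 23, 30, 39]
```

These match the expansion Z6 + 6Z7 + 15Z8 + 23Z9 + 30Z10 + 39Z11 + · · · given above.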
(b) Using the above, the approximate IRWEF’s up to W 5 and Z14 are APC(W,Z) ≈ W 2[(4/K)Z6 + (24/K)Z7 + (60/K)Z8 + (92/K)Z9 + (120/K)Z10 + (156/K)Z11 + (196/K)Z12 + (240/K)Z13 + (288/K)Z14 + · · ·+ 2((n2−3n−10)/K)Zn + · · ·] + W 4[(72/K2)Z12 + (864/K2)Z13 + (4752/K2)Z14 + · · ·] and BPC(W,Z) ≈ W 2[(8/K2)Z6 + (48/K2)Z7 + (120/K2)Z8 + (184/K2)Z9 + (240/K2)Z10 + (312/K2)Z11 + (392/K2)Z12 + (480/K2)Z13 + (576/K2)Z14 + · · ·+ 4((n2−3n−10)/K2)Zn + · · ·] + W 4[(288/K3)Z12 + (3456/K3)Z13 + (19008/K3)Z14 + · · ·] (c) Using the above, the approximate WEF’s up to X14, neglecting any higher power of K terms, are APC(X) ≈ (4/K)X8 + (24/K)X9 + (60/K)X10 + (92/K)X11 + (120/K)X12 + (156/K)X13 + (196/K)X14 + · · · and BPC(X) ≈ (8/K2)X8 + (48/K2)X9 + (120/K2)X10 + (184/K2)X11 + (240/K2)X12 + (312/K2)X13 + (392/K2)X14 + · · · (d) The union bounds on the BER and WER for K = 10^3 and K = 10^4, assuming a binary input, unquantized output AWGN channel, are shown below. The plots were created using the loose bounds WER ≤ APC(X) and BER ≤ BPC(X), each evaluated at X = e^(−R Eb/N0), where R = 1/4 is the code rate and Eb/N0 is the SNR. [Figure: union bound on the word error rate vs. Eb/N0 for N = 1000 (dashed) and N = 10000 (solid).]
[Figure: union bound on the bit error rate vs. Eb/N0 for N = 1000 (dashed) and N = 10000 (solid).] 16.15 (a) [1 1/(1+D)] i. Weight 2 Minimum parity weight generating information sequence = 1 +D. Corresponding parity weight = 1. In a PCCC (rate 1/3) we therefore have (assuming a similar input after interleaving): Information weight = 2; Parity weight = 1 + 1 = 2; Total weight = 4. ii. Weight 3 For this encoder a weight 3 input does not terminate the encoder and hence is not possible. (b) [1 (1+D2)/(1+D+D2)] i. Weight 2 Minimum parity weight generating information sequence = 1 +D3. Corresponding parity weight = 4. In a PCCC we therefore have: Information weight = 2; Parity weight = 4 + 4 = 8; Total weight = 10. ii. Weight 3 Minimum parity weight generating information sequence = 1 +D +D2. Corresponding parity weight = 2. In a PCCC we therefore have: Information weight = 3; Parity weight = 2 + 2 = 4; Total weight = 7. (c) [1 (1+D2+D3)/(1+D+D3)] i. Weight 2 Minimum parity weight generating information sequence = 1 +D7. Corresponding parity weight = 6. In a PCCC we therefore have: Information weight = 2; Parity weight = 6 + 6 = 12; Total weight = 14.
ii. Weight 3 Minimum parity weight generating information sequence = 1 +D +D3. Corresponding parity weight = 4. In a PCCC we therefore have: Information weight = 3; Parity weight = 4 + 4 = 8; Total weight = 11. (d) [1 (1+D+D2+D4)/(1+D3+D4)] i. Weight 2 Minimum parity weight generating information sequence = 1 +D15. Corresponding parity weight = 10. In a PCCC we therefore have: Information weight = 2; Parity weight = 10 + 10 = 20; Total weight = 22. ii. Weight 3 Minimum parity weight generating information sequence = 1 +D3 +D4. Corresponding parity weight = 4. In a PCCC we therefore have: Information weight = 3; Parity weight = 4 + 4 = 8; Total weight = 11. (e) [1 (1+D4)/(1+D+D2+D3+D4)] i. Weight 2 Minimum parity weight generating information sequence = 1 +D5. Corresponding parity weight = 4. In a PCCC we therefore have: Information weight = 2; Parity weight = 4 + 4 = 8; Total weight = 10. ii. Weight 3 For this encoder a weight 3 input does not terminate the encoder and hence is not possible. In all cases the input weight two sequence gives the free distance codeword for large K. 16.16 For a systematic feedforward encoder, for input weight w, the single error event for input weight 1 can occur w times. Therefore hmax, the largest number of error events associated with a weight w input sequence, is equal to w. That is, the maximum number of error events produced by a weight w input sequence occurs by repeating the single error event for input weight 1 w times. Thus, A(w)w (Z) = [A(1)1 (Z)]w. This is not true for feedback encoders. Feedback encoders require at least a weight 2 input sequence to terminate. Thus, for an input sequence of weight 2w, hmax = w, i.e., we have a maximum of w error events. This occurs when an error event caused by a weight 2 input sequence is repeated w times (for a total input weight of 2w). Therefore A(w)2w (Z) = [A(1)2 (Z)]w.
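The shortest terminating weight-2 input used throughout Problem 16.15 is 1 + D^q, where q is the period of the feedback polynomial g0(D), i.e., the smallest q such that g0(D) divides 1 + D^q; this can be found by brute force. A sketch (the bit-mask polynomial representation, bit i = coefficient of D^i, is an assumption of this sketch):

```python
def gf2_mod(num, den):
    """Remainder of num(D) / den(D) over GF(2); polynomials as bit-masks."""
    dd = den.bit_length() - 1
    while num and num.bit_length() - 1 >= dd:
        num ^= den << (num.bit_length() - 1 - dd)
    return num

def period(g0):
    """Smallest q such that g0(D) divides 1 + D^q."""
    q = 1
    while gf2_mod(1 | (1 << q), g0) != 0:
        q += 1
    return q

# Feedback polynomials from Problem 16.15:
print(period(0b111))    # 1+D+D^2          -> 3  (input 1+D^3)
print(period(0b1011))   # 1+D+D^3          -> 7  (input 1+D^7)
print(period(0b11001))  # 1+D^3+D^4        -> 15 (input 1+D^15)
print(period(0b11111))  # 1+D+D^2+D^3+D^4  -> 5  (input 1+D^5)
```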
16.19 An (n, 1, v) systematic feedback encoder can be realized by the following diagram: [Figure: systematic feedback encoder realization with memory elements m0, m1, . . . , m(v−1), input u, and outputs v(1), v(2), . . . , v(n−1).] In the figure, the dashed lines indicate potential connections that depend on the encoder realization. It can be seen from the encoder structure that the register contents are updated as mi,t = mi−1,t−1, i = 1, 2, . . . , v − 1, and m0,t = mv−1,t−1 + ut + ∑(i=0 to v−2) ai mi,t−1, where t is the time index, ut represents the input bit at time t, and the coefficients ai depend on the particular encoder realization. When the encoder is in state S2v−1 = (0 0 . . . 1) at time t− 1, it follows from the given relations that mi,t = mi−1,t−1 = 0, i = 1, 2, . . . , v − 1, and m0,t = mv−1,t−1 + ut + ∑(i=0 to v−2) ai mi,t−1 = 1 + ut. Thus if the input bit ut = 1, the first memory position at time t is also 0, and the encoder is in the all-zero state. 16.20 For the primitive encoder D with Gp(D) = [1 (D4+D2+D+1)/(D4+D3+1)], the shortest terminating weight 2 input sequence is u(D) = 1 +D^(2^4−1) = 1 +D15,
  • 21 which generates the parity sequence v(D) = (1 +D3 +D4 +D6 +D8 +D9 +D10 +D11)(D4 +D2 +D + 1) = 1 +D +D2 +D3 +D4 +D8 +D11 +D12 +D14 +D15. Thus Zmin = 10 and deff = 2 + 2Zmin = 2 + 2× 10 = 22. For the nonprimitive encoder E withGnp(D) = [ 1 D 4+1 D4+D3+D2+D+1 ] , the shortest terminating weight 2 input sequence is u(D) = 1 +D5, which generates the parity sequence v(D) = (D + 1)(D4 + 1) = 1 +D +D4 +D5. Thus Zmin = 4 and deff = 2 + 2Zmin = 10.
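The Zmin values above follow from a GF(2) polynomial computation: for u(D) = 1 + D^q, the parity is v(D) = u(D)g1(D)/g0(D), and Zmin is the Hamming weight of v(D). A sketch (the bit-mask polynomial representation is an assumption of this sketch):

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials stored as bit-masks (bit i <-> D^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_div(num, den):
    """Quotient of an exact GF(2) polynomial division."""
    q, dd = 0, den.bit_length() - 1
    while num and num.bit_length() - 1 >= dd:
        s = num.bit_length() - 1 - dd
        q |= 1 << s
        num ^= den << s
    return q

def z_min(g0, g1, q):
    """Parity weight of the shortest terminating weight-2 input 1 + D^q."""
    parity = gf2_mul(gf2_div(1 | (1 << q), g0), g1)
    return bin(parity).count("1")

# Primitive encoder D: g1 = D^4+D^2+D+1, g0 = D^4+D^3+1, q = 15
print(z_min(0b11001, 0b10111, 15))  # 10, so d_eff = 2 + 2*10 = 22
# Nonprimitive encoder E: g1 = D^4+1, g0 = D^4+D^3+D^2+D+1, q = 5
print(z_min(0b11111, 0b10001, 5))   # 4, so d_eff = 2 + 2*4 = 10
```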
Chapter 20 20.1 If an (n, k) code C is designed to correct all the bursts of length l or less and simultaneously to detect all the bursts of length d or less with d ≥ l, then no burst of length l + d or less can be a codeword. Suppose there is a burst x of length l + d or less that is a codeword in C. We can decompose this burst x into two bursts y and z such that: (1) x = y + z; (2) the length of y is l or less and the length of z is d or less. Since the code is designed to correct all the bursts of length l or less, y must be a coset leader in a standard array for the code. Since y + z = x is a codeword, z must be in the same coset as y. As a result, z has the same syndrome as y. In this case, if z occurs as an error burst during the transmission of a codeword in C, then y will be taken as the error burst for correction. This will result in an incorrect decoding, and hence the decoder will fail to detect z, which contradicts the fact that the code is designed to correct all the bursts of length l or less and simultaneously to detect all the bursts of length d or less with d ≥ l. Therefore, a burst of length l + d or less can not be a codeword. Then it follows from Theorem 20.2 that n− k ≥ l + d. 20.2 An error-trapping decoder is shown in Figure P20.2. [Figure P20.2: An error-trapping decoder, with a buffer register, a syndrome register whose n−k−l leftmost stages feed a test for all 0’s and whose l rightmost stages hold the trapped burst, and Gates 1–4.] The decoding operation is given below:
Step 1. The received polynomial r(x) is shifted into the buffer and syndrome registers simultaneously with Gates 1 and 3 turned on and all the other gates turned off. Step 2. As soon as the entire r(x) has been shifted into the syndrome register, the contents of the syndrome register form the syndrome S(n−k)(x) of r(n−k)(x). If the n− k− l leftmost stages of the syndrome register contain all zeros, the error burst is confined to the positions Xn−l, . . ., Xn−1. Gate 2 and Gate 4 are activated. The received information digits are read out of the buffer register and corrected by the error digits shifted out from the syndrome register. If the n− k − l leftmost stages of the syndrome register do not contain all zeros, go to step 3. Step 3. The syndrome register starts to shift with Gate 3 on. As soon as its n − k − l leftmost stages contain only zeros, its l rightmost stages contain the burst-error pattern and the error correction begins. There are three cases to be considered. Step 4. If the n− k − l leftmost stages of the syndrome register contain all zeros after the ith shift, 0 < i ≤ 2k−n (assuming 2k−n ≥ 0; otherwise this step does not apply), the error burst is confined to positions Xn−i−1−l, . . ., Xn−i−1. In this event, Gate 2 is first activated and the i highest-order information digits are shifted out. Then Gate 4 is activated, and the remaining information digits are read out and corrected by the error digits shifted out from the syndrome register. Step 5. If the n − k − l leftmost stages of the syndrome register contain all zeros after the ith shift, k ≤ i ≤ n− l, the errors of the burst e(x) are confined to the parity-check positions of r(x). In this event, the k received information digits in the buffer are error free. Gate 2 is activated and the k error-free information digits in the buffer are shifted out to the data sink. Step 6. If the n−k−l leftmost stages of the syndrome register contain all zeros after the (n−l+i)th shift, 0 < i < l, the error burst is confined to positions Xn−i, . . ., Xn−1, X0, . . ., Xl−i−1. In this event, the l− i digits contained in the l− i rightmost stages of the syndrome register match the errors at the parity-check positions of r(X), and the i digits contained in the next i stages of the syndrome register match the errors at the positions Xn−i, . . ., Xn−1 of r(X). At this instant, a clock starts to count from (n − l + i + 1). The syndrome register is then shifted with Gate 3 turned off. As soon as the clock has counted up to n, the i rightmost digits in the syndrome register match the errors at the positions Xn−i, . . ., Xn−1 of r(X). Gate 4 and Gate 2 are then activated. The received information digits are read out of the buffer register and corrected by the error digits shifted out from
the syndrome register. If the n − k − l leftmost stages of the syndrome register never contain all zeros during the steps above, an uncorrectable burst of errors has been detected. 20.4 The generator polynomial of the desired Fire code is g(X) = (X2l−1 + 1)p(X). Since the code is designed to correct all the bursts of length 4 or less, l = 4. Hence g(X) = (X7 + 1)(1 +X +X4) = 1 +X +X4 +X7 +X8 +X11. Since p(X) is a primitive polynomial of degree 4, its period ρ is 2^4 − 1 = 15. Hence the length of the code is n = LCM(7, 15) = 105. Since the degree of the generator polynomial of the code is 11, which is the number of parity-check digits of the code, the code is a (105, 94) Fire code. The decoder of this code is shown in Figure P20.4. [Figure P20.4: A decoder for the (105, 94) Fire code, with syndrome-register feedback connections given by g(X) = 1 +X +X4 +X7 +X8 +X11.]
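The Fire-code construction of Problem 20.4 can be checked with a few lines of GF(2) polynomial arithmetic. A sketch (the bit-mask polynomial representation is an assumption of this sketch):

```python
from math import lcm

def gf2_mul(a, b):
    """Multiply two GF(2) polynomials stored as bit-masks (bit i <-> X^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# g(X) = (X^(2l-1) + 1) p(X) with l = 4 and p(X) = 1 + X + X^4:
g = gf2_mul((1 << 7) | 1, 0b10011)
exponents = [i for i in range(g.bit_length()) if (g >> i) & 1]
print(exponents)          # [0, 1, 4, 7, 8, 11], i.e. 1 + X + X^4 + X^7 + X^8 + X^11

n = lcm(7, 15)            # period of X^7+1 is 7; period of p(X) is 15
k = n - (g.bit_length() - 1)
print(n, k)               # 105 94: the (105, 94) Fire code
```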
20.5 The high-speed decoder for the (105, 94) Fire code designed in Problem 20.4 is shown in Figure P20.5. [Figure P20.5: A high-speed decoder for the (105, 94) Fire code, with an error-pattern register, an error-location register, a comparator (test for a match), a test for all 0’s, two counters, and a computation of n− q.] 20.6 We choose the second code given in Table 20.3, which is a (15, 9) code capable of correcting any burst of length 3 or less. The burst-error-correcting efficiency is z = 1. The generator polynomial of this code is g(X) = 1+X3+X4+X5+X6. Suppose we interleave this code to a degree λ = 17. This results in a (255, 153) cyclic code that is capable of correcting any single burst of length 51 or less and has burst-error-correcting efficiency z = 1. The generator of this code is g′(X) = g(X17) = 1 +X51 +X68 +X85 +X102. A decoder for this code can be implemented with a de-interleaver and the decoder for the (15, 9) code as shown in Figure P20.6. First the received sequence is de-interleaved and arranged as a 17 × 15 array. Then each row of the array is decoded based on the base (15, 9) code. [Figure P20.6: A decoder for the (255, 153) burst-error-correcting code, consisting of a de-interleaver buffer followed by a decoder for the (15, 9) base code C.] 20.7 A codeword in the interleaved code Clλ is obtained by interleaving l codewords from the (n, k) base code C. Let v1(X), v2(X), . . . , vl(X) be l codewords in C. Then the codeword in Clλ
obtained by interleaving these l codewords in C in polynomial form is v(X) = v1(X l) +Xv2(X l) +X2v3(X l) + . . .+X l−1vl(X l). Since vi(X) is divisible by g(X) for 1 ≤ i ≤ l, vi(X l) is divisible by g(X l). Therefore v(X) is divisible by g(X l). The degree of g(X l) is (n − k)l. Since g(X l) is a factor of X ln + 1, g(X l) generates an (ln, lk) cyclic code. 20.8 In order to show that the Burton code is capable of correcting all phased bursts confined to a single subblock of m digits, it is necessary and sufficient to prove that no two such bursts are in the same coset of a standard array for the code. Let XmiA(X) and XmjB(X) be the polynomial representations of two bursts of length l1 and l2 with l1 ≤ m and l2 ≤ m confined to the ith and jth subblocks (0 ≤ i < σ, 0 ≤ j < σ), respectively, where A(X) = 1 + a1X + . . .+ al1−2X l1−2 +X l1−1, and B(X) = 1 + b1X + . . .+ bl2−2X l2−2 +X l2−1. Suppose XmiA(X) and XmjB(X) are in the same coset. Then V(X) = XmiA(X) + XmjB(X) must be a code polynomial and divisible by the generator polynomial g(X) = (Xm + 1)p(X). Assume mj ≥ mi; then mj −mi = qm and V(X) = Xmi(A(X) +B(X)) +XmiB(X)(Xqm + 1). Since Xm + 1 is a factor of the generator g(X), V(X) must be divisible by Xm + 1. Note that Xqm + 1 is divisible by Xm + 1, and Xmi and Xm + 1 are relatively prime. Since V(X) and XmiB(X)(Xqm+1) are divisible by Xm+1, A(X)+B(X) must be divisible by Xm + 1. However, since both l1 and l2 are at most m, the degree of A(X)+B(X) is less than m, so A(X)+B(X) can be divisible by Xm+1 only if it is zero. Hence, for V(X) to be divisible by Xm + 1, A(X) and B(X) must be identical, i.e., A(X) = B(X). Therefore, V(X) = XmiB(X)(Xqm + 1).
Since the degree of B(X) is less than m, B(X) and p(X) must be relatively prime. Also Xmi and p(X) are relatively prime. Since V(X) is assumed to be a code polynomial, Xqm + 1 must be divisible by both p(X) and Xm + 1. This implies that Xqm + 1 is divisible by (Xm + 1)p(X) and qm must be a multiple of n = σm. This is not possible, since q = j − i is less than σ. Therefore V(X) can not be a code polynomial, and the bursts XmiA(X) and XmjB(X) can not be in the same coset. Consequently, all the phased bursts confined to a single subblock of m digits are in different cosets of a standard array for the Burton code generated by (Xm + 1)p(X), and they are correctable. 20.9 Choose p(X) = 1 +X2 +X5, which is a primitive polynomial with period ρ = 2^5 − 1 = 31. Then the generator polynomial of the Burton code C with m = 5 is g(X) = (Xm + 1)p(X) = 1 +X2 +X7 +X10. The length of the code is n = LCM(31, 5) = 155. Hence the code is a (155, 145) code that is capable of correcting any phased burst confined to a single subblock of 5 digits. If the code is interleaved to a degree λ = 6, we obtain a (930, 870) code C6 generated by g(X6) = 1 +X12 +X42 +X60. This code is capable of correcting any burst of length 5× 5 + 1 = 26 or less. 20.10 From Table 9.3, the (164, 153) code has burst-error-correcting capability l = 4. If it is interleaved to degree λ = 6, then λn = 6 × 164 = 984, λk = 6 × 153 = 918, and λl = 6 × 4 = 24. The interleaved code is a (984, 918) code with burst-error-correcting capability 24. The interleaved Burton code of Problem 20.9 is a (930, 870) code with burst-error-correcting capability 26. The burst-error-correcting efficiency of the (984, 918) code is z = 2λl/(λn− λk) = 48/66 = 8/11. The efficiency of the interleaved Burton code of Problem 20.9 is z′ = 2[(λ− 1)m+ 1]/(2λm) = 52/60 = 13/15. Therefore, z′ > z, and the interleaved Burton code is more powerful and efficient.
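The efficiency comparison of Problem 20.10 reduces to comparing two fractions, z = 2l/(n − k) for each code; a small sketch:

```python
from fractions import Fraction

# Burst-error-correcting efficiency z = 2*l / (n - k) for the two codes:
z_interleaved = Fraction(2 * 24, 984 - 918)  # interleaved (984, 918) code, l = 24
z_burton = Fraction(2 * 26, 930 - 870)       # interleaved Burton (930, 870) code, l = 26
print(z_interleaved, z_burton)               # 8/11 13/15
print(z_burton > z_interleaved)              # True: the Burton code is more efficient
```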
20.11

The (15, 7) BCH code has random-error-correcting capability t = 2. Interleaving this code to degree 7, we obtain a (105, 49) cyclic code capable of correcting any combination of two random bursts, each of length 7 or less. Of course, it is also capable of correcting any single burst of length 14 or less. In decoding, a received sequence is de-interleaved and arranged into a 7 × 15 array. Each row of the array is then decoded based on the (15, 7) BCH code.

20.12

If each symbol of the (31, 15) RS code over GF(2^5) is replaced by its corresponding 5-bit byte, we obtain a (155, 75) binary code. Since the (31, 15) RS code is capable of correcting 8 or fewer random symbol errors, the binary (155, 75) code is capable of correcting: (1) any single burst of length (8 - 1) × 5 + 1 = 36 or less; or (2) any combination of random bursts that results in 8 or fewer symbol errors for the (31, 15) RS code.

20.13

From Problem 20.4, the generator polynomial

g(X) = 1 + X + X^4 + X^7 + X^8 + X^{11}

generates a (105, 94) Fire code. If this code is shortened by deleting the 15 high-order message digits, it becomes a (90, 79) code. Dividing X^{n-k+15} = X^26 by g(X), we obtain the remainder

ρ(X) = X + X^3 + X^5 + X^8 + X^{10}.

A decoder for the (90, 79) shortened Fire code is shown in Figure P20.13.

[Figure P20.13: A decoder for the (90, 79) shortened Fire code, consisting of a buffer register, a syndrome register with feedback, gates 1-4, and a test-for-all-0's circuit. Figure omitted.]

20.14

Let α be a primitive element of the Galois field GF(2^6). The order of α is 2^6 - 1 = 63, and the minimal polynomial of α is

Φ(X) = 1 + X + X^6.
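The de-interleaving step described in Problem 20.11 can be illustrated with a toy block interleaver (a minimal sketch, not the book's shift-register circuit): seven codewords are written as rows of a 7 × 15 array and transmitted column by column, so any channel burst of length 7 or less touches at most one digit in each row, which the row code can then correct as a single random error.

```python
LAMBDA, N = 7, 15  # interleaving degree and codeword length

def interleave(rows):
    """Read a LAMBDA x N array out column by column."""
    return [rows[i][j] for j in range(N) for i in range(LAMBDA)]

def deinterleave(seq):
    """Rebuild the LAMBDA x N array from the transmitted sequence."""
    rows = [[0] * N for _ in range(LAMBDA)]
    for k, bit in enumerate(seq):
        rows[k % LAMBDA][k // LAMBDA] = bit
    return rows

# Transmit seven all-zero codewords and hit the channel with a burst of length 7.
tx = interleave([[0] * N for _ in range(LAMBDA)])
for k in range(20, 27):      # burst at positions 20..26
    tx[k] ^= 1
rx = deinterleave(tx)

# After de-interleaving, each row sees exactly one error, well within the
# random-error-correcting capability t = 2 of the (15, 7) BCH code.
assert all(sum(row) == 1 for row in rx)
```

Seven consecutive transmitted positions cover all seven residues mod 7 exactly once, which is why the burst spreads to one error per row.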
Since 63 = 7 × 9, we can factor X^63 + 1 as follows:

X^63 + 1 = (X^7 + 1)(1 + X^7 + X^{14} + X^{21} + X^{28} + X^{35} + X^{42} + X^{49} + X^{56}).

The code generated by the polynomial

g_1(X) = (X^7 + 1)(1 + X + X^6)

is a Fire code of length 63 capable of correcting any single error burst of length 4 or less. Let g_2(X) be the generator polynomial of the double-error-correcting BCH code of length 63. From Table 6.4, we find that

g_2(X) = (1 + X + X^6)(1 + X + X^2 + X^4 + X^6).

The least common multiple of g_1(X) and g_2(X) is

g(X) = (1 + X^7)(1 + X + X^6)(1 + X + X^2 + X^4 + X^6).

Since g(X) has degree 19, it generates a (63, 44) cyclic code. This modified Fire code is capable of correcting any single error burst of length 4 or less as well as any combination of two or fewer random errors.
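As a quick mechanical check of Problem 20.14 (a sketch using the same integer-bitmask encoding of GF(2) polynomials as above), the product below has degree 19, giving the (63, 44) code, divides X^63 + 1, and is divisible by both g_1(X) and g_2(X), as the least common multiple must be.

```python
def gf2_mul(a, b):
    """Carry-free product of two GF(2) polynomials (bit i = coefficient of X^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    """Remainder of a on division by m over GF(2)."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

m1 = 0b1000011                       # 1 + X + X^6, minimal polynomial of alpha
m3 = 0b1010111                       # 1 + X + X^2 + X^4 + X^6
g1 = gf2_mul((1 << 7) | 1, m1)       # Fire code generator (X^7 + 1)(1 + X + X^6)
g2 = gf2_mul(m1, m3)                 # DEC BCH generator
g  = gf2_mul(g1, m3)                 # lcm(g1, g2) = (X^7 + 1) m1(X) m3(X)

assert g.bit_length() - 1 == 19              # degree 19 -> (63, 63 - 19) = (63, 44)
assert gf2_mod((1 << 63) | 1, g) == 0        # g(X) divides X^63 + 1
assert gf2_mod(g, g1) == 0 and gf2_mod(g, g2) == 0   # divisible by g1 and g2
```

Since g_1 = (X^7 + 1)m_1 and g_2 = m_1 m_3 share only the factor m_1, multiplying g_1 by m_3 yields their least common multiple directly.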