J. Cent. South Univ. (2013) 20: 2693−2699 DOI: 10.1007/s11771-013-1785-3

An optimized framework for degree distribution in LT codes based on power law

Asim Muhammad, Choi GoangSeog

Department of Information and Communication Engineering, Chosun University,

309, Pilmoon-daero, Dong, Gwangju, 501-759, Korea

© Central South University Press and Springer-Verlag Berlin Heidelberg 2013

Abstract: LT codes are a practical realization of digital fountain codes, which provide the concept of rateless coding. In this scheme, encoded symbols are generated infinitely from k information symbols. The decoder uses only (1+α)k encoded symbols to recover the original information. The degree distribution function in LT codes generates a random graph, also referred to as a Tanner graph, whose structure determines the computational complexity and overhead of LT codes. Intuitively, a well-designed degree distribution can be used for an efficient implementation of LT codes. In this work, the degree distribution function is studied as a function of power law, and LT codes are classified into two categories: SFLT and RLT codes. Two degree distributions are proposed and analyzed for SFLT codes which guarantee optimal performance in terms of computational complexity and overhead.

Key words: fountain codes; degree distribution; overhead; computational complexity; power law

1 Introduction

Digital fountain codes are rateless erasure correcting codes that were first introduced by BYERS et al [1], mainly for the robust and scalable transmission of data over heterogeneous environments. Erasure correcting codes protect the transmitted information from erasures caused by the channel and ensure the recovery of lost information without requesting any retransmission. These codes provide reliability in various network applications such as broadcasting, parallel downloading, and video streaming.

Digital fountain codes can generate a limitless number of encoded symbols from any k information symbols. Initially, the original information is divided into a defined number of pieces, and different subsets of these pieces are chosen randomly for the generation of encoded symbols. The original information is recovered once a sufficient number of encoded symbols have been received. With good fountain codes, the total number of encoded symbols needed for decoding is close to the number of original information symbols, although some overhead is necessary. An important characteristic of these codes is that they are non-systematic, i.e., no specific order is required for decoding: as soon as a certain number of encoded symbols are received, the original information is recovered with maximum probability. Note that other coding schemes possess the digital fountain property to some extent, including low density parity check (LDPC) and Reed Solomon (RS) codes, but the most successful examples of rateless codes are Luby transform (LT) and Raptor codes, which offer efficiency and low computational complexity in the encoding and decoding processes.

LT codes were proposed by LUBY [2]; their encoding and decoding functions depend on a probability distribution function. The probability distribution function randomly selects the given information symbols to generate the encoded symbols. The decoding function uses the same distribution function to select the received symbols from the channel and recover the original information symbols. This probability distribution is termed the degree distribution and is a crucial part of the design of LT codes. LUBY [2] proposed two degree distributions for LT codes: the ideal soliton distribution (ISD) and the robust soliton distribution (RSD). The ISD is theoretically optimal, and the RSD is practically viable only when the length of the information symbols approaches infinity. In practice, only a limited number of information symbols can be transmitted because of many design constraints. Therefore, the research community is searching for an optimal degree distribution that can enhance the performance of LT codes, irrespective of the length of the information symbols.

Foundation item: This work was supported by Research Fund Chosun University, 2011 Received date: 2012−07−20; Accepted date: 2013−01−07 Corresponding author: Choi GoangSeog; Tel: +82−62−230−7716; E-mail: [email protected]

Different approaches have been adopted to improve the performance of LT codes; for example, TARUS et al [3] proposed a joint decoding strategy based on belief propagation and Gaussian elimination to reduce the computational cost and overhead of LT codes. An optimal degree distribution can also be considered for an efficient implementation of LT codes. For example, HYYTIÄ et al [4] proposed an algorithm for iterative optimization of the degree distribution using an importance sampling approach. In Refs. [5] and [6], new degree distribution functions are proposed that use a varying ripple size R. In Refs. [7−8], the multi-objective evolutionary algorithm (MOEA) is used to optimize the degree distribution of LT codes in terms of operational cost and overhead.

In this work, the degree distribution is analyzed from the perspective of power law. By analyzing the behavior of degree distribution functions, LT codes are classified into two categories: scale free LT (SFLT) codes and random LT (RLT) codes. Two degree distribution functions, the Pareto degree distribution (PTD) and the power degree distribution (PDD), are proposed for SFLT codes. Both perform significantly better than conventional degree distribution functions in terms of overhead and computational complexity.

2 LT codes

LT codes are rateless erasure codes with efficient encoding and decoding algorithms. In this coding scheme, the original information N is divided into k information symbols such that k=N/l, where l is the packet payload size. From these k information symbols, a potentially infinite number of encoded symbols are generated. Each encoded symbol is the exclusive-or of a subset of information symbols chosen uniformly at random, with the subset size drawn from the degree distribution function. The relation between the information symbols and the encoded symbols can be modeled as a sparse bipartite graph, as shown in Fig. 1. The encoding process for LT codes can be described as follows:

1) Randomly draw a degree d according to the degree distribution;

2) Choose uniformly at random d distinct information symbols;

3) Compute the encoded symbol as the bit-wise exclusive-or of these d information symbols.
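The three steps above can be written out directly. The following Python sketch is illustrative: the `degree_dist` callable and the byte-string representation of symbols are assumptions, not part of the original scheme.

```python
import random

def lt_encode_symbol(info_symbols, degree_dist):
    """Generate one LT encoded symbol following steps 1)-3).

    info_symbols : list of equal-length byte strings (the k input symbols)
    degree_dist  : callable taking k and returning a degree d in 1..k
    """
    k = len(info_symbols)
    # 1) Randomly draw a degree d according to the degree distribution.
    d = degree_dist(k)
    # 2) Choose d distinct information symbols uniformly at random.
    neighbors = random.sample(range(k), d)
    # 3) Bit-wise exclusive-or of the chosen information symbols.
    value = bytes(info_symbols[neighbors[0]])
    for j in neighbors[1:]:
        value = bytes(a ^ b for a, b in zip(value, info_symbols[j]))
    return neighbors, value
```

The returned pair (neighbor indices, value) corresponds to one edge set and one encoded-symbol value in the Tanner graph of Fig. 1.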

Fig. 1 Tanner graph of LT codes

The performance of this coding scheme is associated with a well-designed degree distribution. The degrees are associated with encoded symbols, and the probability distribution function from which they are sampled is referred to as the degree distribution. The goal of the decoder is to recover the original information from the information received over the channel. Two different decoding strategies are used in LT codes: Gaussian elimination and belief propagation (BP). Gaussian elimination is prohibitively expensive because of its computational complexity; therefore, belief propagation is the de facto decoding algorithm for LT codes.

When K encoded symbols arrive at the receiver, the BP algorithm is used iteratively to recover the original information. Initially, all the encoded symbols of degree one cover their unique neighbors, and the set of covered information symbols that have not yet been processed is called the ripple. Processing refers to the one-by-one removal of symbols from the ripple through exclusive-or operations. At each subsequent step, one information symbol is processed and is then removed as a neighbor from all other encoded symbols. To get the decoding algorithm started, at least one encoded symbol of degree one is required. The decoding process works as follows:

1) Identify a degree-one encoded symbol and recover its unique neighbor.

2) Exclusive-or the recovered symbol into the remaining encoded symbols that have that information symbol as a neighbor.

3) Remove the recovered information symbol as a neighbor from each of these encoded symbols, decreasing their degree by one to reflect the removal.

4) Repeat steps 1 through 3 until all the information symbols are recovered.
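A minimal sketch of this peeling process in Python, assuming encoded symbols arrive as (neighbor set, value) pairs as produced by an encoder; the helper name is illustrative:

```python
def lt_decode(k, received):
    """Belief-propagation (peeling) decoder sketch for LT codes.

    k        : number of information symbols
    received : list of (neighbor_indices, value_bytes) pairs
    Returns the recovered information symbols (None where unrecovered).
    """
    symbols = [None] * k
    pending = [(set(nbrs), bytearray(val)) for nbrs, val in received]
    progress = True
    while progress:
        progress = False
        for nbrs, val in pending:
            # Remove already-recovered neighbors by exclusive-or (steps 2-3).
            for j in list(nbrs):
                if len(nbrs) > 1 and symbols[j] is not None:
                    for i, b in enumerate(symbols[j]):
                        val[i] ^= b
                    nbrs.discard(j)
            # A degree-one symbol releases its last neighbor (step 1).
            if len(nbrs) == 1:
                j = next(iter(nbrs))
                if symbols[j] is None:
                    symbols[j] = bytes(val)
                    progress = True
    return symbols
```

The loop terminates as soon as one full pass recovers nothing new, which is exactly the halting condition described below: decoding stalls when no degree-one symbol remains.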

Note that the decoding process can only proceed while there is at least one degree-one encoded symbol in the ripple. Decoding is successful when all information symbols are recovered and the ripple is empty at the end of the process; if the ripple becomes empty at any earlier stage, decoding halts with a failure. It is therefore important to maintain the size of the ripple during decoding. The evolution of the ripple is determined by the degree distribution. The relation between the degree of an encoded symbol and its point of release is given in Proposition 7 of Ref. [2]. Figure 2 plots the release probability as a function of the number of unprocessed information symbols L and the degree d. Note that release in the decoding process refers to the point where an encoded symbol is reduced to one input symbol and added to the ripple.

Fig. 2 Release probability as a function of decoding step for fixed degrees (d)

The degree distribution induces a measured randomness in this coding scheme, making it ideal for heterogeneous environments. The performance of LT codes is associated with a well-designed degree distribution, which has the following design goals:

1) The original information symbols should be recovered with high probability from a minimum number of encoded symbols.

2) The degree values of the encoded symbols should be kept low, since they determine the number of exclusive-or operations and hence the computational complexity.

Two degree distributions are proposed in Ref. [2]. The ISD is optimal in terms of overhead when all the random variables follow their expected behavior. The expected degree of an encoded symbol is nearly ln k, and the sum of the degrees of all the encoded symbols is around k·ln k, enough to recover all the information symbols. The behavior of the ISD is ideal when the ripple size stays constant, but it fails with even a small variation in the ripple. The RSD is a modified version of the ISD in which the expected ripple size is about ln(k/δ)√k, mainly to guard against variance in the ripple. The LT process succeeds with an overhead of O(√k ln²(k/δ)) when the RSD is used. The performance of the RSD improves as the length of the information symbols approaches infinity; thus, the RSD is practically viable but not optimal. ISD and RSD are plotted on the logarithmic scale in Fig. 3.
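The two Luby distributions can be computed directly from their standard definitions in Ref. [2]; the following sketch uses c and δ as free RSD parameters and returns probability vectors indexed by degree (index 0 unused):

```python
from math import log, sqrt

def isd(k):
    """Ideal soliton distribution: rho(1) = 1/k, rho(d) = 1/(d(d-1))."""
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

def rsd(k, c=0.1, delta=0.5):
    """Robust soliton distribution mu(d) = (rho(d) + tau(d)) / beta."""
    R = c * log(k / delta) * sqrt(k)
    spike = int(round(k / R))            # position of the extra spike
    rho = isd(k)
    tau = [0.0] * (k + 1)
    for d in range(1, spike):
        tau[d] = R / (d * k)
    if 1 <= spike <= k:
        tau[spike] = R * log(R / delta) / k
    beta = sum(rho[d] + tau[d] for d in range(1, k + 1))
    return [(rho[d] + tau[d]) / beta for d in range(k + 1)]
```

The ISD sums to one exactly because the series 1/(d(d−1)) telescopes; the RSD adds extra mass at low degrees and a spike near k/R to keep the expected ripple size positive.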

Fig. 3 Degree distribution function of ISD and RSD on logarithmic scale

The exclusive-or operations performed in the LT encoding and decoding processes can be related to the sum of the degree values of the encoded symbols:

E_{\mathrm{xor}} = \sum_{i=1}^{K} x\, n_i\, \mu(d_i)    (1)

where μ(d_i) is the degree distribution function evaluated at degree d_i, x is the expected encoded symbol at instant i, and n_i is its degree value. E_xor is the total number of exclusive-or operations, which accounts for the computational complexity of LT codes. The encoded symbols should have high degrees to cover all the information symbols, but higher degrees mean more exclusive-or operations and, intuitively, higher computational complexity. The degree distribution ensures that all the information symbols are covered by the encoded symbols. It also keeps the size of the ripple small, to avoid redundant coverage of the information symbols in the encoded codewords.

3 Framework for optimized degree distribution

This section details the optimization framework for the degree distribution in LT codes. The degree distribution is studied as a function of power law, and its impact on LT codes in terms of computational complexity and overhead is considered.

3.1 Analysis of LT process

The analysis and role of the degree distribution are of significant importance in LT codes. From the encoding and decoding procedures, important characteristics of LT codes can be identified. Encoding and decoding depend significantly on the degree distribution and follow a specific pattern, which can be organized into a data structure called a Tanner graph. A Tanner graph is a bipartite graph in which the nodes in the first set are the information symbols and those in the second set are the encoded symbols, as shown in Fig. 1. The bipartite graph can be considered as a network consisting of vertices: the nodes representing the information symbols and the information edges representing the encoded symbols. The power law then asserts that the probability that an arbitrary information edge has degree n is proportional to n^{−β} for some scaling exponent β ≥ 1 [9]. Mathematically,

P[n] ∝ n^{−β}    (2)

These random bipartite graphs, generated in LT codes using a power-law degree distribution function, closely resemble real-world complex networks such as the world wide web (WWW), metabolic and protein networks, and the size distribution of earthquakes, and thus have broad practical applications. A degree distribution following the power law can be easily identified by plotting it on the logarithmic scale, where it appears as a straight line. Note that the ISD and RSD, plotted on the logarithmic scale, appear as straight lines, as shown in Fig. 3. The analysis of the degree distribution function categorizes LT codes into two different categories: 1) scale free LT (SFLT) codes, and 2) random LT (RLT) codes.
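The straight-line signature can be checked numerically: an ordinary least-squares fit of log p(d) against log d recovers the exponent, since for p(d) ∝ d^{−β} the slope is approximately −β. A small sketch:

```python
from math import log

def loglog_slope(degrees, probs):
    """Least-squares slope of log(p) versus log(d).

    For a power-law distribution p(d) proportional to d**(-beta),
    the returned slope is approximately -beta.
    """
    xs = [log(d) for d in degrees]
    ys = [log(p) for p in probs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Applied to an exact power law the fit is exact; applied to the RSD it is only approximately linear because of the spike near k/R.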

When the degree distribution follows the power law, the resulting network formed by the nodes and information edges is referred to as an SFLT code. When the degree distribution function instead follows the Poisson distribution, the generated network resembles a random network. In LT codes, the SFLT code generated is one-dimensional, i.e., the bipartite graph grows by adding nodes and edges one at a time. The simple case is dividing time according to the number of information edges and combining all the nodes in a super node (information edge) by the exclusive-or operation. A bipartite graph generated according to a power-law degree distribution with the same exponent regardless of the choice of time scale is referred to as scale free. LT codes that use such a scale free network for generating the encoded information are referred to as SFLT codes. Note that the bipartite graph should be invariant with respect to time in the sense that, if the length of the encoded symbols is changed, the modified graph should satisfy the power law with the same degree exponent.

There is a major topological difference between random and scale free graphs: in random graphs generated using the Poisson distribution, most nodes have about the same number of information edges [10]. In scale free graphs, nodes with only a few information edges are numerous, while a few nodes have a large number of information edges.

3.2 Optimized degree distribution

In this section, two optimized degree distribution functions are proposed that follow the power law and thus serve as degree distributions for SFLT codes.

There are two mechanisms that give rise to power-law based bipartite graphs. First, most graphs grow through the addition of new nodes that link to nodes already present in the system. Second, under preferential attachment, there is a higher probability of linking to a node that already has a large number of connections, and thus large degrees appear. Growth and preferential attachment inspire the development of SFLT codes. Two distinct degree distribution functions that follow the characteristics of SFLT codes are proposed and described below.

Definition 1: Pareto degree distribution (PTD)

The proposed degree distribution is a modified version of the Pareto distribution function. Note that the modification of the Pareto distribution is only intended for the successful execution of the LT process. For a finite number of encoded symbols K, the bipartite graph for LT codes using a stochastic degree distribution is a fixed graph over the finite set of k information symbols. We assume that each degree d is drawn uniformly at random from the degree distribution function, given by

p(d) = \begin{cases} n_0/R, & \text{for } d = 1 \\ y/d^{\delta+1}, & \text{for } 2 \le d \le k \end{cases}    (3)

where y > 0 is a scale parameter and δ is the shape parameter, which measures the heaviness of the upper tail. The n_0 in Eq. (3) is a function of k chosen such that \sum_{d=1}^{k} p(d) = 1.
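Under the reading p(d) = y/d^(δ+1) for 2 ≤ d ≤ k, with p(1) fixed by the normalization above, the distribution can be tabulated and sampled directly. In this sketch the default parameters mirror the values used later in the simulations (δ=2, y=0.96), and `sample_degree` is an illustrative inverse-transform helper:

```python
import random

def ptd_probs(k, y=0.96, delta=2):
    """Pareto-style degree distribution sketch: p(d) = y / d**(delta+1)
    for 2 <= d <= k, with p(1) set so the probabilities sum to one."""
    p = [0.0] * (k + 1)
    for d in range(2, k + 1):
        p[d] = y / d ** (delta + 1)
    p[1] = 1.0 - sum(p)                 # normalization fixes the d=1 mass
    return p

def sample_degree(p):
    """Draw a degree from the tabulated distribution by inverse transform."""
    u, acc = random.random(), 0.0
    for d in range(1, len(p)):
        acc += p[d]
        if u <= acc:
            return d
    return len(p) - 1
```

With δ=2 and y=0.96, the tail mass is about 0.19, so roughly four out of five encoded symbols get degree one, which keeps the ripple populated at the start of decoding.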

Lemma 1: Number of encoding symbols in PTD

The number of encoding symbols is K = k(1 + O(k^{−(δ+1)} ln(k))).

Proof: The necessary condition for successful decoding with Eq. (3) is that every information symbol in the Tanner graph is connected to at least one encoded symbol. The expected number of encoded symbols that satisfy this condition is

E[p(d)] = k\,\frac{n_0}{R} + k\sum_{d=2}^{k} \frac{y}{d^{\delta+1}}

We assume that n_0 << k and that y and δ are suitable constants throughout the degree distribution function for different lengths of information symbols k. Then

E[p(d)] = ρ + kyδH(k)    (4)

where ρ is a constant value and H(k) is the k-th harmonic number of order δ+1,

H(k) = \ln k + γ + O(k^{−1})

where γ ≈ 0.577 is Euler's constant. The harmonic series in Eq. (4) converges for δ ≥ 2. For constant values of δ and y, the expected number of encoded symbols is

E[p(d)] = k(1 + O(k^{−(δ+1)} \ln(k)))

Lemma 2: Average degree of an encoding symbol in PTD

The average degree of an encoding symbol is D = O(ln(k)).

Proof:

D = \sum_{d=1}^{k} d\,p(d) = \frac{n_0}{R} + y\sum_{d=2}^{k} \frac{1}{d^{\delta}} = ρ + O(\ln(k))    (5)

where ρ is a constant value. It is essential to note that, for any constant values of δ and y, Eq. (5) resembles Eq. (2), ensuring that the degree distribution for any value of k follows the SFLT codes.

Definition 2: Power degree distribution (PDD)

The power degree distribution (PDD) is a heuristic design based on the parabolic distribution function. The parabolic distribution function is given by

\varphi(d) = \frac{6(d_0 + 1 - d)^2}{d_0(d_0+1)(2d_0+1)},  d \in \{1, 2, \ldots, d_0\}    (6)

From the maximum to the minimum value of d, this function traces a monotonic parabolic shape. It attains its maximum at d = 1 and its minimum at d_0 ≤ k. Here, d_0 is the free parameter of the distribution (in general a function of k), which is responsible for fine tuning. The sum of φ(d) over all values of d up to d_0 equals one. Simulation of LT codes using the parabolic distribution alone shows poorer performance than conventional degree distribution functions. Thus, a modified form of Eq. (6) is used in the PDD function, given by

\theta(d) = \begin{cases} n_0/R, & \text{for } d = 1 \\ (1 - \theta(1))\,\varphi(d), & \text{for } 2 \le d \le [4k/5] \\ c\,(d-1)/(k(k+1-d)), & \text{for } [4k/5]+1 \le d \le k \end{cases}    (7)

where the value of n_0 is chosen such that the degree distribution sums to nearly one, and c << 1 is a suitable constant. To ensure that the ripple starts off at the initial phase of the LT process, θ(d) is initialized at a reasonable size. The other two segments of the degree distribution function are associated with the parabolic distribution, with some high-degree symbols available at the end to ensure the success of the LT process. Figure 4 shows the power and Pareto distribution functions on the logarithmic scale.

Fig. 4 Degree distribution function of power and Pareto distribution on logarithmic scale
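The parabolic profile φ(d) underlying the PDD can be tabulated concretely. This sketch uses one self-consistent normalization (maximum at d = 1, monotonically decreasing, summing to one over d = 1…d_0), an assumption where the printed formula is ambiguous:

```python
def parabolic_probs(d0):
    """Normalized, monotonically decreasing parabolic degree profile:
    phi(d) = 6*(d0 + 1 - d)**2 / (d0*(d0 + 1)*(2*d0 + 1)), d = 1..d0.
    The denominator is 6 * sum_{j=1}^{d0} j**2, so the weights sum to one.
    Index 0 of the returned list is unused."""
    norm = d0 * (d0 + 1) * (2 * d0 + 1)
    return [0.0] + [6 * (d0 + 1 - d) ** 2 / norm for d in range(1, d0 + 1)]
```

Because the squared term vanishes at d = d_0 + 1, the profile decays quadratically toward the maximum degree, which is what keeps the average degree low in Lemma 4.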

Lemma 3: Number of encoding symbols for PDD

The number of encoding symbols in LT codes for PDD is given by K = k(1 + O(ln^2(k)/k)).

Proof: The expected number of encoded symbols using the PDD can be calculated as

E[\theta(d)] = k\,\theta(1) + k\sum_{d=2}^{[4k/5]} (1-\theta(1))\,\varphi(d)    (8)

The n_0 is a function of k and thus has a constant expected value. Note that the last segment of the distribution function recursively uses the previous values and is ignored in the expectation.

For suitable constant values of d_0 such that 1 ≤ d_0 ≤ 1.5, chosen according to the length of the information symbols, the expected value is

E[\theta(d)] = k(1 + O(\ln^2(k)/k))    (9)

Lemma 4: Average degree of an encoded symbol for PDD

The average degree of an encoded symbol for PDD is D = O(d_0).

Proof:

D = \sum_{d=1}^{d_0} d\,\varphi(d) = \frac{6}{d_0(d_0+1)(2d_0+1)} \sum_{d=1}^{d_0} d\,(d_0+1-d)^2 = \frac{(d_0+1)(d_0+2)}{2(2d_0+1)} = O(d_0)    (10)

The advantage of such a degree distribution is that it ensures low-degree encoded symbols, which help the LT process decode successfully with lower computational complexity. Further degree distributions can be defined and their properties determined as presented in Ref. [11].

4 Performance evaluation

In this section, the performance of the proposed degree distributions is presented. First, the objectives and the simulation parameters are discussed. Then, the simulation results are presented and their behavior is analyzed.

4.1 Objectives

In this work, the degree distribution is optimized mainly to reduce the overhead α and the computational complexity, for an efficient implementation of LT codes. The number of encoded symbols that can recover the original information is given by K=(1+α)k. A comparison of the number of encoded symbols required, in terms of overhead, with respect to the number of information symbols is shown in Fig. 5(a). Note that, following the classical balls-and-bins process, the additional encoded symbols necessary to cover all the bins (the information symbols in the decoding process) are neglected, to ensure the collection of encoded symbols needed to recover the original information.

Also, the computational complexity indicator (CCI) is plotted against the number of information symbols k. The CCI can be defined as

C = \frac{R\,E_{\mathrm{xor}}}{k\,i}    (11)

where R is the code rate, E_xor is the total number of exclusive-or operations in the decoding of LT codes, k is the number of information symbols, and i is the number of iterations associated with each information symbol. The CCI increases with the length of the information symbols, as more exclusive-or operations are performed. A comparison of CCI versus the information symbols is presented in Fig. 5(b).
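Reading Eq. (11) as C = R·E_xor/(k·i), where the exact grouping is an assumption recovered from the surrounding definitions, the indicator reduces to a one-line computation:

```python
def cci(exor_total, rate, k, iterations):
    """Computational complexity indicator sketch: normalize the total
    number of exclusive-or operations by the code rate, the message
    length k, and the iterations spent per information symbol."""
    return rate * exor_total / (k * iterations)
```

In a simulation, `exor_total` would be a counter incremented on every XOR performed by the decoder, so CCI can be logged per run and averaged.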

The redundancy in LT codes is traded to provide better convergence towards the information symbols, and additional redundant symbols increase the overhead. The convergence of the received encoded symbols is compared for different overheads in LT codes over an ideal channel using the belief propagation algorithm for LT codes [12] in Fig. 6. An ideal channel refers to an environment where the encoded symbols are not affected by the channel, so any errors in the system are associated with the performance of the encoding and decoding algorithms. The bit error rate (BER) versus overhead curve in Fig. 6 demonstrates the performance of the decoding algorithm at a particular overhead using the different degree distribution functions.

The simulation parameters considered in this work are δ=2 and y=0.96 for LT codes using PTD, and d0=1.115 and c=0.003 for LT codes using PDD. RSD is simulated with parameters c=0.1 and δ=1, since these provide the smallest average overhead [13], with the ripple size parameter chosen as suggested in Ref. [4]. The numbers of encoded symbols are compared with the information symbols at k=256, 512, 768, 1 024, 1 280, 1 536, 1 792 and 2 048 in Fig. 5, with 1 000 independent random simulation runs.

Fig. 5 Performance of degree distributions for LT codes: (a) Average overhead vs information symbols; (b) Computational complexity indicator vs information symbols

Fig. 6 BER vs overhead using BP decoding for LT codes


4.2 Simulation results

Figure 5(a) compares the encoded symbols versus the information symbols and demonstrates the performance of the proposed degree distribution functions. From Fig. 5(a), the relative number of encoded symbols needed for a given number of information symbols decreases significantly as the length of the information symbols increases. The comparison of the encoded symbols for a particular number of information symbols with different degree distributions is also evident: the proposed schemes perform significantly better than RSD-based LT codes. There is a difference of nearly 0.062 and 0.092 in the overhead factor for PTD and PDD, respectively, compared with RSD for a given number of information symbols. Thus, there are 32% and 48% average improvements in the performance of the LT codes compared with LT codes based on RSD.

Figure 5(b) shows the computational complexity of LT codes using the different degree distribution functions. Computational complexity is an essential performance parameter, as it reflects the network complexity of the LT codes. Note that the network complexity significantly impacts various constraints on the physical implementation of this coding scheme.

The CCI in Fig. 5(b) represents the normalized computational complexity according to Eq. (11), plotted versus the number of information symbols. The PDD performs significantly better in terms of CCI than the other two degree distribution functions. There are average differences of 0.057 and 0.197 for PTD and PDD, respectively, corresponding to average improvements of 1.3% and 4.62% over RSD-based LT codes.

Figure 6 shows the BER versus the overhead. PDD and PTD recover the original information with higher probability than RSD when the BP decoding algorithm for LT codes is used. The results in Fig. 6 are averages of 1 000 simulation runs with information symbol length k=1 000.

5 Conclusions

The degree distribution function used in LT codes is analyzed from the perspective of power law. The analysis classifies LT codes into two broad categories, SFLT and RLT codes. Two distinct degree distribution functions, the PTD and the PDD, are proposed and analyzed for SFLT codes from the perspective of the overhead and computational complexity involved in LT codes. Simulation results for the proposed degree distribution functions show significant improvements in overhead and computational complexity compared with RSD-based LT codes. The presented work can be extended to Raptor codes and other network coding schemes for their efficient implementation.

References

[1] BYERS J, LUBY M, MITZENMACHER M, REGE A. A digital fountain approach to reliable distribution of bulk data [C]// Proceedings of ACM SIGCOMM. Vancouver, 1998: 55−67.

[2] LUBY M. LT codes [C]// Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science. Vancouver, Canada, 2002: 271−280.

[3] TARUS H, BUSH J, IRVINE J, DUNLOP J. Exploiting redundancies to improve performance of LT decoding [C]// Proceedings of the 6th Annual Conference on Communication Networks and Services Research (CNSR 2008). Halifax, 2008: 568−573.

[4] HYYTIÄ E, TIRRONEN T, VIRTAMO J. Optimizing the degree distribution of LT codes with an importance sampling approach [C]// 6th International Workshop on Rare Event Simulation. Bamberg, 2006: 64−73.

[5] ZHU H, ZHANG G, LI G. A novel degree distribution algorithm of LT codes [C]// The 11th IEEE International Conference on Communication Technology. Hangzhou, 2008.

[6] SØRENSEN J H, POPOVSKI P, ØSTERGAARD J. Design and analysis of LT codes with decreasing ripple size [J]. IEEE Transactions on Communications, 2012, 60(11): 3191−3197.

[7] CHEN C M, CHEN Y P, SHEN T C, ZAO J K. Optimizing the degree distributions in LT codes by using the multi-objective evolutionary algorithm based on decomposition [C]// Proceedings of the IEEE Congress on Evolutionary Computation. Barcelona, 2010: 3635−3642.

[8] CHEN C M, CHEN Y P, SHEN T C, ZAO J K. A practical optimization frame for the degree distribution in LT codes [R]. NCLab Report No. NCL-TR-2011001, 2011.

[9] BARABASI A L, DEZSO Z, RAVASZ E, YOOK S H, OLTVAI Z. Scale free and hierarchical structures in complex networks [C]// American Institute of Physics (AIP) Conference Proceedings. Granada, 2002: 1−16.

[10] NEWMAN M E J. Power laws, Pareto distributions and Zipf's law [J]. Contemporary Physics, 2005, 46(5): 323−351.

[11] LEI Min, ZHAO Qing-gui, HOU Zheng-ting. Three vertex degree correlations of fixed act-size collaboration networks [J]. Journal of Central South University, 2011, 18(3): 830−833.

[12] STOCKHAMMER T, JENKAC H, MAYER T, XU W. Soft decoding of LT codes for wireless broadcast [C]// Proceedings of IST Mobile. Dresden, Germany, 2005: 262−264.

[13] UYEDA F, XIA H, CHEN A A. Evaluation of a high performance erasure code implementation [R]. University of California, San Diego, 2004.

(Edited by YANG Bing)