
Research Article

A New Approach for Chaotic Time Series Prediction Using Recurrent Neural Network

Qinghai Li and Rui-Chang Lin

Department of Electronic Engineering, Zhejiang Industry and Trade Vocational College, East Road 717, Wenzhou 325003, China

Correspondence should be addressed to Rui-Chang Lin; [email protected]

Received 4 July 2016; Accepted 31 October 2016

Academic Editor: Lijun Zhu

Copyright © 2016 Q. Li and R.-C. Lin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A self-constructing fuzzy neural network (SCFNN) has been successfully used for chaotic time series prediction in the literature. In this paper, we propose the strategy of adding a recurrent path in each node of the hidden layer of SCFNN, resulting in a self-constructing recurrent fuzzy neural network (SCRFNN). This novel network does not increase complexity in the fuzzy inference or learning process. Specifically, the structure learning is based on partition of the input space, and the parameter learning is based on the supervised gradient descent method using a delta adaptation law. This novel network can also be applied to chaotic time series prediction, including the Logistic and Henon time series. More significantly, it features faster convergence and higher prediction accuracy.

1. Introduction

A chaotic time series can be expressed as a deterministic dynamical system that, however, is usually unknown or incompletely understood. Therefore, it becomes important to make predictions from the experimental observation of a real system. The technology has been widely studied in many science and engineering fields such as mathematical finance, weather forecasting, and intelligent transport and trajectory forecasting. The early research on nonlinear prediction of chaotic time series can be found in [1]. Most recent work mainly focuses on different methods for improving the prediction performance, such as fuzzy neural networks [2, 3], RBF networks [4, 5], recurrent neural networks [6, 7], back-propagation recurrent neural networks [8], predictive consensus networks [9, 10], and biologically inspired neuronal networks [11, 12].

Both fuzzy logic and artificial neural networks are potentially suitable for nonlinear chaotic series prediction as they can perform complex mappings between their input and output spaces. In particular, the self-constructing fuzzy neural network (SCFNN) [13] is capable of constructing a simple network without requiring prior knowledge of the chaotic series. This capability is due to SCFNN's ability to self-adjust the location of the input space fuzzy partition, so there is no need to estimate the distribution of the series states in advance. Moreover, carefully setting conditions on the increasing demand for fuzzy rules keeps the architecture of the constructed SCFNN fairly simple. These advantages motivated researchers to build various chaotic series prediction algorithms by employing an SCFNN structure in, for example, [14, 15].

Neural networks have wide applications in various areas; see, for example, [16–21]. In particular, the recurrent neural network (RNN) has proved successful in speech processing and adaptive channel equalization. One of the most important features of an RNN is the feedback paths in its circuit, which give it a sequential rather than a combinational behavior. RNNs have been applied not only to the processing of time-varying patterns or data sequences but also to handling the dissonance of input patterns, where possibly different outputs are generated by the same set of input patterns due to the feedback paths. Since an RNN is a highly nonlinear dynamical system that exhibits rich and complex behaviors, it is expected that RNNs possess better performance than traditional signal processing techniques in modeling and predicting chaotic time series. Some works on improving the performance of chaotic series prediction using RNNs can be found in, for example, [6, 7].


From the above observations, it is a novel idea to combine the SCFNN and RNN techniques for chaotic series prediction, which results in a new architecture called the self-constructing recurrent fuzzy neural network (SCRFNN) in this paper. The structure learning and parameter learning algorithms in SCRFNN are inherited from those in SCFNN, which maintains simplicity in implementation. Nevertheless, the recurrent path in SCRFNN makes its functional behavior more complex and lets it exhibit richer behaviors. Extensive numerical simulation shows that both SCFNN and SCRFNN are effective in predicting chaotic time series, including the Logistic series and the Henon series, but the latter has superior performance in convergence rate and prediction accuracy at the cost of a slightly heavier structure (number of hidden nodes) and more fuzzy logic rules.

It is noted that a similar neural network structure has been used for nonlinear channel equalizers in [20]. The purpose of an equalizer in, for example, a wireless communication system, is to recover the transmitted sequence or its delayed version using a trained neural network. But the chaotic time series prediction studied in this paper has a completely different mechanism. Chaotic time series prediction is the problem of developing a dynamic model from the observed time series of a nonlinear chaotic system that exhibits deterministic behavior with a known initial condition. A neural network is used to build the chaotic time series prediction after the so-called phase space reconstruction, which is not touched in channel equalizers.

The rest of this paper is organized as follows. In Section 2, the problem of chaotic series prediction is formulated. The structure, the inference output, and the learning algorithms of SCRFNN are described in Section 3. Numerical simulation results on the two classes of Logistic and Henon chaotic series predictions are presented in Section 4. Finally, the paper is concluded in Section 5.

2. Background and Preliminaries

The phase space reconstruction theory is commonly used for chaotic time series prediction; see, for example, [22, 23]. The main idea is to find a way to reconstruct the phase space from the time series and then conduct prediction in the phase space. The theory is briefly revisited in this section, followed by a general neural network prediction model.

Delay embedding is the main technique for phase space reconstruction. Assume a chaotic time series is represented as $x_1, x_2, \ldots, x_n$ and define the phase space vectors

$$X_i = \left(x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}\right)^T, \quad i = 1, 2, \ldots, M. \tag{1}$$

The parameter $\tau$ is the delay, $m$ the embedding dimension, and $M = n - (m-1)\tau$ the number of phase vectors. A prediction model contains an attractor that warps the observed data in the phase space and provides precise information about the dynamics involved. Therefore, it can be used to predict $X_{i+1}$ from $X_i$ in phase space and hence synthesize $x_{i+1}$ using time series reconstruction. The procedure can be summarized as a model with input $X_i$ and output $x_{i+1}$.
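As an illustration, the delay embedding in (1) takes only a few lines of code. The following is a minimal sketch assuming a NumPy environment; the function name delay_embed and the test series are our own, not from the paper.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Build phase space vectors X_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau})^T, eq. (1)."""
    n = len(x)
    M = n - (m - 1) * tau                    # number of phase vectors
    return np.array([x[i : i + (m - 1) * tau + 1 : tau] for i in range(M)])

# Example: embed a scalar series with dimension m = 3 and delay tau = 1.
x = np.sin(0.1 * np.arange(100))
X = delay_embed(x, m=3, tau=1)               # shape (M, m) with M = 98
```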

Takens' theorem provides the conditions under which a smooth attractor can be reconstructed from the observations made with a generic function. The theorem states that, if the embedding dimension satisfies $m \ge 2d + 1$, where $d$ is the dimension of the system dynamics, then the phase space constituted by the original system state variables and the dynamic behaviors in the one-dimensional sequence of observations are equivalent, in the sense that the chaotic attractors in the two spaces are homeomorphic. The reconstructed system, which includes the evolution information of all state variables, is capable of calculating the future state of the system based on its current state, which provides a basic mechanism for chaotic time series prediction.

As explained before, the prediction model with input $X_i$ and output $x_{i+1}$ provides precise information about the nonlinear dynamics under consideration. A neural network (NN) is an appropriate structure to build a nonlinear model for chaotic time series prediction. Specifically, a typical three-layer NN is discussed below.

When we apply a three-layer NN to predict a chaotic time series, better prediction performance can be achieved if the number of neurons in the input layer is equal to the embedding dimension of the phase space reconstructed from the chaotic time series. Specifically, let the number of neurons in the input layer be $m$, that of the hidden layer be $p$, and that of the output layer be 1. Then, the NN describes a mapping $f: \mathbb{R}^m \to \mathbb{R}$.

The input for the nodes of the hidden layer is

$$S_j = \sum_{i=1}^{m} \omega_{ij} x_i - \theta_j, \quad j = 1, 2, \ldots, p, \tag{2}$$

where $\omega_{ij}$ is the link weight from the input layer to the hidden layer and $\theta_j$ is the threshold. Assume the sigmoid transfer function $f(s) = 1/(1+e^{-s})$ is used for the NN. Then the output of the hidden layer nodes is

$$b_j = \frac{1}{1 + \exp\left(-\sum_{i=1}^{m} \omega_{ij} x_i + \theta_j\right)}, \quad j = 1, 2, \ldots, p. \tag{3}$$

Similarly, the input and the output of the output layer node are

$$L = \sum_{j=1}^{p} V_j b_j - \gamma, \qquad x_{i+1} = \frac{1}{1 + \exp\left(-\sum_{j=1}^{p} V_j b_j + \gamma\right)}, \tag{4}$$

respectively. Here $V_j$ is the link weight from the hidden layer to the output layer and $\gamma$ is the threshold.

In general, the aforementioned link weights $(\omega_{ij}, V_j)$ and thresholds $(\theta_j, \gamma)$ of the NN can be randomly initialized, for example, within $[0, 1]$. Then, they can be properly trained such that the resulting NN has the capability of precise prediction. The specific training strategy depends on the specific architecture of the NN, which is the main scope of this research line and will be elaborated in the next section.
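To make the mapping concrete, a minimal sketch of the forward pass (2)–(4) is given below, assuming NumPy; the sizes and the random initialization within [0, 1] follow the description above, while all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 3, 8                                  # input and hidden layer sizes

omega = rng.random((m, p))                   # input -> hidden link weights w_ij
theta = rng.random(p)                        # hidden thresholds theta_j
V = rng.random(p)                            # hidden -> output link weights V_j
gamma = rng.random()                         # output threshold gamma

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def forward(x):
    """Three-layer NN mapping f: R^m -> R, per eqs. (2)-(4)."""
    S = x @ omega - theta                    # hidden layer inputs, eq. (2)
    b = sigmoid(S)                           # hidden layer outputs, eq. (3)
    return sigmoid(V @ b - gamma)            # network output, eq. (4)

x_next = forward(np.array([0.2, 0.5, 0.7])) # predicted x_{i+1} from X_i
```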


Figure 1: Structure of a recurrent neural network.

3. Self-Constructing Recurrent Fuzzy Neural Network

Following the general description of the NN prediction model introduced in the previous section, we propose a specific architecture with the features of fuzzy logic, a recurrent path, and self-constructing ability, called a self-constructing recurrent fuzzy neural network (SCRFNN). With these features, the SCRFNN is able to exhibit rapid convergence and high prediction accuracy.

3.1. Background of SCRFNN. The network structure of a traditional fuzzy neural network (FNN) is determined in advance. During the learning process, the structure is fixed and a supervised back-propagation algorithm is applied to adjust the membership function parameters and the weighting coefficients. Such an FNN with a fixed structure usually needs a large number of hidden layer nodes for acceptable performance, which significantly increases the system complexity [24, 25].

To overcome the aforementioned drawback of a fixed structure, the SCRFNN used in this paper exploits a two-phase learning algorithm that includes both structure learning and parameter learning. Specifically, the structure of fuzzy rules is determined in the first phase and the coefficients of each rule are tuned in the second one. It is conceptually easy to carry out the two phases sequentially. However, this is suitable only for offline operation with a large amount of representative data collected in advance [26]. Moreover, independent realization of structure and parameter learning is time-consuming. These disadvantages are eliminated in SCRFNN as the two phases of structure learning and parameter learning are conducted concurrently [27].

Figure 2: Schematic diagram of SCRFNN.

The main feature of the proposed SCRFNN is the novel recurrent path in the circuit. The schematic diagram of a recurrent neural network (RNN) is shown in Figure 1 [28], where, for example, $x$ and $y_k$, $k = 1, 2, 3$, denote the external input and the unit outputs, respectively. The dynamics of such a structure can be described by, for $k = 1, 2, 3$,

$$s_k[t+1] = \sum_{l=1}^{n} \omega_{kl}[t]\, y_l[t] + \sum_{l=1}^{m} \omega_{k,l+n}[t]\, x^{\mathrm{net}}_l[t], \qquad y_k[t+1] = f\left(s_k[t+1]\right), \tag{5}$$

where $m$ and $n$ denote the numbers of external inputs and hidden layer units, respectively, $\omega_{ij}[t]$ is the connection weight from the $j$th unit to the $i$th unit at time instant $t$, and the activation function $f(\cdot)$ can be any real differentiable function. Clearly, the output of each unit depends on the previous external inputs to the network as well as the previous outputs of all units. The training algorithms for the recurrent parameters of RNNs have been well studied; see, for example, the real-time recurrent learning (RTRL) algorithm [29].

3.2. Structure and Inference Output of SCRFNN. The schematic diagram of SCRFNN is shown in Figure 2. The fuzzy logic rule and the functions in each layer are briefly described as follows.

The fuzzy logic rule adopted in the SCRFNN has the following form:

$$R_j:\ \text{If } \left(x_1 + \omega^R_{j1} O^{(2)}_{j1}(t-1)\right) \text{ is } A_{j1} \text{ and } \left(x_2 + \omega^R_{j2} O^{(2)}_{j2}(t-1)\right) \text{ is } A_{j2}, \ldots, \text{ and } \left(x_n + \omega^R_{jn} O^{(2)}_{jn}(t-1)\right) \text{ is } A_{jn}, \text{ then } y(t) = \omega_j \tag{6}$$

with $n$ input variables and a constant consequence $\omega_j$. In (6), $A_{ji}$ is the linguistic term of the precondition part with membership function $\mu_{A_{ji}}$, the $x_i$'s and $y$ denote the input and output variables, respectively, and $\omega^R_{ji}$ and $O^{(2)}_{ji}(t-1)$ represent the recurrent coefficient and the last state output of the $j$th term associated with the $i$th input variable.

The nodes in Layer 1, called the input nodes, simply pass the input signals to the next layer. There are two input variables $x_1$ and $x_2$ and two corresponding outputs $O^{(1)}_1(t) = x_1$ and $O^{(1)}_2(t) = x_2$ for the problem considered in this paper.

Each node in Layer 2 acts as a linguistic label for the input variables from Layer 1. Let $O^{(2)}_{ji}(t)$ be the output of the $j$th rule associated with the $i$th input node in status $t$ and $\omega^R_{ji}$'s the recurrent coefficients. Then, the Gaussian membership function is determined in this layer as follows:

$$O^{(2)}_{ji}(t) = \exp\left(-\frac{\left(O^{(1)}_i(t) + \omega^R_{ji} O^{(2)}_{ji}(t-1) - m_{ji}\right)^2}{\sigma^2_{ji}}\right) \tag{7}$$

with the mean $m_{ji}$ and the standard deviation $\sigma_{ji}$.

Each node in Layer 3, represented by the product symbol $\Pi$, works as the precondition part of the fuzzy logic rule. Specifically, the output of the $j$th rule node in status $t$ is expressed as

$$O^{(3)}_j(t) = \prod_i O^{(2)}_{ji}(t). \tag{8}$$

The single node in Layer 4, called the output node, acts as a defuzzifier. As a result, the final output of the SCRFNN is

$$O^{(4)}(t) = \sum_j \omega_j O^{(3)}_j(t), \tag{9}$$

where the link weight $\omega_j$ is the output action strength associated with the $j$th rule.
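Putting (7)–(9) together, one inference step through the four layers can be sketched as below, assuming NumPy arrays with rules along the first axis and inputs along the second; O2_prev carries the Layer 2 outputs from the previous time instant. All names are ours.

```python
import numpy as np

def scrfnn_forward(x, wR, mean, sigma, w, O2_prev):
    """One SCRFNN inference step, eqs. (7)-(9).
    x: (n,) inputs; wR, mean, sigma, O2_prev: (M, n) rule parameters/state;
    w: (M,) consequent weights. Returns (output, new Layer 2 state)."""
    arg = x + wR * O2_prev - mean            # recurrent Gaussian argument
    O2 = np.exp(-(arg ** 2) / sigma ** 2)    # Layer 2 memberships, eq. (7)
    O3 = O2.prod(axis=1)                     # Layer 3 firing strengths, eq. (8)
    O4 = w @ O3                              # Layer 4 defuzzified output, eq. (9)
    return O4, O2
```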

3.3. Online Learning Algorithm for SCRFNN. As discussed in Section 3.1, a two-phase learning algorithm for structure learning and parameter learning is used for SCRFNN. The initial SCRFNN is composed of the input and output nodes only. The membership and rule nodes are dynamically generated and adjusted according to the online data by performing the structure and parameter learning processes. These two phases are explained below.

The structure learning algorithm aims to find the proper input space fuzzy partitions and fuzzy logic rules with minimal fuzzy sets and rules. As the initial SCRFNN contains no membership or rule node, the main work in structure learning is to decide whether it is necessary to add a new membership function node in Layer 2 and the associated fuzzy logic rule in Layer 3. The criterion for generating a new fuzzy rule for new incoming data is based on the firing strengths $O^{(3)}_j(t) = \prod_i O^{(2)}_{ji}(t)$, $j = 1, \ldots, M$, where $M$ is the number of existing rules.

Specifically, if the maximum degree $\mu_{\max} = \max_{1 \le j \le M} O^{(3)}_j(t)$ is not larger than a prespecified threshold parameter $\mu_{\min}$, a new membership function needs to be generated. The mean value and the standard deviation of the new membership function are assigned as $m^{\mathrm{new}}_i = x_i$ and $\sigma^{\mathrm{new}}_i = \sigma_i$, respectively, where $x_i$ is the new incoming data and $\sigma_i$ is an empirical prespecified constant. The value $\mu_{\min}$ is initially chosen between 0 and 1 and then keeps decaying in order to limit the growing size of the SCRFNN structure.

The parameter learning algorithm aims to minimize a predefined energy function by adaptively adjusting the vector of network parameters based on a given set of input-output pairs. The particular energy function used in the SCRFNN is as follows:

$$E = \frac{1}{2}\left(O_d - O^{(4)}(t)\right)^2, \tag{10}$$

where $O_d$ is the desired output associated with the input pattern and $O^{(4)}(t)$ is the inferred output. The vector is adjusted along the negative gradient of the energy function with respect to the vector. In the four-layer SCRFNN, a back-propagation learning rule is adopted as the gradient vector is calculated in the direction opposite to the data flow, as described below.

First, the link weight $\omega_j$ for the output node in Layer 4 is updated along the negative gradient of the energy function; that is,

$$\Delta\omega_j = \frac{\partial E}{\partial \omega_j} = -\left(O_d - O^{(4)}(t)\right) O^{(3)}_j(t). \tag{11}$$

In particular, it is updated according to

$$\omega_j(k+1) = \omega_j(k) - \eta_\omega \Delta\omega_j, \tag{12}$$

where $k$ is the pattern number of the $j$th link and the factor $\eta_\omega$ is the learning-rate parameter.

Next, in Layer 3, the mean $m_{ji}$ and the standard deviation $\sigma_{ji}$ of the membership functions are updated by

$$m_{ji}(k+1) = m_{ji}(k) - \eta_m \Delta m_{ji}, \qquad \sigma_{ji}(k+1) = \sigma_{ji}(k) - \eta_\sigma \Delta\sigma_{ji}, \tag{13}$$


with the learning-rate parameters $\eta_m$ and $\eta_\sigma$. The terms $\Delta m_{ji}$ and $\Delta\sigma_{ji}$ are also calculated as gradients of the energy function as follows:

$$\Delta m_{ji} = \frac{\partial E}{\partial m_{ji}} = -\left(O_d - O^{(4)}(t)\right) \omega_j O^{(3)}_j(t) \cdot \frac{2\left(O^{(1)}_i(t) + \omega^R_{ji} O^{(2)}_{ji}(t-1) - m_{ji}\right)}{\sigma^2_{ji}},$$

$$\Delta\sigma_{ji} = \frac{\partial E}{\partial \sigma_{ji}} = -\left(O_d - O^{(4)}(t)\right) \omega_j O^{(3)}_j(t) \cdot \frac{2\left(O^{(1)}_i(t) + \omega^R_{ji} O^{(2)}_{ji}(t-1) - m_{ji}\right)^2}{\sigma^3_{ji}}. \tag{14}$$

Finally, the recurrent coefficient $\omega^R_{ji}$ in Layer 2 is updated by

$$\omega^R_{ji}(k+1) = \omega^R_{ji}(k) - \eta_R \Delta\omega^R_{ji}, \tag{15}$$

where

$$\Delta\omega^R_{ji} = \frac{\partial E}{\partial \omega^R_{ji}} = \left(O_d - O^{(4)}(t)\right) \omega_j \left(\prod_{l,\, l \neq i} O^{(2)}_{jl}(t)\right) \cdot \frac{2\left(O^{(1)}_i(t) + \omega^R_{ji} O^{(2)}_{ji}(t-1) - m_{ji}\right)^2}{\sigma^3_{ji}}\, O^{(2)}_{ji}(t-1) \tag{16}$$

is again the gradient of the energy function.
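For completeness, the updates (11)–(16) can be collected into a single training step. The sketch below uses the same array shapes as the forward-pass sketch (rules along the first axis, inputs along the second) and illustrative learning rates; it is our transcription of the formulas, not the authors' code.

```python
import numpy as np

def scrfnn_update(x, O_d, O4, O2, O3, O2_prev, params,
                  eta_w=0.05, eta_m=0.05, eta_s=0.05, eta_R=0.01):
    """One gradient step of eqs. (11)-(16) with err = O_d - O^(4)(t)."""
    err = O_d - O4
    w, mean, sigma, wR = (params[k] for k in ("w", "mean", "sigma", "wR"))
    arg = x + wR * O2_prev - mean                        # shared Gaussian argument
    dw = -err * O3                                       # eq. (11)
    dm = -err * (w * O3)[:, None] * 2 * arg / sigma**2   # eq. (14)
    ds = -err * (w * O3)[:, None] * 2 * arg**2 / sigma**3
    # eq. (16): product of the other Layer 2 factors times the recurrent input
    others = O3[:, None] / np.where(O2 > 0, O2, 1.0)
    dR = err * w[:, None] * others * 2 * arg**2 / sigma**3 * O2_prev
    params["w"] = w - eta_w * dw                         # eq. (12)
    params["mean"] = mean - eta_m * dm                   # eq. (13)
    params["sigma"] = sigma - eta_s * ds
    params["wR"] = wR - eta_R * dR                       # eq. (15)
    return params
```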

4. Simulation Results

In order to evaluate the effectiveness of the proposed SCRFNN, we apply it to two benchmark chaotic time series data sets: the Logistic series and the Henon series. The number of data points used for each benchmark problem is 2000. In particular, we use the first 1000 data points for training and the remaining 1000 for validation. It will be shown that both SCFNN and SCRFNN are effective in predicting the Logistic series and the Henon series, but the latter has superior performance in convergence rate and prediction accuracy at the cost of a slightly heavier structure (number of hidden nodes) and rules.

4.1. Logistic Chaotic Series. A Logistic chaotic series is generated by the following equation:

π‘₯π‘˜+1 = πœ‡π‘₯π‘˜ (1 βˆ’ π‘₯π‘˜) , π‘₯π‘˜ ∈ [0, 1] , πœ‡ ∈ [0, 4] . (17)

In particular, the parameter $\mu$ is restricted within the range $3.57 < \mu \le 4$ for chaotic behavior.
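A Logistic series for the experiments can be generated directly from (17); a minimal sketch, with ΞΌ = 4 and the initial condition chosen for illustration.

```python
import numpy as np

def logistic_series(n, mu=4.0, x0=0.3):
    """Iterate x_{k+1} = mu * x_k * (1 - x_k), eq. (17)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = mu * x[k] * (1.0 - x[k])
    return x

series = logistic_series(2000)   # first 1000 for training, remaining 1000 for validation
```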

The training performance of both SCFNN and SCRFNN is demonstrated in Figure 3, where the profiles with 5, 20, and 50 learning cycles are compared in three graphs, respectively. It is observed that the proposed learning algorithm is effective in terms of fast convergence. In all three cases, the convergence rate of the SCRFNN is faster than that of the SCFNN.

Figure 3: Comparison of convergence for 5, 20, and 50 learning cycles.

Root mean squared error (RMSE) is the main error metric for describing the training errors. It is explicitly defined as follows:

$$\mathrm{RMSE} = \left(\frac{1}{S-1} \sum_{i=1}^{S} \left[P_i - Q_i\right]^2\right)^{1/2}, \tag{18}$$

where $S$ is the number of trained patterns and $P_i$ and $Q_i$ are the FNN-derived values and the actual chaotic time series data, respectively. Table 1 shows the RMSEs of SCFNN and SCRFNN after completing 5, 20, and 50 learning cycles. In all three cases, SCRFNN results in smaller RMSEs than SCFNN.
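In code, (18) is a one-liner; note the 1/(S - 1) normalization used in the paper rather than the more common 1/S.

```python
import numpy as np

def rmse(P, Q):
    """Eq. (18): RMSE over S trained patterns with 1/(S - 1) normalization."""
    P, Q = np.asarray(P), np.asarray(Q)
    return np.sqrt(np.sum((P - Q) ** 2) / (len(P) - 1))
```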

For the well-trained SCFNN and SCRFNN, the prediction performance is compared in Figures 4 and 5. It is observed in Figure 4 that the predicted outputs from the SCFNN and the SCRFNN match the real data well. More specifically, the prediction errors are less significant in the SCRFNN than in the SCFNN, as shown in Figure 5.


Figure 4: Logistic series and the predictions with SCFNN and SCRFNN (chaotic time series 1251–1300 and 1751–1800).

Table 1: RMSE for SCFNN and SCRFNN after completing 5, 20, and 50 learning cycles.

FNN (learning cycles)    Logistic        Henon
SCFNN (5)                7.65499e-004    0.0015
SCRFNN (5)               4.51469e-004    0.0014
SCFNN (20)               5.0209e-004     0.0011
SCRFNN (20)              4.2344e-004     6.3094e-004
SCFNN (50)               4.7764e-004     7.8083e-004
SCRFNN (50)              4.189e-004      4.4918e-004

Finally, Table 2 shows the numbers of hidden nodes in SCFNN and SCRFNN after completing 5, 20, and 50 learning cycles. The SCRFNN has slightly more hidden nodes in all three cases. This heavier structure of SCRFNN, together with the extra rules for the recurrent path, is the cost of the aforementioned performance improvement.

Table 2: Number of hidden nodes in SCFNN and SCRFNN after completing 5, 20, and 50 learning cycles.

FNN (learning cycles)    Logistic    Henon
SCFNN (5)                4           7
SCRFNN (5)               5           7
SCFNN (20)               4           7
SCRFNN (20)              6           8
SCFNN (50)               4           7
SCRFNN (50)              5           10

4.2. Henon Chaotic Series. Henon mapping was proposed by the French astronomer Michel Henon for studying globular clusters [30]. It is one of the most famous simple dynamical systems with wide applications. In recent years, Henon mapping has been well studied in chaos theory. Its dynamics are given as follows:

π‘₯ (π‘˜ + 1) = 1 + π‘¦π‘˜ βˆ’ π‘Žπ‘₯2 (π‘˜) ,𝑦 (π‘˜ + 1) = 𝑏π‘₯ (π‘˜) . (19)

For example, chaos is produced with $a = 1.4$ and $b = 0.3$.

Similar simulations are conducted for the Henon series, with the corresponding results shown in Figure 6 for the comparison of convergence rates over 5, 20, and 50 learning cycles, in Figure 7 for the comparison of prediction performance, and in Figure 8 for the prediction errors. RMSEs and the numbers of hidden nodes for SCFNN and SCRFNN are also listed in Tables 1 and 2, respectively.
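The Henon data can be generated from (19) in the same way; a sketch with the chaotic parameters a = 1.4 and b = 0.3 and an illustrative initial condition.

```python
import numpy as np

def henon_series(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Iterate x(k+1) = 1 + y(k) - a*x(k)^2, y(k+1) = b*x(k), eq. (19)."""
    x, y = np.empty(n), np.empty(n)
    x[0], y[0] = x0, y0
    for k in range(n - 1):
        x[k + 1] = 1.0 + y[k] - a * x[k] ** 2
        y[k + 1] = b * x[k]
    return x

series = henon_series(2000)
```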


Figure 5: The prediction errors for the well-trained SCFNN (a) and SCRFNN (b) over chaotic time series 1001–2000.

Figure 6: Comparison of convergence for 5, 20, and 50 learning cycles.

Figure 7: Henon series and the predictions with SCFNN and SCRFNN (chaotic time series 1251–1300 and 1751–1800).

Figure 8: The prediction errors for the well-trained SCFNN (a) and SCRFNN (b) over chaotic time series 1001–2000.

5. Conclusions

A novel type of network architecture called SCRFNN has been proposed in this paper for chaotic time series prediction. It inherits the practically implementable algorithms, the self-constructing ability, and the fuzzy logic rule from the existing SCFNN. Also, it brings a new recurrent path into each node of the hidden layer of SCFNN. Two numerical studies have demonstrated that SCRFNN has superior performance in convergence rate and prediction accuracy over the existing SCFNN, at the cost of a slightly heavier structure (number of hidden nodes) and extra rules for the recurrent path. Hardware implementation of the proposed SCRFNN will be an interesting topic for future research.

Competing Interests

The authors declare that they have no competing interests.



References

[1] M. Casdagli, "Nonlinear prediction of chaotic time series," Physica D: Nonlinear Phenomena, vol. 35, no. 3, pp. 335–356, 1989.

[2] J. Zhang, H. S.-H. Chung, and W.-L. Lo, "Chaotic time series prediction using a neuro-fuzzy system with time-delay coordinates," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 7, pp. 956–964, 2008.

[3] Z. Chen, C. Lu, W. Zhang, and X. Du, "A chaotic time series prediction method based on fuzzy neural network and its application," in Proceedings of the 3rd International Workshop on Chaos-Fractals Theories and Applications (IWCFTA '10), pp. 355–359, October 2010.

[4] K. Wu and T. Wang, "Prediction of chaotic time series based on RBF neural network optimization," Computer Engineering, vol. 39, pp. 208–216, 2013.

[5] J. Xi and H. Wang, "Modeling and prediction of multivariate series based on an improved RBF network," Information and Control, vol. 41, no. 2, pp. 159–164, 2012.

[6] Q.-L. Ma, Q.-L. Zheng, H. Peng, T.-W. Zhong, and L.-Q. Xu, "Chaotic time series prediction based on evolving recurrent neural networks," in Proceedings of the 6th International Conference on Machine Learning and Cybernetics (ICMLC '07), vol. 6, pp. 3496–3500, IEEE, Hong Kong, August 2007.

[7] R. Chandra and M. Zhang, "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction," Neurocomputing, vol. 86, pp. 116–123, 2012.

[8] Y. Zuo and T. Wang, "Prediction for chaotic time series of BP neural network based on differential evolution," Computer Engineering and Applications, vol. 49, pp. 160–163, 2013.

[9] H.-T. Zhang and Z. Chen, "Consensus acceleration in a class of predictive networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 10, pp. 1921–1927, 2014.

[10] Z. Peng, D. Wang, Z. Chen, X. Hu, and W. Lan, "Adaptive dynamic surface control for formations of autonomous surface vehicles with uncertain dynamics," IEEE Transactions on Control Systems Technology, vol. 21, no. 2, pp. 513–520, 2013.

[11] Z. Chen and T. Iwasaki, "Circulant synthesis of central pattern generators with application to control of rectifier systems," IEEE Transactions on Automatic Control, vol. 53, no. 1, pp. 273–286, 2008.

[12] L. Zhu, Z. Chen, and T. Iwasaki, "Oscillation, orientation, and locomotion of underactuated multilink mechanical systems," IEEE Transactions on Control Systems Technology, vol. 21, no. 5, pp. 1537–1548, 2013.

[13] F.-J. Lin, C.-H. Lin, and P.-H. Shen, "Self-constructing fuzzy neural network speed controller for permanent-magnet synchronous motor drive," IEEE Transactions on Fuzzy Systems, vol. 9, no. 5, pp. 751–759, 2001.

[14] W.-D. Weng, R.-C. Lin, and C.-T. A. Hsueh, "The design of an SCFNN based nonlinear channel equalizer," Journal of Information Science and Engineering, vol. 21, no. 4, pp. 695–709, 2005.

[15] Y. Li, Q. Peng, and B. Hu, "Remote controller design of networked control systems based on self-constructing fuzzy neural network," in Proceedings of the 2nd International Symposium on Neural Networks: Advances in Neural Networks (ISNN '05), pp. 143–149, Chongqing, China, June 2005.

[16] Z. Chen and T. Iwasaki, "Matrix perturbation analysis for weakly coupled oscillators," Systems and Control Letters, vol. 58, no. 2, pp. 148–154, 2009.

[17] Z. Chen, T. Iwasaki, and L. Zhu, "Neural control for coordinated natural oscillation patterns," Systems and Control Letters, vol. 62, no. 8, pp. 693–698, 2013.

[18] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, "Phoneme recognition using time-delay neural networks," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 3, pp. 328–339, 1989.

[19] G. Kechriotis, E. Zervas, and E. Manolakos, "Using recurrent neural networks for blind equalization of linear and nonlinear communications channels," in Proceedings of the Military Communications Conference (MILCOM '92), pp. 784–788, IEEE, 1992.

[20] R.-C. Lin, W.-D. Weng, and C.-T. Hsueh, "Design of an SCRFNN-based nonlinear channel equaliser," IEE Proceedings: Communications, vol. 152, no. 6, p. 771, 2005.

[21] G. Kechriotis and E. S. Manolakos, "Using recurrent neural networks for nonlinear and chaotic signal processing," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP '93), pp. 465–469, Minneapolis, Minn, USA, April 1993.

[22] H. Kantz and T. Schreiber, Nonlinear Time Series Analysis, Cambridge University Press, New York, NY, USA, 1997.

[23] A. Basharat and M. Shah, "Time series prediction by chaotic modeling of nonlinear dynamical systems," in Proceedings of the IEEE 12th International Conference on Computer Vision, pp. 1941–1948, September 2009.

[24] F.-J. Lin, R.-F. Fung, and R.-J. Wai, "Comparison of sliding-mode and fuzzy neural network control for motor-toggle servomechanism," IEEE/ASME Transactions on Mechatronics, vol. 3, no. 4, pp. 302–318, 1998.

[25] F.-J. Lin, W.-J. Hwang, and R.-J. Wai, "A supervisory fuzzy neural network control system for tracking periodic inputs," IEEE Transactions on Fuzzy Systems, vol. 7, no. 1, pp. 41–52, 1999.

[26] C.-T. Lin, "A neural fuzzy control system with structure and parameter learning," Fuzzy Sets and Systems, vol. 70, no. 2-3, pp. 183–212, 1995.

[27] F.-J. Lin, R.-J. Wai, and C.-C. Lee, "Fuzzy neural network position controller for ultrasonic motor drive using push-pull DC-DC converter," IEE Proceedings: Control Theory and Applications, vol. 146, no. 1, pp. 99–107, 1999.

[28] G. Kechriotis, E. Zervas, and E. S. Manolakos, "Using recurrent neural networks for adaptive communication channel equalization," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 267–278, 1994.

[29] R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks," Neural Computation, vol. 1, no. 2, pp. 270–280, 1989.

[30] M. Henon, "A two-dimensional mapping with a strange attractor," Communications in Mathematical Physics, vol. 50, no. 1, pp. 69–77, 1976.
