Research Article
A New Approach for Chaotic Time Series Prediction Using Recurrent Neural Network
Qinghai Li and Rui-Chang Lin
Department of Electronic Engineering, Zhejiang Industry and Trade Vocational College, East Road 717, Wenzhou 325003, China
Correspondence should be addressed to Rui-Chang Lin; [email protected]
Received 4 July 2016; Accepted 31 October 2016
Academic Editor: Lijun Zhu
Copyright © 2016 Q. Li and R.-C. Lin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A self-constructing fuzzy neural network (SCFNN) has been successfully used for chaotic time series prediction in the literature. In this paper, we propose the strategy of adding a recurrent path in each node of the hidden layer of SCFNN, resulting in a self-constructing recurrent fuzzy neural network (SCRFNN). This novel network does not increase complexity in fuzzy inference or the learning process. Specifically, the structure learning is based on partition of the input space, and the parameter learning is based on the supervised gradient descent method using a delta adaptation law. This novel network can also be applied to chaotic time series prediction, including the Logistic and Henon time series. More significantly, it features faster convergence and higher prediction accuracy.
1. Introduction
A chaotic time series can be expressed as a deterministic dynamical system that, however, is usually unknown or incompletely understood. Therefore, it becomes important to make predictions from experimental observations of a real system. This technology has been widely studied in many science and engineering fields such as mathematical finance, weather forecasting, intelligent transport, and trajectory forecasting. Early research on nonlinear prediction of chaotic time series can be found in [1]. Most recent work focuses on different methods for improving the prediction performance, such as fuzzy neural networks [2, 3], RBF networks [4, 5], recurrent neural networks [6, 7], back-propagation recurrent neural networks [8], predictive consensus networks [9, 10], and biologically inspired neuronal networks [11, 12].
Both fuzzy logic and artificial neural networks are potentially suitable for nonlinear chaotic series prediction as they can perform complex mappings between their input and output spaces. In particular, the self-constructing fuzzy neural network (SCFNN) [13] is capable of constructing a simple network without prior knowledge of the chaotic series. This capability is due to SCFNN's ability to self-adjust the location of the input space fuzzy partition, so there is no need to estimate the distribution of the series states in advance. Moreover, carefully setting conditions on the increasing demand for fuzzy rules keeps the architecture of the constructed SCFNN fairly simple. These advantages have motivated researchers to build various chaotic series prediction algorithms employing an SCFNN structure in, for example, [14, 15].
Neural networks have wide applications in various areas; see, for example, [16-21]. In particular, the recurrent neural network (RNN) has proved successful in speech processing and adaptive channel equalization. One of the most important features of an RNN is the feedback paths in its circuit, which give it sequential rather than combinational behavior. RNNs have been applied not only to the processing of time-varying patterns or data sequences but also to dealing with the dissonance of input patterns, where possibly different outputs are generated by the same set of input patterns due to the feedback paths. Since an RNN is a highly nonlinear dynamical system that exhibits rich and complex behaviors, it is expected that an RNN possesses better performance than traditional signal processing techniques in modeling and predicting chaotic time series. Work on improving the performance of chaotic series prediction using RNNs can be found in, for example, [6, 7].
Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2016, Article ID 3542898, 9 pages. http://dx.doi.org/10.1155/2016/3542898
From the above observations, it is a novel idea to combine the SCFNN and RNN techniques for chaotic series prediction, which results in a new architecture called the self-constructing recurrent fuzzy neural network (SCRFNN) in this paper. The structure learning and parameter learning algorithms in SCRFNN are inherited from those in SCFNN, which maintains simplicity of implementation. Nevertheless, the recurrent path in SCRFNN makes its function derivation more complex and lets it exhibit richer behaviors. Extensive numerical simulation shows that both SCFNN and SCRFNN are effective in predicting chaotic time series, including the Logistic series and the Henon series, but the latter has superior performance in convergence rate and prediction accuracy at the cost of a slightly heavier structure (number of hidden nodes) and more fuzzy logic rules.
It is noted that a similar neural network structure has been used for nonlinear channel equalizers in [20]. The purpose of an equalizer in, for example, a wireless communication system is to recover the transmitted sequence or its delayed version using a trained neural network. But the chaotic time series prediction studied in this paper has a completely different mechanism. Chaotic time series prediction is the problem of developing a dynamic model from the observed time series of a nonlinear chaotic system that exhibits deterministic behavior with a known initial condition. A neural network is used to build the chaotic time series prediction after the so-called phase space reconstruction, which is not involved in channel equalizers.
The rest of this paper is organized as follows. In Section 2, the problem of chaotic series prediction is formulated. The structure, the inference output, and the learning algorithms of SCRFNN are described in Section 3. Numerical simulation results on two classes of chaotic series predictions, Logistic and Henon, are presented in Section 4. Finally, the paper is concluded in Section 5.
2. Background and Preliminaries
The phase space reconstruction theory is commonly used for chaotic time series prediction; see, for example, [22, 23]. The main idea is to find a way to reconstruct the phase space from the time series and then conduct prediction in phase space. The theory is briefly revisited in this section, followed by a general neural network prediction model.
Delay embedding is the main technique for phase space reconstruction. Assume a chaotic time series is represented as x_1, x_2, ..., x_n and the phase space vector is

X_i = (x_i, x_{i+τ}, ..., x_{i+(m−1)τ})^T, i = 1, 2, ..., N. (1)

The parameter τ is the delay, m the embedding dimension, and N = n − (m − 1)τ the number of phase vectors. A prediction model contains an attractor that wraps the observed data in the phase space and provides precise information about the dynamics involved. Therefore, it can be used to predict X_{i+1} from X_i in phase space and hence synthesize x_{i+1} using time series reconstruction. The procedure can be summarized as a model with input X_i and output x_{i+1}.
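As an illustration, the embedding in (1) can be written in a few lines (a minimal Python sketch; the short series and the choices τ = 2, m = 3 are purely illustrative):

```python
def phase_space_reconstruct(x, tau, m):
    """Delay embedding of eq. (1): build N = n - (m-1)*tau phase vectors
    X_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau})."""
    n = len(x)
    N = n - (m - 1) * tau
    return [tuple(x[i + j * tau] for j in range(m)) for i in range(N)]

# Illustrative series x_1..x_8; with tau = 2, m = 3 we get N = 8 - 4 = 4 vectors.
series = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
vectors = phase_space_reconstruct(series, tau=2, m=3)
# vectors[0] == (0.1, 0.3, 0.5)
```

Each vector then serves as the model input X_i, with the next sample as the target output.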
Takens' theorem provides the conditions under which a smooth attractor can be reconstructed from observations made with a generic function. The theorem states that if the embedding dimension m ≥ 2d + 1, where d is the dimension of the system dynamics, then the phase space constituted by the original system state variables and the dynamic behavior in the one-dimensional sequence of observations are equivalent. They are equivalent because the chaotic attractors in the two spaces are related by a homeomorphism. The reconstructed system, which includes the evolution information of all state variables, is capable of calculating the future state of the system based on its current state, which provides a basic mechanism for chaotic time series prediction.
As explained before, the prediction model with input X_i and output x_{i+1} provides precise information about the nonlinear dynamics under consideration. A neural network (NN) is an appropriate structure to build a nonlinear model for chaotic time series prediction. Specifically, a typical three-layer NN is discussed below.
When we apply a three-layer NN to predict a chaotic time series, better prediction performance can be achieved if the number of neurons in the input layer is equal to the embedding dimension of the phase space reconstructed from the chaotic time series. Specifically, let the number of neurons in the input layer be m, that of the hidden layer be q, and that of the output layer be 1. Then, the NN describes a mapping F: R^m → R^1.
The input for the nodes of the hidden layer is

S_j = Σ_{i=1}^{m} w_{ij} x_i − θ_j, j = 1, 2, ..., q, (2)

where w_{ij} is the link weight from the input layer to the hidden layer and θ_j is the threshold. Assume the sigmoid transfer function f(s) = 1/(1 + e^{−s}) is used for the NN. Then the output of the hidden layer nodes is

b_j = 1 / (1 + exp(−Σ_{i=1}^{m} w_{ij} x_i + θ_j)), j = 1, 2, ..., q. (3)
Similarly, the input and the output of the output layer node are

L = Σ_{j=1}^{q} v_j b_j − γ,
x_{i+1} = 1 / (1 + exp(−Σ_{j=1}^{q} v_j b_j + γ)), (4)

respectively. Here v_j is the link weight from the hidden layer to the output layer and γ is the threshold.
In general, the aforementioned link weights (w_{ij}, v_j) and the thresholds (θ_j, γ) of the NN can be randomly initialized, for example, within [0, 1]. Then, they can be properly trained such that the resulting NN has the capability of precise prediction. The specific training strategy depends on the specific architecture of the NN, which is the main scope of this research line and will be elaborated in the next section.
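A minimal sketch of the forward pass (2)-(4), with the random initialization in [0, 1] described above (the dimensions m = 3, q = 4 and the input values are illustrative):

```python
import math
import random

def sigmoid(s):
    """Sigmoid transfer function f(s) = 1/(1 + e^{-s})."""
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, W, theta, v, gamma):
    """Three-layer NN of (2)-(4): m inputs, q hidden sigmoid nodes, one output.
    W[j][i] is the input-to-hidden weight w_ij, theta[j] the hidden threshold,
    v[j] the hidden-to-output weight, gamma the output threshold."""
    b = [sigmoid(sum(Wj[i] * x[i] for i in range(len(x))) - tj)   # eqs. (2)-(3)
         for Wj, tj in zip(W, theta)]
    return sigmoid(sum(vj * bj for vj, bj in zip(v, b)) - gamma)  # eq. (4)

# Random initialization in [0, 1] as described above (m = 3, q = 4).
random.seed(0)
m, q = 3, 4
W = [[random.random() for _ in range(m)] for _ in range(q)]
theta = [random.random() for _ in range(q)]
v = [random.random() for _ in range(q)]
gamma = random.random()
y = forward([0.2, 0.5, 0.7], W, theta, v, gamma)  # predicted next sample, in (0, 1)
```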
Figure 1: Structure of a recurrent neural network.
3. Self-Constructing Recurrent Fuzzy Neural Network
Following the general description of the NN prediction model introduced in the previous section, we aim to propose a specific architecture with the features of fuzzy logic, a recurrent path, and self-constructing ability, called a self-constructing recurrent fuzzy neural network (SCRFNN). With these features, the SCRFNN is able to exhibit rapid convergence and high prediction accuracy.
3.1. Background of SCRFNN. The network structure of a traditional fuzzy neural network (FNN) is determined in advance. During the learning process, the structure is fixed and a supervised back-propagation algorithm is applied to adjust the membership function parameters and the weighting coefficients. Such an FNN with a fixed structure usually needs a large number of hidden layer nodes for acceptable performance, which significantly increases the system complexity [24, 25].
To overcome the aforementioned drawback of a fixed structure, the SCRFNN used in this paper exploits a two-phase learning algorithm that includes both structure learning and parameter learning. Specifically, the structure of fuzzy rules is determined in the first phase and the coefficients of each rule are tuned in the second one. It is conceptually easy to carry out the two phases sequentially. However, this is suitable only for offline operation with a large amount of representative data collected in advance [26]. Moreover, independent realization of structure and parameter learning is time-consuming. These disadvantages can be eliminated in SCRFNN as the two phases of structure learning and parameter learning are conducted concurrently [27].
The main feature of the proposed SCRFNN is the novel recurrent path in the circuit. The schematic diagram of a recurrent neural network (RNN) is shown in Figure 1 [28], where, for example, x and y_i, i = 1, 2, 3, denote the external
Figure 2: Schematic diagram of SCRFNN.
input and the unit outputs, respectively. The dynamics of such a structure can be described by, for i = 1, 2, 3,

S_i[t + 1] = Σ_{j=1}^{K} w_{ij}[t] y_j[t] + Σ_{j=1}^{L} w_{i,K+j}[t] x_j[t],
y_i[t + 1] = f(S_i[t + 1]), (5)
where L and K denote the numbers of external inputs and hidden layer units, respectively, w_{ij}[t] is the connection weight from the jth unit to the ith unit at the time instant t, and the activation function f(·) can be any real differentiable function. Clearly, the output of each unit depends on the previous external inputs to the network as well as the previous outputs of all units. The training algorithms for the recurrent parameters of RNNs have been well studied; see, for example, the real-time recurrent learning (RTRL) algorithm [29].
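The update (5) can be sketched as follows (an illustrative Python sketch with tanh as the differentiable activation; the weights and inputs are made up for the example):

```python
import math

def rnn_step(y, x, W):
    """One update of eq. (5): K hidden units, L external inputs.
    W[i][:K] are weights from the unit outputs y, W[i][K:] from the inputs x."""
    K, L = len(y), len(x)
    f = math.tanh  # any real differentiable activation function
    return [f(sum(W[i][j] * y[j] for j in range(K))
              + sum(W[i][K + j] * x[j] for j in range(L)))
            for i in range(K)]

# Two hidden units, one external input; hand-picked illustrative weights.
W = [[0.1, 0.2, 0.5], [0.3, -0.1, 0.4]]
y = [0.0, 0.0]
for x_t in [0.5, 0.8, 0.2]:   # the output depends on the whole input history
    y = rnn_step(y, [x_t], W)
```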
3.2. Structure and Inference Output of SCRFNN. The schematic diagram of SCRFNN is shown in Figure 2. The fuzzy logic rule and the functions in each layer are briefly described as follows.
The fuzzy logic rule adopted in the SCRFNN has the following form:

R_j: IF (x_1 + r_{j1} O^{(2)}_{j1}(t − 1)) is A_{j1}
     and (x_2 + r_{j2} O^{(2)}_{j2}(t − 1)) is A_{j2}, ...,
     and (x_n + r_{jn} O^{(2)}_{jn}(t − 1)) is A_{jn},
     THEN y(t) = w_j (6)

with n input variables and a constant consequence w_j. In (6), A_{ji} is the linguistic term of the precondition part with
membership function μ_{A_{ji}}, x_i and y denote the input and output variables, respectively, and r_{ji} and O^{(2)}_{ji}(t − 1) represent the recurrent coefficient and the last state output of the jth term associated with the ith input variable.
The nodes in Layer 1, called the input nodes, simply pass the input signals to the next layer. There are two input variables x_1 and x_2 and two corresponding outputs O^{(1)}_1(t) = x_1 and O^{(1)}_2(t) = x_2 for the problem considered in this paper.
Each node in Layer 2 acts as a linguistic label for the input variables from Layer 1. Let O^{(2)}_{ji}(t) be the output of the jth rule associated with the ith input node at time t and r_{ji} the recurrent coefficients. Then, the Gaussian membership function is determined in this layer as follows:

O^{(2)}_{ji}(t) = exp(−(O^{(1)}_i(t) + r_{ji} O^{(2)}_{ji}(t − 1) − m_{ji})² / σ²_{ji}) (7)
with mean m_{ji} and standard deviation σ_{ji}.

Each node in Layer 3, represented by the product symbol Π, works as the precondition part of the fuzzy logic rule. Specifically, the output of the jth rule node at time t is expressed as

O^{(3)}_j(t) = Π_i O^{(2)}_{ji}(t). (8)
The single node in Layer 4, called the output node, acts as a defuzzifier. As a result, the final output of the SCRFNN is

O^{(4)}(t) = Σ_j w_j O^{(3)}_j(t), (9)

where the link weight w_j is the output action strength associated with the jth rule.
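The inference pass through Layers 2-4, that is, equations (7)-(9), can be sketched as below (a Python sketch; the rule parameters are illustrative, and storing each rule as a dictionary is a choice made here for readability):

```python
import math

def scrfnn_infer(x, rules, prev_O2):
    """Inference of eqs. (7)-(9). Each rule j carries means m[i], widths s[i],
    recurrent coefficients r[i], and a consequence weight w. prev_O2[j][i]
    holds O2_ji(t-1); the function returns (output, new Layer 2 states)."""
    out, new_O2 = 0.0, []
    for j, rule in enumerate(rules):
        O2 = [math.exp(-((xi + rule["r"][i] * prev_O2[j][i] - rule["m"][i]) ** 2)
                       / rule["s"][i] ** 2)           # Layer 2, eq. (7)
              for i, xi in enumerate(x)]
        O3 = 1.0
        for o in O2:                                   # Layer 3, eq. (8)
            O3 *= o
        out += rule["w"] * O3                          # Layer 4, eq. (9)
        new_O2.append(O2)
    return out, new_O2

# Two illustrative rules over two inputs; recurrent states start at zero.
rules = [{"m": [0.2, 0.8], "s": [0.5, 0.5], "r": [0.1, 0.1], "w": 0.7},
         {"m": [0.6, 0.3], "s": [0.4, 0.6], "r": [0.2, 0.1], "w": -0.4}]
state = [[0.0, 0.0], [0.0, 0.0]]
y, state = scrfnn_infer([0.3, 0.5], rules, state)  # state feeds the next call
```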
3.3. Online Learning Algorithm for SCRFNN. As discussed in Section 3.1, a two-phase learning algorithm for structure learning and parameter learning is used for SCRFNN. The initial SCRFNN is composed of the input and output nodes only. The membership and rule nodes are dynamically generated and adjusted according to the online data by performing the structure and parameter learning processes. These two phases are explained below.
The structure learning algorithm aims to find the proper input space fuzzy partitions and fuzzy logic rules with minimal fuzzy sets and rules. As the initial SCRFNN contains no membership or rule node, the main work in structure learning is to decide whether it is necessary to add a new membership function node in Layer 2 and the associated fuzzy logic rule in Layer 3. The criterion for generating a new fuzzy rule for new incoming data is based on the firing strengths O^{(3)}_j(t) = Π_i O^{(2)}_{ji}(t), j = 1, ..., M, where M is the number of existing rules.
Specifically, if the maximum degree F_max = max_{1≤j≤M} O^{(3)}_j(t) is not larger than a prespecified threshold parameter F_min, a new membership function needs to be generated. Also, the mean value and the standard deviation of the new membership function are, respectively, assigned as m^{new}_i = x_i and σ^{new}_i = σ_i, where x_i is the new incoming datum and σ_i is an empirical prespecified constant. The value F_min is initially chosen between 0 and 1 and then keeps decaying in order to limit the growing size of the SCRFNN structure.
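The rule-generation criterion can be sketched as follows (a Python sketch; the function name, the rule representation, and the constants F_min and sigma_init are illustrative):

```python
def maybe_add_rule(x, rules, O3, F_min, sigma_init):
    """Structure learning: if the maximum firing strength max_j O3_j does not
    exceed the threshold F_min, spawn a new rule centred at the incoming
    datum x with the empirical width sigma_init. Returns True if a rule
    was added."""
    if not rules or max(O3) <= F_min:
        rules.append({"m": list(x), "s": [sigma_init] * len(x),
                      "r": [0.0] * len(x), "w": 0.0})
        return True
    return False

rules, added = [], []
# The first datum always spawns a rule; a weakly covered datum spawns another;
# a datum close to an existing rule (high firing strength) does not.
added.append(maybe_add_rule([0.3, 0.5], rules, [], F_min=0.5, sigma_init=0.4))
added.append(maybe_add_rule([0.9, 0.1], rules, [0.05], F_min=0.5, sigma_init=0.4))
added.append(maybe_add_rule([0.31, 0.52], rules, [0.93, 0.1], F_min=0.5, sigma_init=0.4))
# added == [True, True, False]; two rules remain
```

In a full implementation the firing strengths O3 would come from the inference pass of (7)-(9), and F_min would decay over time as described above.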
The parameter learning algorithm aims to minimize a predefined energy function by adaptively adjusting the vector of network parameters based on a given set of input-output pairs. The particular energy function used in the SCRFNN is as follows:

E = (1/2)(y_d − O^{(4)}(t))², (10)

where y_d is the desired output associated with the input pattern and O^{(4)}(t) is the inferred output. The vector is adjusted along the negative gradient of the energy function with respect to the vector. In the four-layer SCRFNN, a back-propagation learning rule is adopted as the gradient vector is calculated in the direction opposite to the data flow, as described below.
First, the link weight w_j for the output node in Layer 4 is updated along the negative gradient of the energy function; that is,

Δw_j = ∂E/∂w_j = −(y_d − O^{(4)}(t)) O^{(3)}_j(t). (11)

In particular, it is updated according to

w_j(N + 1) = w_j(N) − η_w Δw_j, (12)

where N is the pattern number of the jth link and the factor η_w is the learning-rate parameter.

Next, the mean m_{ji} and the standard deviation σ_{ji} of the membership functions are updated by

m_{ji}(N + 1) = m_{ji}(N) − η_m Δm_{ji},
σ_{ji}(N + 1) = σ_{ji}(N) − η_σ Δσ_{ji}, (13)
with the learning-rate parameters η_m and η_σ. The terms Δm_{ji} and Δσ_{ji} are also calculated as gradients of the energy function as follows:

Δm_{ji} = ∂E/∂m_{ji} = −(y_d − O^{(4)}(t)) w_j O^{(3)}_j(t) · 2(O^{(1)}_i(t) + r_{ji} O^{(2)}_{ji}(t − 1) − m_{ji}) / σ²_{ji},

Δσ_{ji} = ∂E/∂σ_{ji} = −(y_d − O^{(4)}(t)) w_j O^{(3)}_j(t) · 2(O^{(1)}_i(t) + r_{ji} O^{(2)}_{ji}(t − 1) − m_{ji})² / σ³_{ji}. (14)
Finally, the variation of the recurrent coefficient Δr_{ji} in Layer 2 is updated by

r_{ji}(N + 1) = r_{ji}(N) − η_r Δr_{ji}, (15)

where

Δr_{ji} = ∂E/∂r_{ji} = (y_d − O^{(4)}(t)) w_j (Π_{k, k≠i} O^{(2)}_{jk}(t)) · 2(O^{(1)}_i(t) + r_{ji} O^{(2)}_{ji}(t − 1) − m_{ji})² / σ³_{ji} · O^{(2)}_{ji}(t − 1) (16)

is again the gradient of the energy function.
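One online training step combining the forward pass of (7)-(9) with the updates (11)-(16) can be sketched as below (a Python sketch of the stated equations, not the authors' implementation; the rule storage and the learning rates are illustrative):

```python
import math

def train_step(x, y_d, rules, prev_O2, etas):
    """One negative-gradient step of eqs. (11)-(16).
    prev_O2[j][i] holds O2_ji(t-1); etas = (eta_w, eta_m, eta_s, eta_r)."""
    eta_w, eta_m, eta_s, eta_r = etas
    # Forward pass through (7)-(9).
    O2s, O3s, y = [], [], 0.0
    for j, R in enumerate(rules):
        O2 = [math.exp(-((x[i] + R["r"][i] * prev_O2[j][i] - R["m"][i]) ** 2)
                       / R["s"][i] ** 2) for i in range(len(x))]
        O3 = math.prod(O2)
        O2s.append(O2)
        O3s.append(O3)
        y += R["w"] * O3
    err = y_d - y
    # Backward pass: apply (11)-(16) using the pre-update weights.
    for j, R in enumerate(rules):
        w_old = R["w"]
        R["w"] -= eta_w * (-err * O3s[j])                             # (11)-(12)
        for i in range(len(x)):
            d = x[i] + R["r"][i] * prev_O2[j][i] - R["m"][i]
            dm = -err * w_old * O3s[j] * 2 * d / R["s"][i] ** 2       # (14)
            ds = -err * w_old * O3s[j] * 2 * d ** 2 / R["s"][i] ** 3  # (14)
            others = O3s[j] / O2s[j][i]       # product over k != i in (16)
            dr = (err * w_old * others * 2 * d ** 2 / R["s"][i] ** 3
                  * prev_O2[j][i])                                    # (16)
            R["m"][i] -= eta_m * dm                                   # (13)
            R["s"][i] -= eta_s * ds                                   # (13)
            R["r"][i] -= eta_r * dr                                   # (15)
    return err

rule = {"m": [0.2], "s": [0.5], "r": [0.1], "w": 0.3}
e0 = train_step([0.4], 1.0, [rule], [[0.0]], (0.1, 0.05, 0.05, 0.01))
e1 = train_step([0.4], 1.0, [rule], [[0.0]], (0.1, 0.05, 0.05, 0.01))
# Repeating the step on the same pattern shrinks the error: abs(e1) < abs(e0)
```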
4. Simulation Results
In order to evaluate the effectiveness of the proposed SCRFNN, we apply it to two benchmark chaotic time series data sets: the Logistic series and the Henon series. The number of data points used for each benchmark problem is 2000. In particular, we use the first 1000 data points for training and the remaining 1000 for validation. It will be shown that both SCFNN and SCRFNN are effective in predicting the Logistic series and the Henon series, but the latter has superior performance in convergence rate and prediction accuracy at the cost of a slightly heavier structure (number of hidden nodes) and rules.
4.1. Logistic Chaotic Series. A Logistic chaotic series is generated by the following equation:

x_{n+1} = μ x_n (1 − x_n), x_n ∈ [0, 1], μ ∈ [0, 4]. (17)

In particular, the parameter μ is restricted within the range 3.57 < μ ≤ 4 for chaotic behavior.

The training performance for both SCFNN and SCRFNN is demonstrated in Figure 3, where the profiles with 5, 20, and 50 learning cycles are compared in three graphs, respectively. It is observed that the proposed learning algorithm is effective in terms of fast convergence. In all three cases, the
Figure 3: Comparison of convergence for 5, 20, and 50 learning cycles.
convergence rate for the SCRFNN is faster than that for theSCFNN.
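For reference, the Logistic series in (17) can be generated as follows (a Python sketch; the seed x_0 = 0.3 and μ = 4 are illustrative choices in the chaotic range):

```python
def logistic_series(x0, mu, n):
    """Generate n points of the Logistic map (17): x_{k+1} = mu*x_k*(1 - x_k).
    Chaotic behavior requires 3.57 < mu <= 4."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(mu * xs[-1] * (1.0 - xs[-1]))
    return xs

series = logistic_series(x0=0.3, mu=4.0, n=2000)  # 1000 train + 1000 validate
train, validate = series[:1000], series[1000:]
```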
Root mean squared error (RMSE) is the main error metric for describing the training errors. It is explicitly defined as follows:

RMSE = ((1/(n − 1)) Σ_{i=1}^{n} [P_i − T_i]²)^{1/2}, (18)

where n is the number of trained patterns and P_i and T_i are the FNN-derived values and the actual chaotic time series data, respectively. Table 1 shows the RMSEs of SCFNN and SCRFNN after 5, 20, and 50 learning cycles. In all three cases, SCRFNN results in smaller RMSEs than SCFNN.
For the well-trained SCFNN and SCRFNN, the prediction performance is compared in Figures 4 and 5. It is observed in Figure 4 that the predicted outputs from the SCFNN and the SCRFNN match the real data well. More
Figure 4: Logistic series and the predictions with SCFNN and SCRFNN.
Table 1: RMSE for SCFNN and SCRFNN after 5, 20, and 50 learning cycles.

FNN (learning cycles)   Logistic        Henon
SCFNN (5)               7.65499e-004    0.0015
SCRFNN (5)              4.51469e-004    0.0014
SCFNN (20)              5.0209e-004     0.0011
SCRFNN (20)             4.2344e-004     6.3094e-004
SCFNN (50)              4.7764e-004     7.8083e-004
SCRFNN (50)             4.189e-004      4.4918e-004

specifically, the prediction errors are smaller for the SCRFNN than for the SCFNN, as shown in Figure 5.
Finally, Table 2 shows the numbers of hidden nodes in SCFNN and SCRFNN after 5, 20, and 50 learning cycles. The SCRFNN has slightly more hidden nodes in all three cases. This heavier structure of SCRFNN, together with the extra rules for the recurrent path, is the cost of the aforementioned performance improvement.
4.2. Henon Chaotic Series. Henon mapping was proposed by the French astronomer Michel Henon for studying globular clusters [30]. It is one of the most famous simple dynamical systems with wide applications. In recent years, Henon
Table 2: Number of hidden nodes in SCFNN and SCRFNN after 5, 20, and 50 cycles.

FNN (learning cycles)   Logistic   Henon
SCFNN (5)               4          7
SCRFNN (5)              5          7
SCFNN (20)              4          7
SCRFNN (20)             6          8
SCFNN (50)              4          7
SCRFNN (50)             5          10
mapping has been well studied in chaos theory. Its dynamics are given as follows:

x(k + 1) = 1 + y(k) − a x²(k),
y(k + 1) = b x(k). (19)

For example, chaos is produced with a = 1.4 and b = 0.3.

Similar simulation is conducted for the Henon series, with the
corresponding results shown in Figure 6 for the comparison of convergence rates over 5, 20, and 50 learning cycles, in Figure 7 for the comparison of prediction performance, and in Figure 8 for the prediction errors. RMSEs and the numbers of hidden nodes for SCFNN and SCRFNN are also listed in Tables 1 and 2, respectively.
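The Henon series in (19) can be generated analogously (a Python sketch; the initial condition (0, 0) is an illustrative choice inside the attractor's basin):

```python
def henon_series(x0, y0, a, b, n):
    """Generate n points of the Henon map (19):
    x(k+1) = 1 + y(k) - a*x(k)^2,  y(k+1) = b*x(k)."""
    xs, x, y = [], x0, y0
    for _ in range(n):
        xs.append(x)
        x, y = 1.0 + y - a * x * x, b * x
    return xs

series = henon_series(0.0, 0.0, a=1.4, b=0.3, n=2000)  # chaotic for a=1.4, b=0.3
```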
Figure 5: The prediction errors for well-trained SCFNN (a) and SCRFNN (b).
Figure 6: Comparison of convergence for 5, 20, and 50 learning cycles.
5. Conclusions
A novel type of network architecture called SCRFNN has been proposed in this paper for chaotic time series prediction. It inherits the practically implementable algorithms, the self-constructing ability, and the fuzzy logic rule from the existing
Figure 7: Henon series and the predictions with SCFNN and SCRFNN.
SCFNN. Also, it brings a new recurrent path into each node of the hidden layer of SCFNN. Two numerical studies have demonstrated that SCRFNN has superior performance in convergence rate and prediction accuracy over the existing SCFNN, at the cost of a slightly heavier structure (number of hidden nodes) and extra rules for the recurrent path. Hardware implementation of the proposed SCRFNN will be an interesting topic for future research.
Competing Interests
The authors declare that they have no competing interests.
Figure 8: The prediction errors for well-trained SCFNN (a) and SCRFNN (b).
References
[1] M. Casdagli, "Nonlinear prediction of chaotic time series," Physica D: Nonlinear Phenomena, vol. 35, no. 3, pp. 335-356, 1989.
[2] J. Zhang, H. S.-H. Chung, and W.-L. Lo, "Chaotic time series prediction using a neuro-fuzzy system with time-delay coordinates," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 7, pp. 956-964, 2008.
[3] Z. Chen, C. Lu, W. Zhang, and X. Du, "A chaotic time series prediction method based on fuzzy neural network and its application," in Proceedings of the 3rd International Workshop on Chaos-Fractals Theories and Applications (IWCFTA '10), pp. 355-359, October 2010.
[4] K. Wu and T. Wang, "Prediction of chaotic time series based on RBF neural network optimization," Computer Engineering, vol. 39, pp. 208-216, 2013.
[5] J. Xi and H. Wang, "Modeling and prediction of multivariate series based on an improved RBF network," Information and Control, vol. 41, no. 2, pp. 159-164, 2012.
[6] Q.-L. Ma, Q.-L. Zheng, H. Peng, T.-W. Zhong, and L.-Q. Xu, "Chaotic time series prediction based on evolving recurrent neural networks," in Proceedings of the 6th International Conference on Machine Learning and Cybernetics (ICMLC '07), vol. 6, pp. 3496-3500, IEEE, Hong Kong, August 2007.
[7] R. Chandra and M. Zhang, "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction," Neurocomputing, vol. 86, pp. 116-123, 2012.
[8] Y. Zuo and T. Wang, "Prediction for chaotic time series of BP neural network based on differential evolution," Computer Engineering and Applications, vol. 49, pp. 160-163, 2013.
[9] H.-T. Zhang and Z. Chen, "Consensus acceleration in a class of predictive networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 10, pp. 1921-1927, 2014.
[10] Z. Peng, D. Wang, Z. Chen, X. Hu, and W. Lan, "Adaptive dynamic surface control for formations of autonomous surface vehicles with uncertain dynamics," IEEE Transactions on Control Systems Technology, vol. 21, no. 2, pp. 513-520, 2013.
[11] Z. Chen and T. Iwasaki, "Circulant synthesis of central pattern generators with application to control of rectifier systems," IEEE Transactions on Automatic Control, vol. 53, no. 1, pp. 273-286, 2008.
[12] L. Zhu, Z. Chen, and T. Iwasaki, "Oscillation, orientation, and locomotion of underactuated multilink mechanical systems," IEEE Transactions on Control Systems Technology, vol. 21, no. 5, pp. 1537-1548, 2013.
[13] F.-J. Lin, C.-H. Lin, and P.-H. Shen, "Self-constructing fuzzy neural network speed controller for permanent-magnet synchronous motor drive," IEEE Transactions on Fuzzy Systems, vol. 9, no. 5, pp. 751-759, 2001.
[14] W.-D. Weng, R.-C. Lin, and C.-T. A. Hsueh, "The design of an SCFNN based nonlinear channel equalizer," Journal of Information Science and Engineering, vol. 21, no. 4, pp. 695-709, 2005.
[15] Y. Li, Q. Peng, and B. Hu, "Remote controller design of networked control systems based on self-constructing fuzzy neural network," in Proceedings of the 2nd International Symposium on Neural Networks: Advances in Neural Networks (ISNN '05), pp. 143-149, Chongqing, China, June 2005.
[16] Z. Chen and T. Iwasaki, "Matrix perturbation analysis for weakly coupled oscillators," Systems and Control Letters, vol. 58, no. 2, pp. 148-154, 2009.
[17] Z. Chen, T. Iwasaki, and L. Zhu, "Neural control for coordinated natural oscillation patterns," Systems and Control Letters, vol. 62, no. 8, pp. 693-698, 2013.
[18] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, "Phoneme recognition using time-delay neural networks," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 3, pp. 328-339, 1989.
[19] G. Kechriotis, E. Zervas, and E. Manolakos, "Using recurrent neural networks for blind equalization of linear and nonlinear communications channels," in Proceedings of the Military Communications Conference (MILCOM '92) Conference Record: Communications-Fusing Command, Control and Intelligence, pp. 784-788, IEEE, 1992.
[20] R.-C. Lin, W.-D. Weng, and C.-T. Hsueh, "Design of an SCRFNN-based nonlinear channel equaliser," IEE Proceedings-Communications, vol. 152, no. 6, p. 771, 2005.
[21] G. Kechriotis and E. S. Manolakos, "Using recurrent neural network for nonlinear and chaotic signal processing," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP '93), pp. 465-469, Minneapolis, Minn, USA, April 1993.
[22] H. Kantz and T. Schreiber, Nonlinear Time Series Analysis, Cambridge University Press, New York, NY, USA, 1997.
[23] A. Basharat and M. Shah, "Time series prediction by chaotic modeling of nonlinear dynamical systems," in Proceedings of the IEEE 12th International Conference on Computer Vision, pp. 1941-1948, September 2009.
[24] F.-J. Lin, R.-F. Fung, and R.-J. Wai, "Comparison of sliding-mode and fuzzy neural network control for motor-toggle servomechanism," IEEE/ASME Transactions on Mechatronics, vol. 3, no. 4, pp. 302-318, 1998.
[25] F.-J. Lin, W.-J. Hwang, and R.-J. Wai, "A supervisory fuzzy neural network control system for tracking periodic inputs," IEEE Transactions on Fuzzy Systems, vol. 7, no. 1, pp. 41-52, 1999.
[26] C.-T. Lin, "A neural fuzzy control system with structure and parameter learning," Fuzzy Sets and Systems, vol. 70, no. 2-3, pp. 183-212, 1995.
[27] F.-J. Lin, R.-J. Wai, and C.-C. Lee, "Fuzzy neural network position controller for ultrasonic motor drive using push-pull DC-DC converter," IEE Proceedings: Control Theory and Applications, vol. 146, no. 1, pp. 99-107, 1999.
[28] G. Kechriotis, E. Zervas, and E. S. Manolakos, "Using recurrent neural networks for adaptive communication channel equalization," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 267-278, 1994.
[29] R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks," Neural Computation, vol. 1, no. 2, pp. 270-280, 1989.
[30] M. Henon, "A two-dimensional mapping with a strange attractor," Communications in Mathematical Physics, vol. 50, no. 1, pp. 69-77, 1976.