Real-time Discrete Neural Identifier for a Linear Induction Motor using a dSPACE DS1104 Board

Jorge D. Rios, Alma Y. Alanis, Jorge Rivera, Miguel Hernandez-Gonzalez

Abstract— This paper presents a real-time discrete nonlinear neural identifier for a Linear Induction Motor (LIM). The identifier is based on a discrete-time recurrent high order neural network (RHONN) trained on-line with an extended Kalman filter (EKF)-based algorithm. A reduced-order observer is used to estimate the secondary fluxes. The neural identifier is implemented in real time on a dSPACE DS1104 controller board under MATLAB/Simulink with the dSPACE RTI library, and its performance is shown by graphs.

Keywords— Extended Kalman Filtering, Linear Induction Motor, Recurrent High Order Neural Networks, Real-time Neural Identification.

I. INTRODUCTION

Linear induction motors (LIMs) are special electrical machines in which electrical energy is converted directly into mechanical energy of translatory motion.

Interest in these machines rose in the early 1970s [1]. In the late 1970s, the research intensity and number of publications dropped. After 1980, LIMs found their first noticeable applications in, among others, transportation, industry, automation, and home appliances [1], [2]. LIMs have many excellent performance features, such as high starting thrust force, elimination of gears between the motor and the motion device, reduction of mechanical losses and of the size of the motion device, high-speed operation, and silent operation [2], [3]. The driving principles of the LIM are similar to those of the traditional rotary induction motor (RIM), but its control characteristics are more complicated, and its parameters are time varying due to changes in operating conditions such as speed, temperature, and rail configuration [4].

On the other hand, modern control systems usually require very structured knowledge about the system to be controlled; such knowledge should be represented in terms of differential or difference equations. This mathematical description of the dynamic system is known as the model of the system. There can be several motives for establishing mathematical descriptions of dynamic systems, such as simulation, prediction, fault detection, and control system design, among others.

Basically, there are two ways to obtain a model: it can be derived in a deductive manner using the laws of physics, or it can be inferred from a set of data collected during a practical experiment. The first method can be simple, but in many cases it is excessively time-consuming; it may even be unrealistic or impossible to obtain an accurate model in this way. The second method, commonly referred to as system identification, can be a useful shortcut for deriving mathematical models. Although system identification does not always result in an accurate model, a satisfactory model can often be obtained with reasonable effort. The main drawback is the requirement to conduct a practical experiment, which takes the system through its range of operation [5], [6].

The authors are with the Centro Universitario de Ciencias Exactas e Ingenierías, Universidad de Guadalajara, Apartado Postal 51-71, Col. Las Águilas, Zapopan, Jalisco, C.P. 45080, México (e-mail: [email protected]).

Neural networks (NNs) have grown into a well-established methodology for solving very difficult engineering problems, as exemplified by their application to the identification and control of general nonlinear and complex systems [7], [8], [9]. Neural networks that involve dynamic elements in the form of feedback connections are known as recurrent neural networks (RNNs) [10].

Recurrent high order neural networks (RHONNs) offer many advantages for modeling complex nonlinear systems, such as excellent approximation capabilities and robustness against noise [7], [8], [9]. The main training approach for RNNs is backpropagation through time, a first-order gradient descent method whose learning speed can be very slow [11], [12]. On the other hand, training methods based on Kalman filtering have been proposed in the literature [13], [14]. Because training a neural network typically results in a nonlinear problem, the extended Kalman filter is the common tool to use instead of a linear Kalman filter [12], [14].

Extended Kalman filter (EKF) training of neural networks reduces the epoch size and the number of required neurons, and improves learning convergence [12]. EKF training of neural networks has proven to be reliable and practical [11].

The DS1104 R&D Controller Board is designed for the development of high-speed multivariable digital controllers and for real-time simulations. Connectors and panels provide access to the input and output signals of the board. The dSPACE RTI software library allows MATLAB/Simulink to communicate with the controller board and provides functions to monitor the signals [15]. This paper presents the use of a recurrent high order neural network for real-time identification of a linear induction motor using a dSPACE DS1104 R&D controller board under MATLAB/Simulink with the dSPACE RTI software library. The training algorithm for the neural network is based on the extended Kalman filter [4], [13].

Proceedings of International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013

978-1-4673-6129-3/13/$31.00 ©2013 IEEE


II. MATHEMATICAL PRELIMINARIES

A. Discrete-Time High Order Neural Networks

Consider a MIMO nonlinear system:

$$x(k+1) = F(x(k), u(k)) \qquad (1)$$

where $k$ is the sampling step, $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, and $F$ is a nonlinear function.

The use of multilayer neural networks is well known for

pattern recognition and for the modeling of static systems. The NN is trained to learn an input-output map. Theoretical works have proven that, even with just one hidden layer, an NN can uniformly approximate any continuous function over a compact domain, provided that the NN has a sufficient number of synaptic connections [12].

For control tasks, extensions of the first-order Hopfield model called Recurrent High Order Neural Networks (RHONNs), which present more interactions among the neurons, are proposed in [16], [17]. Additionally, the RHONN model is very flexible and allows a priori information about the system structure to be incorporated into the neural model [14].

Consider the following discrete-time recurrent high order neural network:

$$x_i(k+1) = w_i^{T} z_i(x(k), u(k)), \quad i = 1, \ldots, n \qquad (2)$$

where $x_i$ is the state of the $i$-th neuron, $w_i$ is the respective on-line adapted weight vector, and $z_i(x(k), u(k))$ is given by (3):

$$z_i(x(k), u(k)) =
\begin{bmatrix} z_{i_1} \\ z_{i_2} \\ \vdots \\ z_{i_{L_i}} \end{bmatrix} =
\begin{bmatrix}
\prod_{j \in I_1} y_{i_j}^{d_{i_j}(1)} \\
\prod_{j \in I_2} y_{i_j}^{d_{i_j}(2)} \\
\vdots \\
\prod_{j \in I_{L_i}} y_{i_j}^{d_{i_j}(L_i)}
\end{bmatrix} \qquad (3)$$

with $y_i$ defined as

$$y_i =
\begin{bmatrix} y_{i_1} \\ \vdots \\ y_{i_n} \\ y_{i_{n+1}} \\ \vdots \\ y_{i_{n+m}} \end{bmatrix} =
\begin{bmatrix} S(x_1) \\ \vdots \\ S(x_n) \\ u_1 \\ \vdots \\ u_m \end{bmatrix} \qquad (4)$$

$$S(x) = \tanh(x) \qquad (5)$$

$$\chi_i(k+1) = w_i^{T} z_i(x(k), u(k)) + \epsilon_{z_i}, \quad i = 1, \ldots, n \qquad (6)$$

where $L_i$ is the respective number of high-order connections, $\{I_1, I_2, \ldots, I_{L_i}\}$ is a collection of non-ordered subsets of $\{1, 2, \ldots, n+m\}$, the $d_{i_j}(\cdot)$ are nonnegative integers, $y_i$ is defined as in (4), $u = [u_1, \ldots, u_m]^T$ is the input vector to the neural network (NN), and $S(\cdot)$ is defined by (5), where $x$ is any real-valued variable.

Consider the problem of approximating the general nonlinear system (1) by the discrete-time RHONN series-parallel representation (6) [17], where $\chi_i$ is the $i$-th plant state and $\epsilon_{z_i}$ is a bounded approximation error, which can be reduced by increasing the number of adjustable weights [17]. Assume that there exists an ideal weight vector $w_i^{*}$ such that $\|\epsilon_{z_i}\|$ can be minimized on a compact set. The ideal weight vector is an artificial quantity required only for analytical purposes [17]. In general, it is assumed that this vector exists and is constant but unknown.
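To make the notation of (2)-(5) concrete, one prediction step of a single RHONN neuron can be sketched in a few lines of Python. The particular choice of high-order connections (index sets and exponents) below is a hypothetical example for illustration only, not the structure used later for the LIM identifier:

```python
import numpy as np

def S(x):
    # Sigmoid component of (5): tanh, applied elementwise.
    return np.tanh(x)

def rhonn_regressor(x, u):
    """High-order regressor z_i(x(k), u(k)) of (3)-(4) for one neuron.

    Hypothetical connections: two first-order terms, one second-order
    product, and a bias term.
    """
    y = np.concatenate([S(x), u])                    # y vector of (4)
    return np.array([y[0], y[1], y[0] * y[1], 1.0])  # products y^d of (3)

def rhonn_step(w, x, u):
    # Neuron state update (2): x_i(k+1) = w_i^T z_i(x(k), u(k)).
    return w @ rhonn_regressor(x, u)

# One prediction step with illustrative numbers.
w = np.array([0.5, -0.2, 0.1, 0.05])   # on-line adapted weights w_i
x = np.array([0.3, -0.1])              # plant state sample x(k)
u = np.array([1.0])                    # input u(k)
x_next = rhonn_step(w, x, u)
```

Because the update is linear in the weights for a fixed regressor, the EKF training described next applies directly, with the regressor entries forming the sensitivity vector.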

B. The EKF Training Algorithm

The best-known training approach for recurrent neural networks (RNNs) is backpropagation through time [18]. However, it is a first-order gradient descent method, and hence its learning speed can be very slow [19]. Recently, extended Kalman filter (EKF)-based algorithms have been introduced to train neural networks [20], [21]. With an EKF-based algorithm, learning convergence is improved [18]. EKF training of neural networks, both feedforward and recurrent, has proven to be reliable and practical in applications over the past eleven years [21].

It is known that Kalman filtering (KF) estimates the state of a linear system with additive state and output white noise [22], [23]. For EKF-based neural network training, the network weights become the states to be estimated. In this case, the error between the neural network output and the measured plant output can be considered as additive white noise. Because the neural network mapping is not linear, an EKF-type algorithm is required [24].

The training goal is to find the optimal weight values that minimize the prediction error. An EKF-based training algorithm, described by (7)-(9), is used [11].

$$\begin{aligned}
w_i(k+1) &= w_i(k) + \eta_i K_i(k) e_i(k) \\
K_i(k) &= P_i(k) H_i(k) M_i(k) \\
P_i(k+1) &= P_i(k) - K_i(k) H_i^{T}(k) P_i(k) + Q_i(k)
\end{aligned}, \quad i = 1, \ldots, n \qquad (7)$$

$$\begin{aligned}
M_i(k) &= \left[ R_i(k) + H_i^{T}(k) P_i(k) H_i(k) \right]^{-1} \\
e_i(k) &= \chi_i(k) - x_i(k)
\end{aligned} \qquad (8)$$

$$H_{ij}(k) = \left[ \frac{\partial x_i(k)}{\partial w_{ij}(k)} \right]^{T} \qquad (9)$$

where $e_i(k)$ is the respective identification error, $P_i(k)$ is the weight estimation error covariance matrix at step $k$, $w_i$ is the weight (state) vector, $x_i$ is the $i$-th neural network state, $K_i(k)$ is the Kalman gain vector, $Q_i$ is the NN weight estimation noise covariance, $R_i$ is the error noise covariance, and $H_i(k)$ is a vector in which each entry $H_{ij}$ is the derivative of the neural network state $x_i$ with respect to one neural network weight $w_{ij}$, as given by (9), with $i = 1, \ldots, n$ and $j = 1, \ldots, L_i$. Usually $P_i$ and $Q_i$ are initialized as diagonal matrices with entries $P_i(0)$ and $Q_i(0)$, respectively. It is important to remark that $H_i(k)$, $K_i(k)$, and $P_i(k)$ for the EKF are bounded [23].
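A single weight update following (7)-(9) for one neuron can be sketched as below. Treating $R_i$ as a scalar (one output per neuron) makes the inverse in (8) a division; the learning-rate factor `eta` set to 1 recovers the plain update:

```python
import numpy as np

def ekf_train_step(w, P, H, e, Q, R, eta=1.0):
    """One EKF weight update for neuron i, following (7)-(9).

    w : (L,)  weight vector w_i(k)
    P : (L,L) weight estimation error covariance P_i(k)
    H : (L,)  sensitivity vector of (9), dx_i/dw_ij
    e : float identification error e_i(k)
    Q : (L,L) weight estimation noise covariance
    R : float error noise covariance (scalar case)
    """
    M = 1.0 / (R + H @ P @ H)          # scalar M_i(k) of (8)
    K = P @ H * M                      # Kalman gain K_i(k)
    w_next = w + eta * K * e           # weight update of (7)
    P_next = P - np.outer(K, H) @ P + Q
    return w_next, P_next

# Illustrative call with a 3-weight neuron.
L = 3
w = np.zeros(L)
P = np.eye(L) * 100.0                  # large initial covariance
Q = np.eye(L) * 1e-2
H = np.array([0.5, -1.0, 0.2])
w, P = ekf_train_step(w, P, H, e=0.3, Q=Q, R=1.0)
```

Note that the covariance update keeps $P_i$ positive definite as long as $Q_i$ is, which is one practical reason for the diagonal initialization mentioned above.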

III. LINEAR INDUCTION MOTOR MODEL

The α-β model of a LIM is discretized by the Euler technique [4], [25], [26] as follows:

$$\begin{aligned}
q_m(k+1) &= q_m(k) + T\, v(k) \\
v(k+1) &= (1 - k_2 T)\, v(k) + k_1 T\, \lambda^{T}(k)\, \rho^{T}(k)\, J\, I_s(k) - k_3 T\, F_L \\
\lambda(k+1) &= \lambda(k) - k_6 T\, \lambda(k) - k_4 T\, \rho^{T}(k)\, J\, I_s(k)\, v(k) + k_5 T\, \rho^{T}(k)\, I_s(k) \\
I_s(k+1) &= (1 - k_9 T)\, I_s(k) + k_7 T\, \rho(k)\, \lambda(k) + k_8 T\, \rho(k)\, J\, \lambda(k)\, v(k) + k_{10} T\, U_s(k)
\end{aligned} \qquad (10)$$

with $\lambda(k) = [\lambda_{r\alpha}(k),\ \lambda_{r\beta}(k)]^T$, $I_s(k) = [i_{s\alpha}(k),\ i_{s\beta}(k)]^T$, $U_s(k) = [u_{\alpha}(k),\ u_{\beta}(k)]^T$,

$$\rho_1 = \sin(n_p q_m(k)), \qquad \rho_2 = \cos(n_p q_m(k)),$$

$$\rho(k) = \begin{bmatrix} \rho_2 & -\rho_1 \\ \rho_1 & \rho_2 \end{bmatrix}, \qquad
J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix},$$

and

$$\begin{aligned}
k_1 &= \frac{n_p L_{sr}}{D_m L_r}, & k_2 &= \frac{R_m}{D_m}, & k_3 &= \frac{1}{D_m}, & k_4 &= n_p L_{sr}, & k_5 &= \frac{R_r L_{sr}}{L_r}, \\
k_6 &= \frac{R_r}{L_r}, & k_7 &= \frac{L_{sr} R_r}{L_r (L_{sr}^2 - L_s L_r)}, & k_8 &= \frac{L_{sr} n_p}{L_{sr}^2 - L_s L_r}, & k_9 &= \frac{L_r^2 R_s + L_{sr}^2 R_r}{L_r (L_{sr}^2 - L_s L_r)}, & k_{10} &= \frac{L_r}{L_{sr}^2 - L_s L_r}
\end{aligned}$$

where $q_m$ is the position, $v$ is the linear velocity, $\lambda_{r\alpha}$ and $\lambda_{r\beta}$ are the α-axis and β-axis secondary fluxes, respectively, $i_{s\alpha}$ and $i_{s\beta}$ are the α-axis and β-axis primary currents, respectively, $u_{\alpha}$ and $u_{\beta}$ are the α-axis and β-axis primary voltages, $R_s$ is the winding resistance per phase, $R_r$ is the secondary resistance per phase, $L_{sr}$ is the magnetizing inductance per phase, $L_s$ is the primary inductance per phase, $L_r$ is the secondary inductance per phase, $F_L$ is the load disturbance, $R_m$ is the viscous friction and iron-loss coefficient, $n_p$ is the number of pole pairs, and $T$ is the sampling period [4].
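The Euler technique used above replaces each derivative by a forward difference, $x(k+1) = x(k) + T f(x(k), u(k))$. A minimal sketch on a toy two-state system (the constants are illustrative stand-ins, not the LIM parameters $k_1, \ldots, k_{10}$):

```python
import numpy as np

def euler_discretize(f, x, u, T):
    # Forward-Euler step: x(k+1) = x(k) + T * f(x(k), u(k)).
    return x + T * f(x, u)

# Toy system standing in for one flux/current pair (illustrative gains).
k6, k5 = 2.0, 1.5
def f(x, u):
    lam, i_s = x
    return np.array([-k6 * lam + k5 * i_s,    # flux-like decay plus coupling
                     -3.0 * i_s + 4.0 * u])   # current-like dynamics with input

T = 0.001
x = np.array([0.0, 0.0])
for k in range(1000):                          # simulate 1 second
    x = euler_discretize(f, x, 1.0, T)
```

The approximation is first-order accurate, so the sampling period $T$ must be small relative to the fastest time constant of the machine; this is the same constraint that governs the choice of sampling time in the real-time test of Section VI.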

IV. FLUX OBSERVER

The secondary fluxes cannot be measured. To overcome this problem, a reduced-order observer is used [13], [27]. The flux dynamics in (10) can be written as [28]:

$$\lambda(k+1) = \lambda(k) - k_6 T\, \lambda(k) - k_4 T\, \rho^{T}(k)\, J\, I_s(k)\, v(k) + k_5 T\, \rho^{T}(k)\, I_s(k) \qquad (11)$$

with

$$\rho(k) = \begin{bmatrix} \cos(n_p q_m) & -\sin(n_p q_m) \\ \sin(n_p q_m) & \cos(n_p q_m) \end{bmatrix}, \quad
J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad
\lambda(k) = \begin{bmatrix} \lambda_{r\alpha}(k) \\ \lambda_{r\beta}(k) \end{bmatrix}, \quad
I_s(k) = \begin{bmatrix} i_{s\alpha}(k) \\ i_{s\beta}(k) \end{bmatrix}$$

The observer proposed in [28] is

$$\hat{\lambda}(k+1) = \hat{\lambda}(k) - k_6 T\, \hat{\lambda}(k) - k_4 T\, \rho^{T}(k)\, J\, I_s(k)\, v(k) + k_5 T\, \rho^{T}(k)\, I_s(k) \qquad (12)$$

The proof of convergence of this observer can be found in [13], [27], [28].
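One iteration of a reduced-order flux observer of this rotation-plus-decay form can be sketched as follows. The exact term pairing and all parameter values (`n_p`, `k4`, `k5`, `k6`) are assumptions for illustration, not measured LIM constants:

```python
import numpy as np

def rho(n_p, q_m):
    # Rotation matrix parameterized by the electrical position n_p * q_m.
    th = n_p * q_m
    return np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])

J = np.array([[0.0, 1.0], [-1.0, 0.0]])        # skew-symmetric matrix

def flux_observer_step(lam_hat, q_m, v, i_s, T, n_p=4, k4=0.5, k5=1.5, k6=2.0):
    """Open-loop flux estimate update driven by measured position,
    velocity, and alpha-beta currents (placeholder gains)."""
    R = rho(n_p, q_m).T
    return (lam_hat
            - k6 * T * lam_hat                  # flux decay
            - k4 * T * (R @ J @ i_s) * v        # motion-induced term
            + k5 * T * (R @ i_s))               # magnetizing term

lam_hat = np.zeros(2)
lam_hat = flux_observer_step(lam_hat, q_m=0.01, v=0.2,
                             i_s=np.array([1.0, 0.5]), T=1e-3)
```

Because the observer is driven only by measured signals, it runs in open loop at each sampling instant, which keeps its real-time cost negligible next to the identifier update.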

V. LIM RHONN IDENTIFIER

The proposed RHONN identifier for the LIM model (10) is defined as:

$$\begin{aligned}
x_1(k+1) &= w_{11}(k) S(v(k)) + w_{12}(k) S(\lambda_{r\alpha}(k)) + w_{13}(k) S(\lambda_{r\beta}(k)) \\
&\quad + w_{14}(k) S(\lambda_{r\alpha}(k))\, i_{s\beta}(k) + w_{15}(k) S(\lambda_{r\beta}(k))\, i_{s\alpha}(k) \\
x_2(k+1) &= w_{21}(k) S(v(k)) + w_{22}(k) S(\lambda_{r\alpha}(k)) + w_{23}(k) S(\lambda_{r\beta}(k)) \\
&\quad + w_{24}(k)\, i_{s\beta}(k) + w_{25}(k)\, i_{s\alpha}(k) \\
x_3(k+1) &= w_{31}(k) S(v(k)) + w_{32}(k) S(\lambda_{r\alpha}(k)) + w_{33}(k) S(\lambda_{r\beta}(k)) \\
&\quad + w_{34}(k)\, i_{s\alpha}(k) + w_{35}(k)\, i_{s\beta}(k) \\
x_4(k+1) &= w_{41}(k) S(v(k)) + w_{42}(k) S(\lambda_{r\alpha}(k)) + w_{43}(k) S(\lambda_{r\beta}(k)) \\
&\quad + w_{44}(k) S(i_{s\alpha}(k)) + w_{45}(k)\, u_{\alpha}(k) \\
x_5(k+1) &= w_{51}(k) S(v(k)) + w_{52}(k) S(\lambda_{r\alpha}(k)) + w_{53}(k) S(\lambda_{r\beta}(k)) \\
&\quad + w_{54}(k) S(i_{s\beta}(k)) + w_{55}(k)\, u_{\beta}(k) \\
x_6(k+1) &= w_{65}(k) S(q_m(k)) + w_{66}(k)\, v(k)
\end{aligned} \qquad (13)$$



where $x_1$ identifies the linear velocity, $x_2$ and $x_3$ identify the α-axis and β-axis secondary fluxes, respectively, $x_4$ and $x_5$ identify the α-axis and β-axis primary currents, respectively, and $x_6$ identifies the position. The adaptive weights are updated by (7), $S(\cdot)$ is given by (5), and the remaining weights are kept fixed at constant values. It is important to note that for identifier (13) the mathematical model (10) is considered unknown.

VI. REAL-TIME TEST

For the real-time test, the model shown in Fig. 1 is built in MATLAB/Simulink and then implemented on the dSPACE board using the software interface. The LIM used is a LAB-Volt® 8228 (Fig. 2). The inputs to the LIM are a frequency-variant signal provided by the autotransformer (Fig. 3) and a coupled PWM control signal generated through the dSPACE DS1104 board (Fig. 4) and the RTI 1104 connector panel (Fig. 5). Both signals pass through the power module (Fig. 6), which supplies the voltages and currents needed to move the secondary of the LIM.

The α-axis and β-axis currents are obtained from the three actual LIM currents converted to the α-β model [4]; position and velocity are obtained from the precision linear encoder SENC 150 mounted on the LIM.

The fixed weights of (13) are set to constant values.

The measured position, velocity, and currents are the inputs to the observer (12), which computes the α-flux and β-flux. Afterwards, position, velocity, currents, and fluxes are the inputs to the identifier (13), which uses the EKF-based training algorithm (7)-(9) to adjust its weights at each sampling time $T$.
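The per-sample pipeline just described (read signals, update the flux estimate, form the RHONN regressor, predict, and apply the EKF weight update) can be sketched for a single identifier state as below. The signal sources, observer gains, and regressor choice are stand-ins for illustration, not the dSPACE API or the LIM parameters:

```python
import numpy as np

def read_signals(k, T):
    # Stand-in for the alpha-beta currents and the SENC 150 encoder readings.
    t = k * T
    i_ab = np.array([np.sin(t), np.cos(t)])
    q, v = 0.1 * t, 0.1
    return q, v, i_ab

def observer_step(lam, i_ab, T):
    # Placeholder first-order flux estimator driven by measured currents.
    return lam + T * (-2.0 * lam + 1.5 * i_ab)

T = 1e-3
w = np.zeros(4)                        # adaptive weights of one neuron
P, Q, R = np.eye(4) * 100.0, np.eye(4) * 1e-2, 1.0
lam = np.zeros(2)
for k in range(1000):
    q, v, i_ab = read_signals(k, T)    # 1) acquire plant signals
    lam = observer_step(lam, i_ab, T)  # 2) estimate fluxes
    z = np.array([np.tanh(v), np.tanh(lam[0]), np.tanh(lam[1]), 1.0])
    x_hat = w @ z                      # 3) RHONN prediction (velocity state)
    e = v - x_hat                      # 4) identification error
    H = z                              # sensitivity: linear in the weights
    M = 1.0 / (R + H @ P @ H)          # 5) EKF update, as in (7)-(9)
    K = P @ H * M
    w = w + K * e
    P = P - np.outer(K, H) @ P + Q
```

On the board, steps 1-5 must all complete within one sampling period, which is why the reduced-order observer and per-neuron (decoupled) EKF updates are attractive for real-time use.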

Figs. 7 to 12 show the identification errors between the neural identifier (13) and the LIM for a real-time test with a sampling time of s and a total time of s. Table I shows the root mean square errors (RMSE) between the neural identifier (13) and the LIM state signals.

TABLE I
LIM ROOT MEAN SQUARE ERRORS

Identification error    RMSE
Velocity                0.0089
α-flux                  5.2903e-005
β-flux                  8.3943e-005
α current               0.2063
β current               0.1657
Position                1.4944e-005
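The RMSE figures above follow the standard definition, the square root of the mean squared identification error over the test run; a minimal sketch:

```python
import numpy as np

def rmse(x, x_hat):
    # Root mean square error between plant and identifier signal samples.
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return np.sqrt(np.mean((x - x_hat) ** 2))

err = rmse([0.0, 1.0, 2.0], [0.0, 1.0, 2.1])  # small error on one sample
```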

Fig. 1. MATLAB/Simulink Model.

Fig. 2. Linear Induction Motor LAB – Volt® 8228.

Fig. 3. Autotransformer.

Fig. 4. dSPACE DS1104 Controller Board.



Fig. 5. RTI 1104 Connector Panel.

Fig. 6. Power Module.

Fig. 7. Velocity identification error.

Fig. 8. α-flux identification error.

Fig. 9. β-flux identification error.

Fig. 10. α current identification error.

Fig. 11. β current identification error.

Fig. 12. Position identification error.



VII. CONCLUSION

This paper proposes the use of a RHONN for real-time identification of a LIM. The proposed RHONN identifier is trained on-line using an EKF-based algorithm and is implemented in real time using a dSPACE DS1104 board, showing its applicability. Currently, the authors are working on improving the neural identifier performance and on developing an identifier-controller scheme.

ACKNOWLEDGMENT

The authors thank the support of CONACYT Mexico, through Project 103191Y.

REFERENCES

[1] F. J. Gieras, Linear Induction Drives. Oxford, England: Oxford University Press, 1994.
[2] I. Boldea and S. A. Nasar, Linear Electric Actuators and Generators. Cambridge, England: Cambridge University Press, 1997.
[3] I. Takahashi and Y. Ide, "Decoupling control of thrust and attractive force of a LIM using a space vector control inverter," IEEE Trans. Ind. Applicat., vol. 29, pp. 161-167, Jan./Feb. 1993.
[4] V. H. Benitez, Neural Block Control: Application to a Linear Induction Motor (in Spanish), Master's thesis, Cinvestav, Unidad Guadalajara, Mexico, 2002.
[5] J. A. Farrell and M. M. Polycarpou, Adaptive Approximation Based Control: Unifying Neural, Fuzzy and Traditional Adaptive Approximation Approaches. New York, USA: John Wiley and Sons, 2006.
[6] M. Norgaard, O. Ravn, N. K. Poulsen and L. K. Hansen, Neural Networks for Modelling and Control of Dynamic Systems. New York, USA: Springer-Verlag, 2000.
[7] S. Jagannathan and F. L. Lewis, "Identification of nonlinear dynamical systems using multilayered neural networks," Automatica, vol. 32, no. 12, pp. 1707-1712, 1996.
[8] F. L. Lewis, S. Jagannathan and A. Yesildirek, Neural Network Control of Robot Manipulators and Nonlinear Systems. London: Taylor and Francis, 1999.
[9] W. Yu and X. Li, "Nonlinear system identification using discrete-time recurrent neural networks with stable learning algorithms," Information Sciences, no. 158, pp. 131-147, 2004.
[10] E. B. Kosmatopoulos, M. M. Polycarpou, M. A. Christodoulou and P. A. Ioannou, "High-order neural network structures for identification of dynamical systems," IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 422-431, 1995.
[11] R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks," Neural Computation, vol. 1, pp. 270-280, 1989.
[12] S. Haykin, Kalman Filtering and Neural Networks. New York, USA: John Wiley and Sons, 2001.
[13] E. N. Sanchez, A. Y. Alanis and A. G. Loukianov, Discrete-Time High Order Neural Control: Trained with Kalman Filtering. Berlin, Germany: Springer, 2008.
[14] A. Y. Alanis, E. N. Sanchez and A. G. Loukianov, "Discrete-time recurrent neural induction motor control using Kalman learning," in Proceedings of the International Joint Conference on Neural Networks 2006, Vancouver, Canada, July 2006.
[15] dSPACE, DS1104 Hardware Installation and Configuration and ControlDesk Experiment Guide, Paderborn, Germany, 2009.
[16] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, pp. 4-27, 1990.
[17] G. A. Rovithakis and M. A. Christodoulou, Adaptive Control with Recurrent High-Order Neural Networks. Berlin, Germany: Springer-Verlag, 2000.
[18] R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks," Neural Computation, vol. 1, pp. 270-280, 1989.
[19] C. Leung and L. Chan, "Dual extended Kalman filtering in recurrent neural networks," Neural Networks, vol. 16, pp. 223-239, 2003.
[20] A. Y. Alanis, E. N. Sanchez and A. G. Loukianov, "Real-time discrete neural block control using sliding modes for electric induction motors," IEEE Transactions on Control Systems Technology, vol. 18, no. 1, pp. 11-21, January 2010.
[21] L. A. Feldkamp, D. V. Prokhorov and T. M. Feldkamp, "Simple and conditioned adaptive behavior from Kalman filter trained recurrent networks," Neural Networks, vol. 16, pp. 683-689, 2003.
[22] R. Grover and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, 2nd ed. New York, USA: John Wiley and Sons, 1992.
[23] Y. Song and J. W. Grizzle, "The extended Kalman filter as local asymptotic observer for discrete-time nonlinear systems," Journal of Mathematical Systems, Estimation and Control, vol. 5, no. 1, pp. 59-78, 1995.
[24] A. S. Poznyak, E. N. Sanchez and W. Yu, Differential Neural Networks for Robust Nonlinear Control. Singapore: World Scientific, 2001.
[25] N. Kazantzis and C. Kravaris, "Time-discretization of nonlinear control systems via Taylor methods," Computers and Chemical Engineering, vol. 23, pp. 763-784, 1999.
[26] M. Hernandez-Gonzalez, E. N. Sanchez and A. G. Loukianov, "Discrete-time neural network control for a linear induction motor," in 2008 IEEE International Symposium on Intelligent Control, part of the 2008 IEEE Multi-conference on Systems and Control, San Antonio, Texas, USA, September 3-5, 2008.
[27] A. G. Loukianov, J. Rivera and J. M. Cañedo, "Discrete time sliding mode control of an induction motor," in Proceedings IFAC'02, Barcelona, Spain, July 2002.
[28] M. Hernandez-Gonzalez, Discrete-time Neural Control for the Linear Induction Motor (in Spanish), Master's thesis, Cinvestav, Unidad Guadalajara, Mexico, 2008.
