
PREDICTION USING ARTIFICIAL NEURAL NETWORKS

Rajasekhar Nannapaneni, Senior Advisor Solutions Architect, Dell EMC, [email protected]

Knowledge Sharing Article © 2017 Dell Inc. or its subsidiaries.


Table of Contents

List of Figures
List of Abbreviations
Abstract
A1.1 Cognitive capabilities of ANNs
A1.2 Assumptions of neural networks
A1.3 ANNs are successful in delivering human-like tools
A1.4 Outlook for ANNs
B1.1 Mackey-Glass chaotic time series prediction
C1.1 Architecture of artificial neural network
C1.2 The training algorithm for the feedforward MLP
C1.2.1 Forward Phase
C1.2.2 Reverse Phase
C1.2.3 Neural network Weight Updates
D1.1 Results of time series prediction
D1.2 Conclusion
References
Appendix – MATLAB code for Mackey-Glass chaotic time series


List of Figures

Figure 1: Feed forward multi-layer perceptron neural network architecture
Figure 2: Plot of Mackey-Glass time series prediction in MATLAB
Figure 3: Plot of mean square error during training process

List of Abbreviations

ANN – Artificial Neural Network
MLP – Multi-Layer Perceptron
MSE – Mean Square Error

Disclaimer: The views, processes or methodologies published in this article are those of the author.

They do not necessarily reflect Dell EMC’s views, processes or methodologies.


Abstract

Prediction of the future values of physical or logical parameters enables us to plan better, exploit opportunities, forecast more accurately, optimize resources and make better decisions, enhancing the way we live.

Time series prediction models the future values of a specific parameter based on its current data. Examples of time series-based models include average rainfall forecasts, internet traffic rates, business trend forecasting, weather forecasts and the spread of contagious diseases.

Artificial neural networks (ANNs) are increasingly successful in predicting time series data through regression-based models. In this article, a benchmark time series model called the Mackey-Glass chaotic time series is predicted using a feedforward multi-layer perceptron neural network with the error backpropagation algorithm.

The architecture and algorithm developed in this article can easily be extended to any real-world time series prediction requirement.


A1.1 Cognitive capabilities of ANNs

Artificial neural networks are biologically inspired: their building block, the artificial neuron, is derived from the McCulloch-Pitts model, which tries to emulate a biological neuron. Though artificial neural networks cannot perfectly emulate the behavior of a human brain, they can perform a number of tasks that humans do better than traditional programming or rule-based systems can.

Artificial neural networks can be designed with various combinations of interconnected neurons, resulting in different topologies or structures with varying computations and connection weights, and these structures can perform powerful human-like tasks.

ANNs are naturally good at pattern (image or speech) recognition and association tasks, where they perform almost as well as, and sometimes even better than, humans. Their importance lies in their ability to classify or categorize unseen data and to predict unknown future trends. They learn by example, mapping input patterns to output patterns, and they are fault tolerant and robust in executing their tasks.

In some cases, artificial neural networks face the limitation of overfitting, where the network produces correct outputs during training but incorrect results in production. Training them with a variety of datasets is therefore crucial to make the networks robust.

A1.2 Assumptions of neural networks

Artificial neural networks are parallel structures constructed from a building block called the artificial neuron, derived from the McCulloch-Pitts model, which is designed to emulate a biological neuron. The McCulloch-Pitts model of the artificial neuron is a simple one that does not match the complexity of a biological neuron, and thus artificial neural networks have not achieved near-perfect emulation of the human brain.

Artificial neural networks start with random initial behavior and are gradually trained either with a teacher (supervised) or without one (unsupervised), depending on the application and scenario. The layers in the network help extract features from the inputs; in other words, the layers map the input space into a feature space that captures the logic needed for the network to produce the desired output. This training procedure is assumed to be similar to human learning; however, the logic and information determined within the network remain a black box, which is a major limitation of neural networks.


The topological connectivity of perceptrons is another assumption borrowed from the interconnectivity of biological neurons. According to Hebb, the bond between two neurons strengthens as each neuron is activated by the other. The same mechanism is assumed for artificial neural networks, where the synaptic weights keep increasing in a competitive learning method as each neuron is activated by the other. [1]

Neural networks also assume the adaptive nature of human behavior in changing environments: they dynamically adapt to new inputs and adjust or modify their weights accordingly.

The feedback mechanism in neural networks is associated with memory, another assumption borrowed from the human brain.

There is also an inherent spatial influence of one neuron over another in artificial neural networks, similar to the human brain, where biological neurons related to a specific sense or task are located in one region of the brain and the neurons in that region influence one another.

A1.3 ANNs are successful in delivering human-like tools

Artificial neural networks have been reasonably successful in delivering specific tool sets that emulate human-like behavior.

Artificial neural networks behave in some ways like humans. For instance, a newly created neural network behaves randomly, like a human child, and as the network is trained it becomes more and more result-oriented, similar to a child being taught.

There is also a memory aspect: a child remembers learned experience so that it can be applied later, which is analogous to a neural network encoding its learning in synaptic weights and feedback mechanisms that can later be re-applied in real time. In fact, humans may forget what they have learnt, but a neural network does not, which can be considered an advantage over humans in this respect. [4][5]

Humans tend to specialize in a particular field of learning rather than all fields, since it is difficult to master every subject; similarly, neural networks are designed and implemented for specific tasks. A neural network becomes more efficient at performing a task as it is trained on and exposed to more data, just as humans keep learning to improve their overall intelligence.


In many applications such as character recognition, a neural network, once trained with sufficient data, can produce more accurate results than humans, for whom there is always scope for error. Neural networks are efficient at mapping a set of input dimensions to a set of output dimensions, which makes them useful in a wide variety of applications. [4][5]

Though neural networks have a few limitations, such as overfitting and instability, there are ways to overcome them. As today's world moves towards handling Big Data, with its volume, veracity and variety, neither manual human interpretation nor traditional rule-based programming can address applications such as prediction, character recognition, face recognition, medicine or security. Neural networks can robustly and accurately handle huge datasets in nonlinear, dynamic environments.

A1.4 Outlook for ANNs

Artificial neural networks are becoming more capable in today's world of computing, where the volume of data involved in analysis is huge and traditional rule-based systems and programming are inefficient. As the dimensionality of the input data increases, the need for new approaches also increases because of the inherent challenges with existing tools. Artificial neural networks can fill this space for new-generation tools.

Newer training methods, such as metaheuristic approaches, are beginning to provide better results than existing training methods. Artificial neural networks with more layers, called deep learning networks, are becoming popular, especially in areas involving Big Data.

The parallel structure of artificial neural networks results in speedy responses, and hence they are naturally suited to real-time systems. It can be concluded that artificial neural networks have great potential to be adapted to solve complex problems now and in the future.


B1.1 Mackey-Glass chaotic time series prediction

The prediction of time series has always been a subject of interest, as it relates closely to many real-world forecasting models. The Mackey-Glass time series is one such dynamic chaotic time series used as a benchmark for prediction. It is a nonlinear chaotic time series whose equation is given as (1).

\frac{dx(t)}{dt} = \frac{0.2\,x(t-\tau)}{1 + x^{10}(t-\tau)} - 0.1\,x(t) \qquad (1)

where τ is the time delay (τ = 17 for the benchmark series used here).

In this article, the Mackey-Glass time series is predicted efficiently using a feedforward neural network with the error backpropagation algorithm, an approach that could later be extended to other real-world time series.
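
As a side illustration (not part of the article's Appendix, which loads a pre-computed dataset, mgdata.dat), the series in (1) could be generated numerically. The sketch below is a minimal example using simple Euler integration with the commonly used benchmark parameters; the variable names are illustrative only.

% Minimal sketch: generate a Mackey-Glass series by Euler integration of (1).
% Assumes the standard benchmark parameters (0.2, 0.1, exponent 10, tau = 17);
% the Appendix instead loads the pre-computed dataset mgdata.dat.
tau = 17; dt = 1; N = 1201;
x = zeros(1, N + tau);
x(1:tau) = 1.2;                        % constant history before the start
for t = tau:(N + tau - 1)
    dx = 0.2*x(t-tau+1) / (1 + x(t-tau+1)^10) - 0.1*x(t);
    x(t+1) = x(t) + dt*dx;             % Euler step
end
mg = x(tau+1:end);                     % 1201-sample series analogous to mgdata
plot(mg); title('Mackey-Glass series (Euler sketch)');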

C1.1 Architecture of artificial neural network:

The Mackey-Glass time series can be predicted using a feedforward multi-layer perceptron neural network. The feedforward network architected for this prediction has two hidden layers: the 1st hidden layer has 6 neurons, the 2nd hidden layer has 3 neurons, and the output layer has 1 neuron.

Figure 1 details the architecture considered for this neural network.

Figure 1: Feed forward multi-layer perceptron neural network architecture


The neurons in Figure 1 are fully connected: all inputs at the input layer are applied to all 1st hidden layer neurons, the outputs of the 1st hidden layer neurons are applied to all 2nd hidden layer neurons, and all outputs of the 2nd hidden layer are applied to the output layer neuron.

The activation function of the hidden layer neurons is the log sigmoid function, and the activation function of the output layer is a linear function.

The Mackey-Glass time series is a nonlinear function; hence, predicting it requires introducing non-linearity throughout the network, which is accomplished by using the log sigmoid as the activation function. The log sigmoid function is differentiable, so its derivative can be used to compute the error gradients during backpropagation.
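
For reference, a minimal sketch of the log sigmoid and its derivative as MATLAB anonymous functions is shown below; the helper names are illustrative and not taken from the Appendix code.

% Log sigmoid and its derivative (illustrative helper functions).
logsig_fn  = @(a) 1 ./ (1 + exp(-a));   % f(a) = 1 / (1 + e^(-a))
dlogsig_fn = @(c) c .* (1 - c);         % f'(a) expressed in terms of c = f(a)

a = -5:0.1:5;
c = logsig_fn(a);
plot(a, c, a, dlogsig_fn(c));           % the sigmoid and its derivative
legend('logsig(a)', 'd logsig / da');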

Since time series prediction is a regression problem, using a linear activation function at the output layer is justified: the network must produce continuous real values rather than binary values.

The selection of 6 neurons in the 1st hidden layer and 3 neurons in the 2nd hidden layer is justified in section D1.1, where it is shown that the mean square error during training is of the order of 10^-5, which supports the suitability of this architecture for predicting the given time series.

A network architecture with a single hidden layer did not achieve an error of the order of 10^-5, while architectures with more layers or more neurons increase the complexity and resource requirements. Hence, the 2-hidden-layer feedforward network uses just the right resources for the prediction while achieving a reasonable mean square error.
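
A minimal sketch of how this 5 x 6 x 3 x 1 topology (four time-delayed inputs plus a bias) can be set up with small random weights, mirroring the variable sizes and initialization used in the Appendix code (uppercase names are used here to match the matrices W, V and U defined in section C1.2.1):

% Network dimensions for the 5 x 6 x 3 x 1 topology (see Figure 1).
n = 5;                       % inputs: inp(t), inp(t-6), inp(t-12), inp(t-18), bias
m = 6;                       % neurons in the 1st hidden layer
l = 3;                       % neurons in the 2nd hidden layer
r = 1;                       % neurons in the output layer

% Small random initial weights in [-0.1, 0.1], as in the Appendix code.
W = 0.2*rand(m, n) - 0.1;    % 1st hidden layer weights (6 x 5)
V = 0.2*rand(l, m) - 0.1;    % 2nd hidden layer weights (3 x 6)
U = 0.2*rand(r, l) - 0.1;    % output layer weights (1 x 3)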

C1.2 The training algorithm for the feedforward MLP

The training algorithm used in the feedforward neural network to predict the Mackey-Glass time series is backpropagation.

Multiple algorithms are available; however, the backpropagation algorithm is sufficient to achieve good results for this time series prediction. Backpropagation uses the derivatives of the activation functions, and since the architecture described in section C1.1 uses differentiable activation functions (log sigmoid and linear), it is relatively simple to apply backpropagation to the time series prediction problem.


The MATLAB code for predicting the time series is given in the Appendix, and the corresponding neural network architecture is shown in Figure 1.

The objective is to predict the (t+6)th value of the time series using the t-th, (t-6)th, (t-12)th and (t-18)th values. The Mackey-Glass time series has a total of 1201 values, of which the first 600 points are used for training the network and the remaining points (samples 601 to 1201) are used for testing the results of the network.
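
A minimal sketch of how the delayed-input vector and target could be formed for one training sample, assuming the series has been loaded into inp as in the Appendix (the variable names mirror the Appendix code; the index t = 100 is just an example):

% Build one training example at time index t (t >= 19 so all delays exist).
load mgdata.dat;                 % benchmark data; column 2 holds the series values
inp = mgdata(:, 2);
b = 1;                           % bias input
t = 100;                         % example time index

X     = [inp(t); inp(t-6); inp(t-12); inp(t-18); b];   % 5 x 1 input vector
y_des = inp(t+6);                % desired output: the value six steps ahead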

Training the feedforward neural network with error backpropagation has two phases, called the forward phase and the backward phase. In the forward phase the inputs are propagated forward towards the output; in the backward phase the error is computed at each layer and propagated backward towards the inputs.

C1.2.1 Forward Phase:

The following are the steps involved in the forward phase of training; the variables used match the code in the Appendix and the architecture in Figure 1:

n=5 (number of inputs including the bias)

m=6 (number of neurons in 1st hidden layer)

l=3 (number of neurons in 2nd hidden layer)

r=1 (number of neurons in output layer)

X is the input vector, of order (n x 1), i.e. (5 x 1) in this case, and is given by

X = \begin{bmatrix} inp(t) \\ inp(t-6) \\ inp(t-12) \\ inp(t-18) \\ b \end{bmatrix}

W is the weight matrix of the 1st hidden layer, of order (m x n), i.e. (6 x 5) in this network, and is given by

W = \begin{bmatrix}
w_{11} & w_{12} & w_{13} & w_{14} & w_{15} \\
w_{21} & w_{22} & w_{23} & w_{24} & w_{25} \\
w_{31} & w_{32} & w_{33} & w_{34} & w_{35} \\
w_{41} & w_{42} & w_{43} & w_{44} & w_{45} \\
w_{51} & w_{52} & w_{53} & w_{54} & w_{55} \\
w_{61} & w_{62} & w_{63} & w_{64} & w_{65}
\end{bmatrix}

The activation vector a of the 1st hidden layer is of order (m x 1), i.e. (6 x 1), and is given by

a = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \end{bmatrix} = W X

The decision vector c at the output of the 1st hidden layer is obtained by passing the activation vector a through the log sigmoid function element-wise:

c = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ c_6 \end{bmatrix} = \frac{1}{1 + e^{-a}}

V is the weight matrix of the 2nd hidden layer, of order (l x m), i.e. (3 x 6) in this network, and is given by

V = \begin{bmatrix}
v_{11} & v_{12} & v_{13} & v_{14} & v_{15} & v_{16} \\
v_{21} & v_{22} & v_{23} & v_{24} & v_{25} & v_{26} \\
v_{31} & v_{32} & v_{33} & v_{34} & v_{35} & v_{36}
\end{bmatrix}

The activation vector d of the 2nd hidden layer is of order (l x 1), i.e. (3 x 1), and is given by

d = \begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix} = V c

The decision vector e at the output of the 2nd hidden layer is obtained by passing the activation vector d through the log sigmoid function element-wise:

e = \begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix} = \frac{1}{1 + e^{-d}}

U is the weight matrix of the output layer, of order (r x l), i.e. (1 x 3) in this network, and is given by

U = \begin{bmatrix} u_{11} & u_{12} & u_{13} \end{bmatrix}

The output y_out is a scalar and is given by

y_out = U e = \begin{bmatrix} u_{11} & u_{12} & u_{13} \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix}

This concludes the forward phase of training; the error computation in the reverse direction is detailed in section C1.2.2.
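
As a minimal MATLAB sketch of the forward phase for a single sample (assuming X, W, V and U have already been created, for example as in the initialization sketch in section C1.1):

% Forward phase for one input vector X (5 x 1), matching the equations above.
a = W * X;                 % 1st hidden layer activation (6 x 1)
c = 1 ./ (1 + exp(-a));    % log sigmoid decision vector (6 x 1)
d = V * c;                 % 2nd hidden layer activation (3 x 1)
e = 1 ./ (1 + exp(-d));    % log sigmoid decision vector (3 x 1)
y_out = U * e;             % linear output layer: scalar prediction of inp(t+6)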


C1.2.2 Reverse Phase:

In the backward phase, the error is computed at each node of the network and propagated from the output back towards the input, and the weight matrices are adjusted accordingly.

y_des = inp(t+6) is the desired output, and y_out (the actual output of the forward phase) should align closely with y_des. Hence the error at the output layer (ey) is computed as

ey = y_des − y_out

The decision error (ee) at the output of the 2nd hidden layer is given by

ee = U' \, ey = \begin{bmatrix} u_{11} \\ u_{12} \\ u_{13} \end{bmatrix} ey

The activation error (ed) at the 2nd hidden layer is computed element-wise as

ed = e \odot (1 - e) \odot ee = \begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix} \odot \left( 1 - \begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix} \right) \odot ee

The decision error (ec) at the output of the 1st hidden layer is given by

ec = V' \, ed = \begin{bmatrix}
v_{11} & v_{21} & v_{31} \\
v_{12} & v_{22} & v_{32} \\
v_{13} & v_{23} & v_{33} \\
v_{14} & v_{24} & v_{34} \\
v_{15} & v_{25} & v_{35} \\
v_{16} & v_{26} & v_{36}
\end{bmatrix} ed

The activation error (ea) at the 1st hidden layer is computed element-wise as

ea = c \odot (1 - c) \odot ec

where \odot denotes element-wise (Hadamard) multiplication and the prime denotes the matrix transpose.

This concludes the error computation in the reverse direction; section C1.2.3 details how the weights are updated so that the network is ready to be used in testing.
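
Continuing the forward-phase sketch above, the error terms for the same sample can be computed in MATLAB as follows (this mirrors the Appendix code; y_des is the desired value inp(t+6)):

% Backward phase: error terms for one sample, matching the equations above.
ey = y_des - y_out;        % output layer error (scalar)
ee = U' * ey;              % decision error at the 2nd hidden layer output (3 x 1)
ed = e .* (1 - e) .* ee;   % activation error at the 2nd hidden layer (3 x 1)
ec = V' * ed;              % decision error at the 1st hidden layer output (6 x 1)
ea = c .* (1 - c) .* ec;   % activation error at the 1st hidden layer (6 x 1)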

C1.2.3 Neural network Weight Updates

As the errors are propagated in the reverse direction, the weight deltas are computed and the weights are updated accordingly.

Two parameters, the learning gain (γg) and the momentum (γm), are used to balance the contribution of the current delta weights against the delta weights from the previous update.

The equations for the delta weights are given by

del U = γg × ey × e' + γm × del U (previous)

del V = γg × ed × c' + γm × del V (previous)

del W = γg × ea × X' + γm × del W (previous)

where ey × e', ed × c' and ea × X' are outer products of the error terms with the corresponding layer inputs.


The weights are updated in an ONLINE (per-sample) training process rather than batch training to increase the efficiency of predicting the time series. The weight update equations are given by

U = U + del U

V = V + del V

W = W + del W

The training is performed for 8000 epochs and the weights are updated ONLINE after each training input

sample.
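
A minimal sketch of the ONLINE weight update for one sample, in the style of the Appendix code (delU, delV and delW hold the previous deltas and are initialized to zero matrices; the elementwise products rely on MATLAB implicit expansion, R2016b or later, to form the outer products):

% ONLINE weight update for one sample (gamma_g = gain, gamma_m = momentum).
gamma_g = 0.1;  gamma_m = 0.95;               % values used in the Appendix code

delU = gamma_g * ey .* e' + gamma_m * delU;   % 1 x 3  (ey is a scalar)
delV = gamma_g * ed .* c' + gamma_m * delV;   % 3 x 6  outer product ed * c'
delW = gamma_g * ea .* X' + gamma_m * delW;   % 6 x 5  outer product ea * X'

U = U + delU;                                 % apply the deltas to each layer
V = V + delV;
W = W + delW;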

The training algorithm (error backpropagation), the number of epochs and the training method (ONLINE training) are easily justified for predicting the given time series, as this approach is not only simple but also efficient. Error backpropagation is chosen because all the activation functions used by the neurons in the feedforward network are differentiable, and the ONLINE training method resulted in a mean square error of the order of 10^-5, which can be considered efficient.

D1.1 Results of time series prediction

The MATLAB code in the Appendix predicts the 1201-sample Mackey-Glass time series. The code is developed based on the training algorithm described in section C1.2.

Figure 2 shows the plot of the time series prediction in MATLAB, where the desired Mackey-Glass time series output is plotted in RED. The training output is plotted in BLUE for input samples 1 to 600, and the testing output is plotted in GREEN for predicted samples 601 to 1201.

Figure 2: Plot of Mackey-Glass time series prediction in MATLAB


Figure 2 shows that the training output in BLUE coincides very closely with the desired output in RED, and similarly the testing output in GREEN closely approximates the desired output. Hence, the testing can be considered successful.

Figure 3 shows the mean square error obtained at each epoch during training; the mean square error decreases over the epochs. A total of 8000 epochs was sufficient, and the code achieved a mean square error of 5.2 × 10^-5 during the testing phase.

Figure 3: Plot of mean square error during training process

D1.2 Conclusion

The Mackey-Glass time series can be efficiently predicted by a multilayer perceptron network with the error backpropagation algorithm. Many other network topologies and algorithms could also be used to predict this time series.

The mean square error achieved during the testing phase was of the order of 10^-5, which is a good indicator of the efficiency of the network architecture (shown in Figure 1) and the algorithm used (detailed in section C1.2).

It is also important to consider the number of training epochs; in this case 8000 epochs were sufficient to produce the desired network. The 5 x 6 x 3 x 1 network topology was an appropriate choice to achieve the above-mentioned MSE within 8000 epochs. Other tuning parameters, such as the learning gain and momentum, are tuned appropriately for this time series in the code in the Appendix.


It can be concluded that a multilayer perceptron network with two hidden layers and the error backpropagation algorithm can predict this time series with reasonable computing power and a simple architecture.

References

[1] Jack V. Tu, "Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes," Journal of Clinical Epidemiology, Volume 49, Issue 11, 1996, Pages 1225-1231, ISSN 0895-4356.

[2] Robert C. Sass, "Neural Networks: Capabilities & Applications," Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309.

[3] Šíma J. (2001), "The Computational Capabilities of Neural Networks," in Kůrková V., Neruda R., Kárný M., Steele N.C. (eds), Artificial Neural Nets and Genetic Algorithms, Springer, Vienna.

[4] Ubiquity, Volume 4, Issue 37, Nov. 12-18, 2003, http://www.acm.org/ubiquity/

[5] Urolagin S., Prema K.V., Reddy N.V.S. (2011), "Generalization Capability of Artificial Neural Network Incorporated with Pruning Method," LNCS 7135, pp. 171-178, doi:10.1007/978-3-642-29280-4_19.


Appendix – MATLAB code for Mackey-Glass chaotic time series

clear; clc; close all;

%%%%%%%%%%%%%%% Variable Declaration %%%%%%%%%%%%%%%
n=5;                    %no of inputs
r=1;                    %no of outputs
m=6;                    %no of neurons in 1st hidden layer
l=3;                    %no of neurons in 2nd hidden layer
gamma_g=0.1;            %learning gain
gamma_m=0.95;           %momentum
b=1;                    %bias input
w=0.2*rand(m,n)-0.1;    %weights of 1st hidden layer
v=0.2*rand(l,m)-0.1;    %weights of 2nd hidden layer
u=0.2*rand(r,l)-0.1;    %weights of output layer
delw=zeros(m,n);
delv=zeros(l,m);
delu=zeros(r,l);

load mgdata.dat;        %Mackey-Glass benchmark data
inp=mgdata(:,2);
x=1:1:1201;
y_out=zeros(1,1201);
for t1=1:24
    y_out(t1)=inp(t1);
end

%%%%%%%%%%%%%%% Training %%%%%%%%%%%%%%%
for epoch=1:8000
    for t = 19:594
        gamma_m=gamma_m*t/594;
        x1=inp(t-18); x2=inp(t-12); x3=inp(t-6); x4=inp(t);
        vec_x=[x1;x2;x3;x4;b];
        a=w*vec_x;                      %1st hidden layer activation
        c=1./(1+exp(-a));               %log sigmoid
        d=v*c;                          %2nd hidden layer activation
        e=1./(1+exp(-d));               %log sigmoid
        y_out(t+6)=u*e;                 %linear output
        y_des=inp(t+6);

        ey=y_des-y_out(t+6);            %output error
        ee=u' * ey;
        ed=e.*(1-e).*ee;
        ec=v' * ed;
        ea=c.*(1-c).*ec;
        error(t)=ey;

        delu=gamma_g*ey.*e' + gamma_m*delu;
        delv=gamma_g*ed.*c' + gamma_m*delv;
        delw=gamma_g*ea.*vec_x' + gamma_m*delw;
        u=u+delu;
        v=v+delv;
        w=w+delw;
    end
    train_mse(epoch) = mean((error).^2);
end

%%%%%%%%%%%%%%% Testing %%%%%%%%%%%%%%%
for t = 595:1195
    z1=inp(t-18); z2=inp(t-12); z3=inp(t-6); z4=inp(t);
    vec_z=[z1;z2;z3;z4;b];
    a1=w*vec_z;
    c1=1./(1+exp(-a1));
    d1=v*c1;
    e1=1./(1+exp(-d1));
    y_out(t+6)=u*e1;
    y_test(t+6)=u*e1;
    err(t)=inp(t+6)-y_test(t+6);
end
test_mse=mean((err).^2);

%%%%%%%%%%%%%%% Output Plotting %%%%%%%%%%%%%%%
figure;
plot(x,inp,'r-',x,y_out,'b:',x,y_test,'g:','lineWidth',2)
axis([0 1201 0.2 1.4]); grid on
xlabel('Input to the Neuron')
ylabel('Output of the Neural Network')
legend('Desired Output','Training Output','Testing Output','Location','North')
title('Mackey-Glass Time Series Prediction')

figure;
plot(train_mse,'b'); grid on
xlabel('epochs')
ylabel('mse train error')
title('Error during training')

fprintf(1,'mse test error is %f',test_mse)


Dell EMC believes the information in this publication is accurate as of its publication date. The information

is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying and distribution of any Dell EMC software described in this publication requires an applicable

software license.

Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries.