Simple Neural Nets for Pattern Classification


Description

Simple Neural Nets for Pattern Classification. Prepared by Eng. Banan Abdul Kareem Mahmoudi. How are neural networks used? Typical architectures: it is convenient to visualize neurons as arranged in layers - PowerPoint PPT Presentation

Transcript of Simple Neural Nets for Pattern Classification

Page 1: Simple Neural Nets for Pattern Classification

Simple Neural Nets for Pattern Classification

Prepared by Eng. Banan Abdul Kareem Mahmoudi

Page 2: Simple Neural Nets for Pattern Classification

How are neural networks used?

Typical architectures: it is convenient to visualize neurons as arranged in layers.

Neurons in the same layer behave in the same manner.

The key factors in determining the behavior of a neuron are its activation function and the pattern of weighted connections over which it sends and receives signals.

Within each layer, neurons usually have the same activation function and the same pattern of weighted connections to other neurons.

Page 3: Simple Neural Nets for Pattern Classification

Typical architectures

The arrangement of neurons into layers, and the connection patterns within and between layers, is called the net architecture.

Many neural nets have an input layer in which the activation of each unit is equal to an external input signal, as in the following figure.

The net in the figure consists of input units, output units, and one hidden unit that is neither an input nor an output unit.

Page 4: Simple Neural Nets for Pattern Classification

Typical architectures

Neural nets are classified as single layer or multilayer.

In determining the number of layers, the input units are not counted as a layer, because they perform no computation.

The number of layers in the net can be defined as the number of layers of weighted interconnected links between the slabs of neurons; the net in the previous figure, for example, consists of two layers.

Page 5: Simple Neural Nets for Pattern Classification

Single layer net

Has one layer of connection weights.

The units can be distinguished as input units, which receive signals from the outside world, and output units, from which the response of the net can be read.

In the typical single-layer net shown in the following figure, the input units are fully connected to the output units, but are not connected to other input units, and the output units are not connected to other output units.

Page 6: Simple Neural Nets for Pattern Classification

[Figure: A single-layer net — input units X1, …, Xi, …, Xn fully connected through one layer of weights w11, …, wij, …, wnm to output units y1, …, yj, …, ym]

Page 7: Simple Neural Nets for Pattern Classification

Single layer net

For pattern classification, each output unit corresponds to a particular category.

Note: in a single-layer net, the weights for one output unit do not influence the weights for other output units.

For pattern association, the same architecture can be used, but now the overall pattern of output signals gives the response pattern associated with the input signal that caused it to be produced.

Page 8: Simple Neural Nets for Pattern Classification

Multilayer net

A net with one or more layers (or levels) of nodes, called hidden units, between the input units and the output units.

Can solve more complicated problems than single-layer nets can.

Training is more difficult, but in some cases more successful.

Page 9: Simple Neural Nets for Pattern Classification

[Figure: A multilayer neural net — input units connected through weights v11, …, vnp to hidden units Z1, …, Zp, which connect through weights w11, …, wpm to output units y1, …, ym]

Page 10: Simple Neural Nets for Pattern Classification

Setting the weights

The method of setting the values of the weights (training) is an important distinguishing characteristic of different neural nets.

Two types of training: supervised and unsupervised.

There are also nets whose weights are fixed without an iterative training process.

There is a useful correspondence between the type of training that is appropriate and the type of problem we wish to solve.

Page 11: Simple Neural Nets for Pattern Classification

Supervised training

Training is accomplished by presenting a sequence of training vectors, or patterns, each with an associated target output vector. The weights are then adjusted according to a learning algorithm. This process is known as supervised training.

In pattern classification, the output is an element, say 1 (if the input vector belongs to the category) or -1 (if it does not belong).

In pattern association, if the desired output vector is the same as the input vector, the net is an autoassociative memory.

Page 12: Simple Neural Nets for Pattern Classification

Supervised training

If the output target vector is different from the input vector, the net is a heteroassociative memory.

After training, an associative memory can recall a stored pattern when it is given an input vector that is sufficiently similar to a vector it has learned.

The single-layer nets discussed here (pattern classification nets and pattern association nets) use supervised training.

Page 13: Simple Neural Nets for Pattern Classification

Unsupervised training

Self-organizing neural nets group similar input vectors together without the use of training data to specify what a typical member of each group looks like.

A sequence of input vectors is provided, but no target vectors are assigned.

The net modifies its weights so that the most similar input vectors are assigned to the same output (or cluster) unit.

The neural net will produce an exemplar (representative) vector for each cluster formed.

Page 14: Simple Neural Nets for Pattern Classification

Fixed weight nets

The weights are set to represent the constraints and the quantity to be maximized or minimized.

Fixed weights are also used in contrast-enhancing nets.

Page 15: Simple Neural Nets for Pattern Classification

Summary of notation

xi, yj — activations of units Xi, Yj, respectively:
for input units Xi, xi = input signal; for other units Yj, yj = f(y_inj).

wij — weight on the connection from unit Xi to unit Yj.
(Beware: some authors use the opposite convention, with wji denoting the weight from unit Xi to unit Yj.)

bj — bias on unit Yj. A bias acts like a weight on a connection from a unit with a constant activation of 1 (figure 1.11).

Page 16: Simple Neural Nets for Pattern Classification

Summary of notation

y_inj — net input to unit Yj:

y_inj = bj + Σi xi wij

W = {wij} — weight matrix, holding the weight from unit i to unit j (i: row index; j: column index).

θj — threshold for the activation of neuron Yj.

Page 17: Simple Neural Nets for Pattern Classification

Summary of notation

x = (x1, …, xi, …, xn) — input vector.

t = (t1, …, tj, …, tm) — target output (or training) vector: the set of output signals the net is supposed to produce once the learning phase is finished.

s = (s1, …, si, …, sn) — training input vector: the set of input signals presented to the net during the learning phase.

Δwij — change in weight wij:

Δwij = wij(new) − wij(old)

α — learning rate, used to control the amount of weight adjustment at each step of training.

Page 18: Simple Neural Nets for Pattern Classification

Matrix multiplication method for calculating net input

If the connection weights for a neural net are stored in a matrix W = {wij}, the net input to unit Yj (with no bias on unit j) is simply the dot product of the vector x = (x1, x2, …, xn) with the jth column of the weight matrix:

y_inj = Σi xi wij
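The computation above can be sketched in plain Python (variable and function names here are our own, not from the slides): the net inputs to all output units are obtained in one pass as a vector–matrix product.

```python
# Net input y_in_j = sum_i x_i * w_ij for every output unit Y_j,
# computed as the product of the input vector x with the weight
# matrix W (rows indexed by input unit i, columns by output unit j).

def net_input(x, W):
    """x: list of n inputs; W: n-by-m weight matrix as a list of rows.
    Returns the list of m net inputs y_in_j = sum_i x[i] * W[i][j]."""
    n, m = len(W), len(W[0])
    return [sum(x[i] * W[i][j] for i in range(n)) for j in range(m)]

x = [1, -1, 1]
W = [[1, 0],   # weights from X1 to Y1, Y2
     [2, 1],   # weights from X2 to Y1, Y2
     [0, 3]]   # weights from X3 to Y1, Y2
print(net_input(x, W))  # [-1, 2]
```

Each entry of the result is the dot product of x with one column of W, exactly as in the formula above.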

Page 19: Simple Neural Nets for Pattern Classification

Bias

A bias can be included by adding a component x0 = 1 to the vector x, i.e. x = (1, x1, …, xi, …, xn).

The bias is treated exactly like any other weight, i.e. w0j = bj. The net input to unit Yj is then given by:

y_inj = Σ (i=0 to n) xi wij = w0j + Σ (i=1 to n) xi wij = bj + Σ (i=1 to n) xi wij
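The folding of the bias into the weight vector can be checked with a small sketch (function names and sample values are illustrative): both formulations give the same net input.

```python
# Bias as an explicit term vs. bias folded in as weight w0 on a
# constant input x0 = 1 -- the two net inputs are identical.

def net_input_with_bias(x, w, b):
    return b + sum(xi * wi for xi, wi in zip(x, w))

def net_input_folded(x, w, b):
    x_aug = [1] + list(x)   # prepend the constant component x0 = 1
    w_aug = [b] + list(w)   # prepend w0 = b
    return sum(xi * wi for xi, wi in zip(x_aug, w_aug))

x, w, b = [1, -1], [2, 3], 0.5
assert net_input_with_bias(x, w, b) == net_input_folded(x, w, b) == -0.5
```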

Page 20: Simple Neural Nets for Pattern Classification

Architecture

The basic architecture of the simplest possible neural networks that perform pattern classification consists of a layer of input units and a single output unit.

Page 21: Simple Neural Nets for Pattern Classification

Neuron with a bias

[Figure: input units, plus a bias unit with constant activation 1, connected to a single output unit]

Page 22: Simple Neural Nets for Pattern Classification

Biases and Thresholds

A bias acts exactly like a weight on a connection from a unit whose activation is always 1. Increasing the bias increases the net input to the unit. If a bias is included, the activation function is typically taken to be:

f(net) = 1 if net ≥ 0; −1 if net < 0

where net = b + Σi xi wi

Page 23: Simple Neural Nets for Pattern Classification

Biases and Thresholds

Some authors do not use a bias weight, but instead use a fixed threshold θ for the activation function. In that case:

f(net) = 1 if net ≥ θ; −1 if net < θ

where net = Σi xi wi

This is exactly equivalent to using a bias.
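The equivalence between a fixed threshold θ and a bias b = −θ can be verified with a short sketch (names and the sample weights are our own):

```python
# Threshold form: fire if sum(x*w) >= theta.
# Bias form:      fire if b + sum(x*w) >= 0, with b = -theta.
# The two activation functions agree on every input.

def f_threshold(x, w, theta):
    net = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if net >= theta else -1

def f_bias(x, w, b):
    net = b + sum(xi * wi for xi, wi in zip(x, w))
    return 1 if net >= 0 else -1

# Check on all four bipolar input pairs with illustrative weights (1, 1):
for x in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    assert f_threshold(x, (1, 1), theta=1.5) == f_bias(x, (1, 1), b=-1.5)
```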

Page 24: Simple Neural Nets for Pattern Classification

Linear Separability

If a linear decision boundary exists that correctly classifies all the training samples, the samples are said to be linearly separable.

The linear decision boundary is given by the equation:

b + x1 w1 + x2 w2 = 0

which, for w2 ≠ 0, can be rewritten as

x2 = −(w1 / w2) x1 − b / w2
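The boundary equation can be evaluated directly; the sketch below (with illustrative weights of our own choosing) computes points on the separating line.

```python
# Decision boundary b + x1*w1 + x2*w2 = 0 rewritten as
# x2 = -(w1/w2)*x1 - b/w2 (requires w2 != 0).

def boundary_x2(x1, w1, w2, b):
    assert w2 != 0, "the rewritten form is undefined when w2 == 0"
    return -(w1 / w2) * x1 - b / w2

# With illustrative weights w1 = w2 = 1 and bias b = -1.5, the line is
# x2 = -x1 + 1.5, which separates (1,1) from the other three corners
# of the unit square (the AND function).
print(boundary_x2(0, 1, 1, -1.5))  # 1.5
print(boundary_x2(1, 1, 1, -1.5))  # 0.5
```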

Page 25: Simple Neural Nets for Pattern Classification

Linear Separability

Problems that are not linearly separable are sometimes referred to as nonlinearly separable, or topologically complex.

If the n training samples are linearly separable by a linear decision boundary, a single-layer feedforward network is capable of correctly classifying the samples.

Page 26: Simple Neural Nets for Pattern Classification

Linear Separability

Examples: logic functions and decision regions.

[Figure: the four input points (0,0), (0,1), (1,0), (1,1) labeled with the outputs of AND, OR, and XOR; a single straight line separates the 1-outputs from the 0-outputs for AND and for OR, but no single line can do so for XOR]

Page 27: Simple Neural Nets for Pattern Classification

Linear Separability

Referring to the figure, we observe that the common logic functions OR and AND are linearly separable, while XOR is not linearly separable.

Page 28: Simple Neural Nets for Pattern Classification

Learning Rule

Associated with each neural network is a learning rule, which changes its weights.

Normally, the learning rule defines how to change the weights in response to given input/output pairs.

Page 29: Simple Neural Nets for Pattern Classification

Hebb’s Rule

More than 50 years ago, Donald O. Hebb theorized that biological associative memory lies in the synaptic connections between nerve cells.

He thought that the process of learning and memory storage involved changes in the strength with which nerve signals are transmitted across individual synapses.

Page 30: Simple Neural Nets for Pattern Classification

Hebb’s Rule

Hebb’s rule states that the connections between pairs of neurons that are active simultaneously become stronger through weight changes.

As a learning rule, it states that the synaptic weight change is proportional to the product of the synaptic activities of both the sending and the receiving cells:

Page 31: Simple Neural Nets for Pattern Classification

Hebb’s Rule

Δwij = α · xi · yj

where α is the learning rate, xi is the input, and yj is the output of the receiving neuron.

Page 32: Simple Neural Nets for Pattern Classification

Algorithm: Hebb net (supervised) learning algorithm

Step 0. Initialize weights and bias.
Step 1. For each training sample s:t, do steps 2–4.
Step 2. Set activations of input units: xi = si.
Step 3. Set activation of the output unit: y = t.
Step 4. Update weights and bias:
wi(new) = wi(old) + xi · y
b(new) = b(old) + y
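The steps above can be transcribed directly into Python as a minimal sketch (the function name and the zero initialization in Step 0 are our own choices):

```python
# One pass of Hebb learning over a list of (input vector, target) pairs.

def hebb_train(samples):
    """samples: list of (s, t) pairs, s a tuple of inputs, t the target.
    Returns (weights, bias) after one pass of Hebb updates."""
    n = len(samples[0][0])
    w, b = [0] * n, 0                              # Step 0: initialize
    for s, t in samples:                           # Step 1: for each s:t
        x, y = s, t                                # Steps 2-3: activations
        w = [wi + xi * y for wi, xi in zip(w, x)]  # Step 4: w += x*y
        b = b + y                                  #         b += y
    return w, b

# Bipolar AND (the representation used in example 2.7 later on):
w, b = hebb_train([((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)])
print(w, b)  # [2, 2] -2
```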

Page 33: Simple Neural Nets for Pattern Classification

Application

Bias types of input are not explicitly used in the original formulation of Hebb learning.

Logic functions. Example: a Hebb net for the AND function, with binary inputs and targets:

input (x1, x2, 1)   target y
(1, 1, 1)           1
(1, 0, 1)           0
(0, 1, 1)           0
(0, 0, 1)           0

(The third input component, fixed at 1, feeds the bias unit.)

Page 34: Simple Neural Nets for Pattern Classification

Logic functions

For each training input:target pair, the weight change is the product of the input vector and the target value, i.e.

Δw1 = x1 · t,  Δw2 = x2 · t,  Δb = t

The new weights are the sum of the previous weights and the weight change. Only one iteration through the training vectors is required. The weight update for the first input is as follows:

input (x1, x2, 1)   target   weight changes (Δw1, Δw2, Δb)   weights (w1, w2, b)
                                                            (0, 0, 0)
(1, 1, 1)           1        (1, 1, 1)                      (1, 1, 1)

Page 35: Simple Neural Nets for Pattern Classification

Logic functions

The separating line becomes x2 = −x1 − 1.

The graph presented in figure 2.7 shows that the response of the net will now be correct for the first input pattern. Presenting the second, third, and fourth training inputs shows that, because the target value is 0, no learning occurs. Thus, using binary target values prevents the net from learning any pattern for which the target is "off":
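The failure described above can be seen arithmetically: any pair whose target is 0 contributes a zero Hebb update. A minimal sketch (the helper name is our own):

```python
# Hebb weight change for one training pair with binary target t:
# (dw1, dw2, db) = (x1*t, x2*t, t). When t == 0, every change is 0.

def hebb_update(x, t):
    return [xi * t for xi in x] + [t]   # (dw1, dw2, db)

print(hebb_update((1, 0), 0))  # [0, 0, 0] -- no learning when target is 0
print(hebb_update((1, 1), 1))  # [1, 1, 1] -- only the target-1 pair learns
```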

Page 36: Simple Neural Nets for Pattern Classification

Figure 2.7: decision boundary for binary AND function using Hebb rule after first training pair

[Figure: the line x2 = −x1 − 1, with the point (1,1) marked "+" and the other three input points marked "0"]

Page 37: Simple Neural Nets for Pattern Classification

Logic functions

input (x1, x2, 1)   target   weight changes (Δw1, Δw2, Δb)   weights (w1, w2, b)
(1, 0, 1)           0        (0, 0, 0)                      (1, 1, 1)
(0, 1, 1)           0        (0, 0, 0)                      (1, 1, 1)
(0, 0, 1)           0        (0, 0, 0)                      (1, 1, 1)

Page 38: Simple Neural Nets for Pattern Classification

Logic functions

Example 2.6: a Hebb net for the AND function, with binary inputs and bipolar targets:

input (x1, x2, 1)   target
(1, 1, 1)           1
(1, 0, 1)           -1
(0, 1, 1)           -1
(0, 0, 1)           -1

Page 39: Simple Neural Nets for Pattern Classification

Logic functions

Presenting the first input, including a value of 1 for the third component, yields the following:

input (x1, x2, 1)   target   weight changes (Δw1, Δw2, Δb)   weights (w1, w2, b)
                                                            (0, 0, 0)
(1, 1, 1)           1        (1, 1, 1)                      (1, 1, 1)

The separating line becomes x2 = −x1 − 1.

Page 40: Simple Neural Nets for Pattern Classification

Logic functions

Figure 2.8 shows that the response of the net will now be correct for the first input pattern.

Presenting the second, third, and fourth training patterns shows that learning continues for each of these patterns (since the target value is now −1, rather than 0 as in example 2.5).

Page 41: Simple Neural Nets for Pattern Classification

Figure 2.8: decision boundary for AND function using Hebb rule after first training pair (binary inputs, bipolar targets)

[Figure: the point (1,1) marked "+" and the other three input points marked "−"]

Page 42: Simple Neural Nets for Pattern Classification

Logic functions

input (x1, x2, 1)   target   weight changes (Δw1, Δw2, Δb)   weights (w1, w2, b)
(1, 0, 1)           -1       (-1, 0, -1)                    (0, 1, 0)
(0, 1, 1)           -1       (0, -1, -1)                    (0, 0, -1)
(0, 0, 1)           -1       (0, 0, -1)                     (0, 0, -2)

However, these weights do not provide the correct response for the first input pattern.

Page 43: Simple Neural Nets for Pattern Classification

Logic functions

The choice of training patterns can play a significant role in determining which problems can be solved using the Hebb rule.

The next example shows that the AND function can be solved if we modify its representation to express the inputs, as well as the targets, in bipolar form.

Page 44: Simple Neural Nets for Pattern Classification

Logic functions

Bipolar representation of the inputs and targets allows modification of a weight when the input unit and the target value are both "on" at the same time, and when they are both "off" at the same time.

The algorithm is the same as that just given, except that now all units will learn whenever there is an error in the output.

Page 45: Simple Neural Nets for Pattern Classification

Logic functions

Example 2.7: a Hebb net for the AND function, with bipolar inputs and targets:

input (x1, x2, 1)   target
(1, 1, 1)           1
(1, -1, 1)          -1
(-1, 1, 1)          -1
(-1, -1, 1)         -1

Page 46: Simple Neural Nets for Pattern Classification

Logic functions

Presenting the first input, including a value of 1 for the third component, yields the following:

input (x1, x2, 1)   target   weight changes (Δw1, Δw2, Δb)   weights (w1, w2, b)
                                                            (0, 0, 0)
(1, 1, 1)           1        (1, 1, 1)                      (1, 1, 1)

The separating line becomes x2 = −x1 − 1.

Page 47: Simple Neural Nets for Pattern Classification

Logic functions

Figure 2.9 shows that the response of the net will now be correct for the first input point (and also, by the way, for the input point (−1,−1)). Presenting the second input vector and target results in the following situation:

Page 48: Simple Neural Nets for Pattern Classification

Figure 2.9: decision boundary for AND function using Hebb rule after first training pair (bipolar inputs, bipolar targets)

[Figure: the line x2 = −x1 − 1, with (1,1) marked "+" and the other three input points marked "−"]

Page 49: Simple Neural Nets for Pattern Classification

Logic functions

input (x1, x2, 1)   target   weight changes (Δw1, Δw2, Δb)   weights (w1, w2, b)
                                                            (1, 1, 1)
(1, -1, 1)          -1       (-1, 1, -1)                    (0, 2, 0)

The separating line becomes x2 = 0.

Page 50: Simple Neural Nets for Pattern Classification

Logic functions

Figure 2.10 shows that the response of the net will now be correct for the first two input points, (1,1) and (1,−1), and also for the input point (−1,−1).

Presenting the third input vector and target yields the following:

Page 51: Simple Neural Nets for Pattern Classification

Figure 2.10: decision boundary for bipolar AND function using Hebb rule after second training pattern (boundary is the x1 axis)

[Figure: the x1 axis as boundary, with (1,1) marked "+" and the other three input points marked "−"]

Page 52: Simple Neural Nets for Pattern Classification

Logic functions

input (x1, x2, 1)   target   weight changes (Δw1, Δw2, Δb)   weights (w1, w2, b)
                                                            (0, 2, 0)
(-1, 1, 1)          -1       (1, -1, -1)                    (1, 1, -1)

The separating line becomes x2 = −x1 + 1.

Page 53: Simple Neural Nets for Pattern Classification

Logic functions

Figure 2.11 shows that the response of the net will now be correct for the first three input points (and also, by the way, for the input point (−1,−1)). Presenting the last point, we obtain the following:

Page 54: Simple Neural Nets for Pattern Classification

Figure 2.11: decision boundary for bipolar AND function using Hebb rule after third training pattern

[Figure: the line x2 = −x1 + 1, with (1,1) marked "+" and the other three input points marked "−"]

Page 55: Simple Neural Nets for Pattern Classification

Logic functions

input (x1, x2, 1)   target   weight changes (Δw1, Δw2, Δb)   weights (w1, w2, b)
                                                            (1, 1, -1)
(-1, -1, 1)         -1       (1, 1, -1)                     (2, 2, -2)

Even though the weights have changed, the separating line is still x2 = −x1 + 1, so the graph of the decision regions (the positive response and the negative response) remains as in figure 2.11.
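The four updates of example 2.7 can be replayed in a short sketch that tracks the weights (w1, w2, b) after each training pair (variable names are our own); the trajectory matches the tables above.

```python
# Replay Hebb learning for the bipolar AND function and record the
# weights (w1, w2, b) after each of the four training pairs.

w1 = w2 = b = 0
history = []
for (x1, x2, t) in [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, -1)]:
    w1, w2, b = w1 + x1 * t, w2 + x2 * t, b + t   # Hebb update
    history.append((w1, w2, b))
print(history)  # [(1, 1, 1), (0, 2, 0), (1, 1, -1), (2, 2, -2)]
```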

Page 56: Simple Neural Nets for Pattern Classification

Character recognition

Example 2.8: a Hebb net to classify two-dimensional input patterns (representing letters).

A simple example of using the Hebb rule for character recognition involves training the net to distinguish between the pattern "X" and the pattern "O". The patterns can be represented as:

# . . . #        . # # # .
. # . # .        # . . . #
. . # . .        # . . . #
. # . # .        # . . . #
# . . . #        . # # # .

Pattern 1        Pattern 2

Page 57: Simple Neural Nets for Pattern Classification

Character recognition

To treat this example as a pattern classification problem with one output class, we will designate that class "X" and take the pattern "O" to be an example of output that is not "X".

Page 58: Simple Neural Nets for Pattern Classification

Character recognition

The first thing we need to do is convert the patterns to input vectors. That is easy to do by assigning each # the value 1 and each "." the value −1. To convert from the two-dimensional pattern to an input vector, we simply concatenate the rows, i.e., the second row of the pattern comes after the first row, the third row follows, etc.

Page 59: Simple Neural Nets for Pattern Classification

Character recognition

Pattern 1 then becomes

1 -1 -1 -1 1, -1 1 -1 1 -1, -1 -1 1 -1 -1, -1 1 -1 1 -1, 1 -1 -1 -1 1

and pattern 2 becomes

-1 1 1 1 -1, 1 -1 -1 -1 1, 1 -1 -1 -1 1, 1 -1 -1 -1 1, -1 1 1 1 -1

Page 60: Simple Neural Nets for Pattern Classification

Character recognition

The correct response for the first pattern is "on", or +1, so the weights after presenting the first pattern are simply the input pattern. The bias weight after presenting this pattern is +1.

The correct response for the second pattern is "off", or −1, so the weight change when the second pattern is presented is the input pattern multiplied by −1:

1 -1 -1 -1 1, -1 1 1 1 -1, -1 1 1 1 -1, -1 1 1 1 -1, 1 -1 -1 -1 1

In addition, the weight change for the bias weight is −1.

Page 61: Simple Neural Nets for Pattern Classification

Character recognition

Adding the weight change to the weights representing the first pattern gives the final weights:

2 -2 -2 -2 2, -2 2 0 2 -2, -2 0 2 0 -2, -2 2 0 2 -2, 2 -2 -2 -2 2

The bias weight is 0.

We compute the output of the net for each of the training patterns. The net input is the dot product of the input pattern with the weight vector. For the first training vector, the net input is 42, so the response is positive, as desired. For the second training pattern, the net input is −42, so the response is clearly negative, also as desired.
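The whole computation can be reproduced in a few lines (variable names are our own): the weights are the sum of each pattern times its target, and the net inputs come out to 42 and −42 as stated above.

```python
# Hebb weights for X (target +1) vs O (target -1), flattened 5x5 patterns.

X = [1, -1, -1, -1, 1,  -1, 1, -1, 1, -1,  -1, -1, 1, -1, -1,
     -1, 1, -1, 1, -1,  1, -1, -1, -1, 1]
O = [-1, 1, 1, 1, -1,  1, -1, -1, -1, 1,  1, -1, -1, -1, 1,
     1, -1, -1, -1, 1,  -1, 1, 1, 1, -1]

# w_i = X_i * (+1) + O_i * (-1); bias = (+1) + (-1) = 0.
w = [xi - oi for xi, oi in zip(X, O)]
b = 0

def net(p):
    """Net input: bias plus dot product of pattern p with the weights."""
    return b + sum(pi * wi for pi, wi in zip(p, w))

print(net(X), net(O))  # 42 -42
```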

Page 62: Simple Neural Nets for Pattern Classification

Character recognition

However, the net can also give reasonable responses to input that is similar, but not identical, to the training patterns.

There are two types of changes that can be made to one of the input patterns that will generate a new input pattern for which it is reasonable to expect a response.

Page 63: Simple Neural Nets for Pattern Classification

Character recognition

The first type of change is usually referred to as "mistakes in the data". In this case, one or more components of the input vector (corresponding to one or more pixels in the original pattern) have had their sign reversed, changing a 1 to a −1, or vice versa.

The second type of change is called "missing data". In this situation, one or more of the components of the input vector have the value 0, rather than 1 or −1.

In general, a net can handle more missing components than wrong components; in other words, a component that says "I don't know" is easier for the net to deal with than one that gives a wrong answer.
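Both kinds of degradation can be tried against the X/O weights derived earlier in the slides (hardcoded below; variable names are our own). Flipping one pixel of "X" costs twice as much net input as zeroing it, which illustrates why missing data is the milder corruption.

```python
# Response of the trained X/O net to degraded versions of the "X"
# pattern: a "mistake" flips a component's sign; "missing data" sets
# it to 0. Weights are those computed on the earlier slide; bias is 0.

W = [2, -2, -2, -2, 2,  -2, 2, 0, 2, -2,  -2, 0, 2, 0, -2,
     -2, 2, 0, 2, -2,  2, -2, -2, -2, 2]
X = [1, -1, -1, -1, 1,  -1, 1, -1, 1, -1,  -1, -1, 1, -1, -1,
     -1, 1, -1, 1, -1,  1, -1, -1, -1, 1]

def net(p):
    return sum(pi * wi for pi, wi in zip(p, W))

mistakes = list(X); mistakes[0] = -mistakes[0]   # one pixel reversed
missing  = list(X); missing[0] = 0               # one pixel unknown

print(net(X), net(mistakes), net(missing))  # 42 38 40
```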