
DOI: 10.1142/S021821301100022X

International Journal on Artificial Intelligence Tools
Vol. 20, No. 3 (2011) 425–455
© World Scientific Publishing Company

A CUSTOMIZABLE FUZZY SYSTEM FOR OFFLINE

HANDWRITTEN CHARACTER RECOGNITION

RUKSHAN BATUWITA

Oxford University, Computing Laboratory 

Wolfson Building, Parks Road, Oxford, OX1 3QD, UK 

[email protected] 

VASILE PALADE

Oxford University, Computing Laboratory 

Wolfson Building, Parks Road, Oxford, OX1 3QD, UK 

[email protected] 

DHARMAPRIYA C. BANDARA

Australian Center for Field Robotics, J04, Ross Street Building 

University of Sydney, Sydney NSW 2006, Australia 

[email protected] 

Received 29 April 2009

Accepted 28 September 2010


Automated offline handwritten character recognition involves the development of computational

methods that can generate descriptions of the handwritten objects from scanned digital images.

This is a challenging computational task, due to the vast impreciseness associated with the handwritten patterns of different individuals. Therefore, to be successful, any solution should

employ techniques that can effectively handle this imprecise knowledge. Fuzzy Logic, with its

ability to deal with the impreciseness arising due to lack of knowledge, could be successfully used to

develop automated systems for handwritten character recognition. This paper presents an approach

towards the development of a customizable fuzzy system for offline handwritten character

recognition.

Keywords: Fuzzy systems; character recognition; character segmentation; adaptability.

1. Introduction

Computerized character recognition has been an intensive and challenging research area in computer vision for many years. Such automated character recognition systems

provide a solution for processing large volumes of data automatically. Automated

character recognition could be broadly categorized into two sub-fields: Optical Character

Recognition (OCR) and Handwritten Character Recognition (HCR). OCR deals with the


recognition of machine printed characters, while HCR deals with the recognition of 

handwritten characters. HCR can be further divided into two sub-fields, namely, online

HCR and offline HCR. Online HCR involves the identification of character patterns

while they are being written, as is the case with a personal digital assistant. This deals

with the processing of time-ordered sequences of data, pen up and down movements, and

signals from pressure-sensitive pads that can record the pressure and velocity of the

pen.1,2 On the other hand, offline HCR, which we discuss in this paper, involves the

recognition of already written character patterns in scanned digital images, and deals with techniques from computational image processing. Offline HCR has numerous

applications, such as address and zip code recognition, writer identification, automatic

check clearing, airline ticket and passport reading, etc.

HCR is a complex computational problem mainly due to the vast impreciseness

associated with different handwriting styles of different individuals. Hence, the

conventional computational and image processing techniques alone are not adequate to

develop a successful automated solution to this problem. In the last two decades, various

computational intelligence/machine learning techniques have been applied to develop

systems for both online and offline HCR. These methods include Artificial Neural Networks (ANNs),1,3–6 Support Vector Machines,7,8 Hidden Markov Models (HMMs),9,10 Gaussian Mixture Models,11 Fuzzy Logic,1,2,12–16 Hybrid methods,17–19 etc.

As stated in Ref. 13, the main characteristics that an HCR system should possess

are:

•  Flexibility — The system should handle the impreciseness associated with a wide range

of character patterns of different individuals.

•  Efficiency — Online HCR systems should be very efficient from a computational

point of view.

•  Customizability (Online Adaptability) — It is not possible to develop a system that

has the complete prior knowledge for recognizing all character patterns written by all

users. Therefore, it should have the capability to learn new user-specific handwritten

patterns online, i.e., a user should be able to customize the system for the user’s

handwriting style.

•  Automatic Learning — In order to provide the customizability feature, the system

should be trained using an automatic learning mechanism.

As humans, we are able to find some similarity between different writing styles of the same character pattern by comparing specific features of it, and are then able to recognize

the character correctly. This similarity can be treated as a “membership value” that lies in

the range of [0%–100%]. The implementation of such a human way of reasoning as

closely as possible into computational models would result in very flexible automated

character recognition systems. In order to do this, we should employ a technique that

represents the impreciseness of different character patterns in terms of precise numeric

values for computation. Fuzzy Logic, which deals with the impreciseness arising due to


the lack of knowledge (incomplete knowledge), could be used to handle this kind of 

vague knowledge representation and reasoning problems.

In this paper, we present an approach to the development of a fuzzy system

for offline HCR, which possesses the above mentioned four characteristics. Since

the knowledge of a fuzzy system is usually represented as a set of linguistic fuzzy

rules, the system would be a flexible one. The computational efficiency of the system

would be high, since the mathematical calculations involved in a fuzzy system are limited

to basic mathematical operations such as addition, subtraction, maximum, minimum, etc. The requirements of online adaptability and automatic training were achieved by

using a simple automatic rule base generation approach, which has been proposed in

Ref. 16.

The organization of this paper is as follows: Section 2 presents an overview of 

the complete system. Sections 3, 4, and 5 describe the initial preprocessing steps

of the character image, namely, binarization, skeletonization, and individual character

isolation, respectively. A novel individual character segmentation algorithm is presented

in detail in Section 6. Section 7 presents the calculations for fuzzy features extraction,

while Section 8 explains the training and inference mechanisms of the system. Results

and discussion are presented in Section 9, and the conclusions and future research

directions are discussed in Section 10.

2. The Proposed System

The block diagram of the proposed system is presented in Fig. 1.

Fig. 1. Block diagram of the proposed system.


Before calculating the fuzzy features of each character pattern in a digitally scanned

image, the character image should be preprocessed in order to make the feature extraction

more accurate. In the proposed system, an input character image is subjected to four

preprocessing phases, namely, Binarization, Skeletonization, Individual Characters

Isolation, and Individual Character Segmentation. Then, a set of fuzzy features are

extracted from all the segments belonging to each character pattern. Based on these

features, the training step or the classification step is performed. The next sections

describe these different steps of the proposed system in detail.

3. Binarization

Any scanned digital image is usually represented as a collection of pixels having

intensities that vary in the range [0%–100%], which can be represented by integer values

from 0 to 255. As a first step, the image should undergo a binarization process to avoid

information loss and/or noise that could affect the later processing phases. In the

binarization process, if the intensity of a pixel is less than a particular threshold value, it

is set to black (0), otherwise to white (255). The threshold value may change according to the quality of the scanner being used. In this work, a Hewlett Packard (HP 3670) scanner

with 200 DPI resolution was used, and the image was scanned in jpeg format.

Accordingly, the threshold value was set to 200. A binarized image of a set of 

handwritten characters is shown in Fig. 2.

Fig. 2. A binarized image of a set of handwritten characters.
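The thresholding rule above can be sketched as follows. This is a minimal illustration, assuming the grayscale image is given as a 2-D list of integer intensities; the threshold of 200 is the value reported above for this scanner setup.

```python
# Binarization sketch: pixels darker than the threshold become black (0),
# all others white (255). The threshold 200 is the value used in this work
# for the HP 3670 scanner at 200 DPI.
BLACK, WHITE, THRESHOLD = 0, 255, 200

def binarize(gray):
    """Map each pixel to black if below the threshold, else to white."""
    return [[BLACK if px < THRESHOLD else WHITE for px in row] for row in gray]

print(binarize([[30, 210], [190, 250]]))  # -> [[0, 255], [0, 255]]
```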

4. Skeletonization

The binarized image is then subjected to the skeletonization process in which the

skeletons of the individual character patterns are obtained. Character skeletons are clearer representations than the original character patterns themselves, and hence can be used for more accurate feature extraction. We adopted the thinning algorithm presented

in Ref. 22 for this task after comparing its results with other algorithms presented in

the literature.20,21 The results of the skeletonization process obtained by this algorithm22 

are depicted in Fig. 3.


Fig. 3. Skeletonized image of handwritten characters.
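As an illustration of what the skeletonization step computes, a classic thinning procedure (Zhang–Suen) reduces a binary pattern to a roughly one-pixel-wide skeleton. This is shown only to make the step concrete; it is not necessarily the algorithm of Ref. 22 adopted in this work.

```python
# Zhang-Suen thinning sketch (illustration only, not the algorithm of
# Ref. 22). Input: binary image as a 2-D list, 1 = ink. Output: thinned copy.
def zhang_suen(img):
    img = [row[:] for row in img]          # work on a copy
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9, clockwise, starting from the pixel directly above
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):                # the two Zhang-Suen sub-iterations
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    n = neighbours(y, x)
                    b = sum(n)             # number of ink neighbours
                    # number of 0 -> 1 transitions around the pixel
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    ok = (p2*p4*p6 == 0 and p4*p6*p8 == 0) if step == 0 \
                        else (p2*p4*p8 == 0 and p2*p6*p8 == 0)
                    if 2 <= b <= 6 and a == 1 and ok:
                        to_clear.append((y, x))
            for y, x in to_clear:
                img[y][x] = 0
                changed = True
    return img
```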

5. Isolation of Individual Character Skeletons into Skeleton Areas

The skeletonized character image is then processed for individual character isolation.

Here, we used a simple method to isolate individual character skeletons assuming

that neighboring characters are not connected, but that every single character skeleton is fully connected inside. In this method, first, the rows of the character skeletons were

isolated assuming that the vertical distance between two rows of character skeletons was

0–50 pixels. Then, the character skeletons in each row were isolated assuming that

the horizontal distance between two separate character skeletons in a single row was

0–200 pixels. This process isolates the character skeletons into their skeleton areas.

Definition 1. A skeleton area is a rectangular area in the image which contains a single

character skeleton. A skeleton area is represented in Fig. 4.

Fig. 4. Representation of a skeleton area of the character skeleton ‘B’.
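The row-then-column isolation can be sketched with projection profiles. This simplified sketch splits on every fully blank row or column band, whereas the method above additionally bounds the gap sizes (0–50 pixels vertically, 0–200 pixels horizontally); the function names are illustrative.

```python
def runs(flags):
    """Return (start, end) index pairs of consecutive True entries."""
    spans, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            spans.append((start, i - 1))
            start = None
    if start is not None:
        spans.append((start, len(flags) - 1))
    return spans

def skeleton_areas(img):
    """Split a binary image (1 = ink) into per-character skeleton areas,
    first by blank row bands, then by blank column bands inside each row."""
    row_ink = [any(row) for row in img]
    areas = []
    for y0, y1 in runs(row_ink):
        band = img[y0:y1 + 1]
        col_ink = [any(r[x] for r in band) for x in range(len(img[0]))]
        for x0, x1 in runs(col_ink):
            areas.append((x0, y0, x1, y1))   # (Xmin, Ymin, Xmax, Ymax)
    return areas
```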

6. Individual Character Segmentation

The most tedious task in this work was the segmentation of individual character

skeletons into a set of meaningful segments. Individual character segmentation in

online HCR can benefit from time-ordered sequences of data and pen up and down

movements.2

However, this information is not available for offline individual character segmentation. Therefore, a novel segmentation algorithm for the segmentation


The task of segmenting every character pattern from a particular character set into a set of meaningful segments is made harder by the various writing styles.

6.2. The segmentation algorithm

This section outlines the main routine of the segmentation algorithm and defines a few key

words used throughout this section to describe the algorithm in detail.

Definition 2. A starter point  is a pixel point on the character skeleton, with which the

traversal through the skeleton could be started. Starter points are of two types: major starter 

 points and minor starter points. 

Definition 3. A major starter point  is a starter point which is identified before starting

the traversal through the skeleton. The identification of major starter points is described

in Section 6.3.

Definition 4. A minor starter point  is a starter point  which is identified during the

traversal through the skeleton. The identification of minor starter points is described in

Section 6.4.1.

Two major data structures are used in this algorithm, namely, the Point 

which holds the X and Y coordinate values of a pixel point, and the Segment which is

a Point  array. The main routine of the segmentation algorithm, Segmentation(SA) ,

is presented in Fig. 6. SA represents an input skeleton area of a character to be

segmented. The variables major_starters, m_segments and all_segments represent a

queue of  major starter points, an array of segments identified by traversing through a

particular major starter point, and an array of all segments identified, respectively.

Fig. 6. Main routine of the segmentation algorithm.


The algorithm starts by identifying all the major starter points in the input skeleton area

(line 2). Then the skeleton is traversed starting from each major starter point  ( MJSP). 

While traversing, the segments are identified (lines 3–6). The traversal routine is

described in Section 6.4.
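Based on this description, the main routine of Fig. 6 can be reconstructed as the following sketch. Here `find_major_starters` and `traverse` are hypothetical stand-ins for the routines of Sections 6.3 and 6.4, passed in as functions; the visited-point bookkeeping that prevents re-traversal is assumed to live inside `traverse`.

```python
# Reconstruction of Segmentation(SA): enqueue the major starter points,
# then traverse the skeleton from each one, accumulating the identified
# segments into all_segments.
from collections import deque

def segmentation(sa, find_major_starters, traverse):
    major_starters = deque(find_major_starters(sa))   # line 2 of Fig. 6
    all_segments = []
    while major_starters:                             # lines 3-6 of Fig. 6
        mjsp = major_starters.popleft()
        m_segments = traverse(sa, mjsp)
        all_segments.extend(m_segments)
    return all_segments
```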

6.3.  Identification of major starter points

In order to identify the major starter points, two techniques can be used. In both of these techniques, the pixels in the given skeleton area are processed row-wise starting from the

pixel (Xmin, Ymin) (Fig. 4). In Technique 1, all the pixel points on the character skeleton

having only one neighboring pixel are selected as major starter points. Here, it was

assumed that every character skeleton was one pixel thick. As an example, the

skeleton pattern ‘B’ depicted in Fig. 7(i) has three major starter points, namely, ‘a’, ‘b’

and ‘c’, which can be identified using Technique 1.

The major starter points of some skeleton patterns (such as those of the character ‘O’ and the digit zero) cannot be obtained by Technique 1, since all the points in such a one-pixel-thick, closed, ‘O’-like curve (Fig. 7(ii)) would have at least two neighboring pixels. In such a case, the first pixel found on the character skeleton is taken as its major starter point. Most of the time, this one and only major starter point is the topmost pixel of the skeleton (the pixel point ‘a’ in Fig. 7(ii)), since the skeleton area is processed row-wise from top to bottom.

Fig. 7. Major starter point of character skeletons ‘B’ and ‘O’.
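Both techniques can be sketched together, assuming a binary skeleton image (1 = skeleton pixel). Technique 1 collects pixels with exactly one 8-connected neighbor; the fallback for closed curves returns the first ink pixel met in the row-wise scan.

```python
def major_starters(img):
    """Return major starter points as (x, y) pairs.
    Technique 1: ink pixels with exactly one 8-connected ink neighbour.
    Fallback (closed curves such as 'O'): the first ink pixel found when
    scanning row-wise from the top."""
    h, w = len(img), len(img[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    starters, first_ink = [], None
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            if first_ink is None:
                first_ink = (x, y)
            n = sum(img[y + dy][x + dx]
                    for dy, dx in offsets
                    if 0 <= y + dy < h and 0 <= x + dx < w)
            if n == 1:
                starters.append((x, y))
    return starters if starters else ([first_ink] if first_ink else [])
```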

6.4. Traversal through the character skeleton

Definition 5. The current traversal direction is the direction from the current pixel to the

next pixel to be visited during the traversal. The determination of the current traversal direction is described in Section 6.4.2.

Definition 6. The written direction of a sequence of pixels is the direction to which they

have been written. The calculation of written direction is explained in Section 6.5.

The traversal routine, traverse( MJSP), is presented in Fig. 8. The variable

current_segment  refers to a Segment  to store the points of the current segment during

the traversal, current_direction is a string to store the current traversal direction,


Function: traverse(MJSP)

1.  Vars: minor_starters = empty, current_segment = empty, current_direction = empty,

current_point = empty, next_point = empty, neighbors = empty, segments = empty

2.  Enqueue MJSP to minor_starters

3.  While (there are more points in the minor_starters or current_point is not empty) Do 

4.  If (current_point = empty) then

5.  current_point = Dequeue from minor_starters

6.  Initialize the current segment

7.  If (current_point is unvisited) then
8.  add current_point to current_segment

9.  mark the current_point as visited

10.  End If 

11.  End If 

12.  If (unvisited adjacent neighbor of current_point exists) then

13.  neighbors = get all unvisited adjacent neighbors of current_point.

14.  If (no. of points in current_segment > 1) then

15.  If (an unvisited neighbor in the current_direction exist) then

16.  next_point = get that neighbor in the current_direction.

17.  enqueue all other unvisited neighbors into the minor_starters queue (Section 6.4.1)

18.  current_point = next_point

19.  current_segment.add(current_point)

20.  mark current_point as visited

21.  Else (i.e., the traversal direction changes)

22.  tmp_segment = get next 5 pixels in the path (Fig. 9)

23.  If (IsAbruptChange(current_segment, tmp_segment)) then

24.  segments.add(current_segment)

25.  current_segment = tmp_segment

26.  Else  // the traversal can continue with the same segment.

27.  add all the points in the tmp_segment to current_segment

28.  End If 

29.  End If 

30.  End If 

31.  Else  (number of points in the current_segment is 1)

32.  next_point = choose any neighbor of the current_point (Section 6.4.4)

33.  current_direction = get the current traversal direction (Section 6.4.2)

34.  current_point = next_point

35.  current_segment.add(current_point)
36.  mark current_point as visited

37.  End If 

38.  Else (if there are no unvisited neighbors to visit)

39.  segments.add(current_segment)

40.  current_point = empty

41.  End If 

42.  End While

43.  return segments

Fig. 8. The traversal routine of the segmentation algorithm.


current_point  refers to the current pixel point, next_point  refers to the next pixel to

be visited, neighbors is a Point  array to hold all the unvisited neighboring points

of the current pixel point, and segments is a Segment  array to store the identified

segments.

After finding all the major starter points, the algorithm starts traversing through the

character skeleton, starting from the major starter point  which has been found first.

During this traversal the segments are identified in the traversal path. The minor starter 

 points are also identified at each junction of the skeleton, and queued to a different queue, which is hereafter referred to as minor_starters (the identification of minor starter points is

described in Section 6.4.1). Once the traversal reaches an end point, which is a pixel

point with no neighboring pixel left to visit, the focus is shifted to the identified

minor starter points in the minor_starters queue. Then the algorithm starts traversing the

unvisited paths of the skeleton by starting with each minor starter point  in the

minor_starters queue. During these traversals, the algorithm also segments the path being

visited into meaningful segments. The segmentation decision is based on the abrupt

change in the written direction, which is inspired by the online character segmentation

algorithm presented in Ref. 2. That is, as long as the current traversal direction remains

unchanged (if an unvisited neighboring pixel in the current traversal direction can be

found), the algorithm considers the path being visited as belonging to the same segment. If 

the current traversal direction changes, then the algorithm checks for an abrupt change in

written direction. If there is an abrupt change, from that point onwards a new segment is

started. Otherwise, the traversal continues with the same segment.

This traversal routine is repeated with all the unvisited major starter points in the

major_starters queue until all the unvisited paths in the skeleton area are visited. The

risk of revisiting an already visited path during traversals is eliminated by memorizing all the visited pixel points. Therefore, it is guaranteed that each segment is identified only

once.

It was found that in order to detect a major change in written direction, the written

direction of a sequence of at least five pixels should be examined. The next five pixel

points in the path can be extracted as shown in Fig. 9.

6.4.1.  Identification of minor starter points

Let us consider the traversal through the character skeleton ‘B’ in Fig. 10. The traversal starts with the major starter point ‘a’ and continues to the junction ‘J1’. At that junction,

the current pixel point has two unvisited neighbors. Since there is a neighboring pixel

in the current traversal direction, the algorithm chooses that pixel (n1) of the two

neighboring pixels as the next pixel point to visit. It is clear that the other neighboring

pixel (n2) is a starter point  of another path in the skeleton. Therefore, the point n2 is

identified as a minor starter point  and inserted into the minor_starters queue for later

consideration. In every junction in the skeleton, zero or more unvisited minor starter 

 points are identified.


1.  tmp_segment = empty; array of Point.

2.  While (there are more unvisited adjacent neighbors of current_point and the size of 

tmp_segment is < 5) Do 

3.  neighbors = get all unvisited eight adjacent neighbors of the current_point

4.  If (there exists an unvisited neighbor x in the current_direction )

5.  next_point = get that neighbor x  (Section 6.4.2)

6.  Else

7.  If (there exists an unvisited neighbor y in the closest 

traversal direction to the current_direction) then
8.  next_point = get the neighbor y (Section 6.4.3)

9.  Else

10.  next_point = get a neighbor in any direction (Section 6.4.4)

11.  End if 

12.  current_direction = get the new traversal direction

13.  End if.

14.  enqueue other neighbors into the minor_starters queue (Section 6.4.1)

15.  current_point = next_point

16.  mark current_point as visited

17.  tmp_segment.add(current_point)
18.  End While

Fig. 9. Getting the next five pixel points in the traversal path.

Fig. 10. (color online) The identification of minor starter points (already visited pixels are depicted in black and unvisited pixels in gray).

6.4.2.  Determination of the current traversal direction

Let us consider all the eight adjacent neighbors of the current pixel (i, j) given in Fig. 11. The

current traversal direction can be defined as described in Table 2 according to the

neighboring pixel which is chosen as the next pixel to be visited.

6.4.3.  Determination of the closest traversal direction

When the current traversal direction is given, the closest traversal directions to the

current traversal direction can be determined as described in Table 3.


Fig. 11. Eight adjacent neighbors of the current pixel (i,j).

Table 2. Determination of the current traversal direction.

Neighboring Pixel Chosen as the Next      Current Traversal
Pixel Point to be Visited from (i, j)     Direction

(i, j–1)                                  U (UP)
(i+1, j)                                  R (RIGHT)
(i, j+1)                                  D (DOWN)
(i–1, j)                                  L (LEFT)
(i+1, j–1)                                RU (RIGHT_UP)
(i+1, j+1)                                RD (RIGHT_DOWN)
(i–1, j+1)                                LD (LEFT_DOWN)
(i–1, j–1)                                LU (LEFT_UP)

Table 3. Determination of the closest traversal direction to the

current traversal direction.

Current Traversal Direction Closest Traversal Directions

U RU, LU

R RU, RD

D RD, LD

L LD, LU

RU U, R

RD R, D

LD D, L

LU L, U

In Table 3, for each current traversal direction, two closest traversal directions are

mentioned. To find one neighbor in the closest traversal direction, the algorithm first

checks the closest traversal direction that would be found first when considering the

directions clockwise starting from the direction U (Fig. 12). As an example, if the current 

traversal direction is U, the algorithm first checks whether there is an unvisited neighbor

in the direction RU. If it fails to find such a neighbor, then it checks whether there is an

unvisited neighbor in the direction LU.


Fig. 12. Consideration of the directions clockwise, starting from the direction UP, from the current pixel point (i, j).

6.4.4. Choosing an unvisited neighbor in any direction

To choose an unvisited neighbor in any direction from the current point, the unvisited neighbors are processed starting with the direction U and then clockwise (Fig. 12).

That is, first, the algorithm checks whether there is an unvisited neighbor in the direction

U to the current point (i, j). If it fails to find such a neighbor, then it checks an unvisited

neighbor in the RU direction to the current point and so on. Therefore, the directions

are considered in the order: U, RU, R, RD, D, LD, L, and finally LU.
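Tables 2 and 3 and the clockwise scan of Fig. 12 amount to the following lookup tables. This is a hedged sketch: offsets are (dx, dy) relative to the current pixel (i, j), with y growing downwards as in the image, and `closest_first` is an illustrative helper implementing the clockwise tie-break of Section 6.4.3.

```python
# Table 2: (dx, dy) offset of the chosen neighbour -> traversal direction.
DIRECTION = {(0, -1): 'U', (1, 0): 'R', (0, 1): 'D', (-1, 0): 'L',
             (1, -1): 'RU', (1, 1): 'RD', (-1, 1): 'LD', (-1, -1): 'LU'}

# Table 3: current traversal direction -> its two closest directions.
CLOSEST = {'U': ('RU', 'LU'), 'R': ('RU', 'RD'), 'D': ('RD', 'LD'),
           'L': ('LD', 'LU'), 'RU': ('U', 'R'), 'RD': ('R', 'D'),
           'LD': ('D', 'L'), 'LU': ('L', 'U')}

# Directions considered clockwise starting from U (Fig. 12).
CLOCKWISE = ['U', 'RU', 'R', 'RD', 'D', 'LD', 'L', 'LU']

def closest_first(current):
    """Order the two closest directions so that the one met first when
    scanning clockwise from U is tried first (Section 6.4.3)."""
    a, b = CLOSEST[current]
    return (a, b) if CLOCKWISE.index(a) < CLOCKWISE.index(b) else (b, a)
```

For example, with the current direction U, `closest_first('U')` yields RU before LU, matching the example in Section 6.4.3.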

6.5.  Determination of abrupt change in written direction

The written direction of a given sequence of pixel points is equal to the angle between the

x-axis and the straight line connecting the start and end points of the pixel sequence.

Consider the sequence of points in Fig. 13 in which the start point is (xs, ys) and the end

point is (xe, ye).

Fig. 13. The calculation of the written direction.

After calculating the angle Θ as in (1), the written direction can be determined as

described in Table 4.

Θ = tan⁻¹(|(ys – ye)/(xs – xe)|)  (1)


Table 4. Calculation of the written direction.

Condition                    Written Direction

(xs < xe) and (ys <= ye)     Θ
(xs > xe) and (ys <= ye)     180 – Θ
(xs > xe) and (ys > ye)      180 + Θ
(xs < xe) and (ys > ye)      360 – Θ
(xs = xe) and (ys <= ye)     90
(xs = xe) and (ys > ye)      270
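Equation (1) and Table 4 can be transcribed directly as a small function; the angle is returned in degrees.

```python
import math

def written_direction(start, end):
    """Written direction (degrees) of the pixel sequence running from
    `start` = (xs, ys) to `end` = (xe, ye), per Eq. (1) and Table 4."""
    (xs, ys), (xe, ye) = start, end
    if xs == xe:                                   # vertical line cases
        return 90.0 if ys <= ye else 270.0
    theta = math.degrees(math.atan(abs((ys - ye) / (xs - xe))))  # Eq. (1)
    if xs < xe:
        return theta if ys <= ye else 360.0 - theta
    return 180.0 - theta if ys <= ye else 180.0 + theta
```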

Function: IsAbruptChange(current_segment, tmp_segment) 

1.  If (the size of tmp_segment < 5 OR size of current_segment < 5)

2.  return false

3.  Else

4.  prev_written_d = written direction of the last 5 pixel points in the current_segment.

5.  new_written_d = written direction of the tmp_segment.

6.  difference = |prev_written_d – new_written_d|

7.  If (difference > 315)

8.  If (prev_written_d > 315 OR new_written_d > 315)

9.  difference = 360 – difference

10.  End If 

11.  End If 

12.  If (difference > threshold angle)

13.  return true
14.  Else

15.  return false

16.  End If 

17.  End If 

Fig. 14. The routine IsAbruptChange.

The routine  IsAbruptChange is described in Fig. 14. If the change in written

direction, that is, the difference between the previous written direction ( prev_written_d )

and the new written direction (new_written_d), is greater than a particular threshold angle, it is determined to be an abrupt change.
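The routine of Fig. 14 can be transcribed as follows. The `written_direction` argument stands for the calculation of Section 6.5 (passed in as a function for testability), and the 45-degree default is one of the threshold values examined in the experiments below.

```python
def is_abrupt_change(current_segment, tmp_segment, written_direction,
                     threshold_angle=45):
    """Transcription of IsAbruptChange (Fig. 14). Segments are sequences
    of pixel points; written_direction(start, end) returns an angle in
    degrees as computed in Section 6.5."""
    if len(tmp_segment) < 5 or len(current_segment) < 5:
        return False
    prev_d = written_direction(current_segment[-5], current_segment[-1])
    new_d = written_direction(tmp_segment[0], tmp_segment[-1])
    difference = abs(prev_d - new_d)
    # Wrap around 360 degrees when both angles sit near the x-axis.
    if difference > 315 and (prev_d > 315 or new_d > 315):
        difference = 360 - difference
    return difference > threshold_angle
```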

From the preliminary experiments carried out with this segmentation algorithm we

observed that a single threshold angle could not be used to get the expected segmentation

results for all the characters in a character set. The English uppercase character set was

used to test this segmentation algorithm and the expected segmentations are depicted in

Fig. 15. The observed segmentation problems under a single threshold angle with three

different values are depicted in Table 5. Except for these problems, the other character

skeletons were segmented as expected.


Fig. 15. The expected segmentations of uppercase English characters.

Table 5. Problems identified in the segmentation under single threshold angles of 40, 60, and 45 degrees. The segments are numbered according to the order in which they were identified.

As depicted in Table 5, when the threshold angle was equal to 40 degrees, the

characters ‘B’, ‘G’, ‘J’, ‘O’, ‘Q’ and ‘S’ were over-segmented. When the threshold

angle was less than 40 degrees, more over-segmentations resulted. On the other

hand, when the threshold angle was equal to 60 degrees, the characters ‘B’, ‘H’ and ‘L’

suffered from under-segmentation and, when the threshold angle was greater than

60 degrees, more under-segmentations resulted. When the threshold angle

was equal to an angle between 40 and 60 (for example, 45 degrees), the problem of 

under-segmentation was eliminated, but there was still some over-segmentation of the characters


‘B’, ‘G’, ‘J’ and ‘S’. From these results, it was clear that different threshold angles should

be used to get the expected segmentation results in different situations.

We then made an effort to identify the different situations that require different threshold angles for the meaningful segmentation of handwritten uppercase English character skeletons. From this experiment, 17 such common situations were identified. As an example, Fig. 16(a) depicts a segment (a "DLike" arc) that should be preserved from over-segmentation. In this situation, it was found that the required threshold angle should be 360 degrees in order to avoid any segmentation. On the other hand, Fig. 16(b) depicts a skeleton that should be segmented into two separate segments (a "Positive Slanted" line and a "DLike" curve). In this situation, it was found that the required threshold angle should be around 30 degrees.

(a)

(b)

Fig. 16. Two different situations that require different threshold angles. (n_w_d = new written direction,

p_w_d = previous written direction).

6.6.  Fuzzy representation of different situations

In order to differentiate between the identified situations requiring different threshold angles for the meaningful segmentation of the English uppercase character set, the following characteristics of the pixels in current_segment and tmp_segment were used (current_segment and tmp_segment are the same data structures discussed in Section 6.4):

•  A set of fuzzy features of current_segment: ARC-NESS, Straightness, Line Type and Curve Type.


•  The angle between the x-axis and the straight line connecting the start and the end points of current_segment, called theta.

•  The new_written_direction of tmp_segment, called beta.

The fuzzy features of current_segment, namely ARC-NESS (MARC), Straightness (MSTR), Line Type (MHL, MVL, MPS, MNS) and Curve Type (CLike, DLike, ULike, ALike), were used to make the representation of the different situations more flexible. These features can be calculated using the methods described in Section 7, and the corresponding membership functions are depicted in Fig. 22. The angles theta and beta (Fig. 17) were calculated using the method explained earlier for calculating the written direction of a sequence of pixels. Using these characteristics, an experimental set of fuzzy rules was developed. As an example, the situation depicted in Fig. 16(a) could be represented by the following Rule (a), which can be derived with the aid of Fig. 17.

Rule (a): IF (current_segment.MARC = "S" or "SM" or "M" or "ML" or "L" or "VL") AND (theta <= 90) AND (90 <= beta <= 180) THEN threshold_angle = 360 degrees.

Fig. 17. The derivation of rule (a).

On the other hand, the situation depicted in Fig. 16(b) can be represented by the following Rule (b), which can be derived using Fig. 18.

Rule (b): IF (current_segment.MDL = "L" or "VL") AND (225 < theta < 315) AND (beta < 135) THEN threshold_angle = 30 degrees.

Table 6 presents the complete experimental rule base with the corresponding threshold angles associated with the meaningful segmentation of handwritten uppercase English characters. Apart from these special situations, in all other situations, 45 degrees was used as the value of the threshold angle.
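As an illustration, rules such as Rule (a) and Rule (b), together with the 45-degree default, could be encoded as a simple function. This is only a minimal sketch: the function name, the dictionary of fuzzy-feature labels and the rule ordering are illustrative assumptions, not part of the original system.

```python
# Hypothetical encoding of two threshold-angle rules plus the default.
def threshold_angle(segment, theta, beta):
    """Return the segmentation threshold angle (degrees) for one situation.

    `segment` maps fuzzy-feature names (e.g. "MARC", "MDL") to
    linguistic labels such as "S", "SM", ..., "VL".
    """
    # Rule (a): a sufficiently arc-like segment turning back on itself
    # must never be split, so the threshold is raised to 360 degrees.
    if (segment.get("MARC") in ("S", "SM", "M", "ML", "L", "VL")
            and theta <= 90 and 90 <= beta <= 180):
        return 360
    # Rule (b): a strongly D-like segment whose new written direction is
    # below 135 degrees should be split readily (threshold 30 degrees).
    if (segment.get("MDL") in ("L", "VL")
            and 225 < theta < 315 and beta < 135):
        return 30
    # Default threshold for situations not covered by the special rules.
    return 45
```

The full rule base of Table 6 would add the remaining situations as further branches before the default.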

The results of this segmentation algorithm for English upper case characters are

presented in Section 9 (Fig. 29).


Fig. 18. The derivation of rule (b).

Table 6. The complete experimental rule base. TH = Threshold. The membership functions of the fuzzy features "S", "SM", …, "VL" are described in Section 7.

Fuzzy Features of current_segment | theta | beta | TH Angle
MSTR = "S" to "VL" | theta >= 315 or theta <= 10 | beta > 260 | 360
MSTR = "S" to "VL" | theta >= 300 | beta <= 60 | 360
MVL = "M" to "VL" | 45 <= theta < 135 | beta >= 350 or beta <= 60 | 20
MHL = "LM" to "VL" | theta < 10 or theta > 350 | 225 < beta <= 270 | 20
MHL = "L" or "VL" | theta < 10 or theta > 350 | 225 < beta <= 270 | 20
MARC = "S" to "VL" | 90 < theta < 180 | beta < 135 or beta > 315 | 90
MARC = "S" to "VL" | theta <= 90 | 90 <= beta < 180 | 360
MARC = "S" to "VL" | 270 < theta | 180 < beta < 270 | 90
MARC = "S" to "VL" | theta > 315 | beta < 180 | 90
MARC = "S" to "VL" | theta < 90 | beta >= 270 | 360
MARC = "S" to "VL" | theta < 45 | beta > 270 | 90
MARC = "S" to "VL" | 90 < theta < 180 | 180 < beta < 250 | 360
MCL = "L" or "VL" | theta <= 100 | beta <= 90 | 5
MDL = "L" or "VL" | theta < 160 | 30 < beta < 160 | 10
MDL = "L" or "VL" | 225 < theta < 315 | beta < 135 | 30
MDL = "L" or "VL" | 45 < theta < 90 | 260 < beta <= 270 | 10
MDL = "L" or "VL" | theta >= 270 | beta >= 180 | 10


6.7.  Noise removal

In the noise removal step, the segments that resulted from noise in the skeleton area were removed. The average size of the character skeletons considered in this research was 30*30 pixels. Accordingly, the minimum size of a segment was taken as 5 pixel points. Therefore, the segments containing fewer than 5 pixel points were treated as noise and discarded from the final set of segments.
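This filtering step can be sketched in a few lines; the function and constant names are illustrative, not taken from the original implementation.

```python
# Noise removal: drop segments with fewer than 5 pixel points,
# following the 30*30-pixel skeleton assumption in the text.
MIN_SEGMENT_SIZE = 5  # minimum number of pixel points per valid segment

def remove_noise(segments):
    """Keep only the segments that are large enough to be meaningful."""
    return [seg for seg in segments if len(seg) >= MIN_SEGMENT_SIZE]
```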

7. Fuzzy Feature Extraction

After a character skeleton was properly segmented, 16 fuzzy features were extracted from each resulting segment. These fuzzy features were previously used in the online HCR research presented in Refs. 2 and 16. As the first step of the feature calculation, the universe_of_discourse (UOD) of the character skeleton was determined. Most of the fuzzy features were calculated with respect to the UOD of that character skeleton.

Definition 7. The universe_of_discourse of a character skeleton is the smallest rectangular area into which the skeleton fits.2,6

Figure 19 depicts the UOD of the character skeleton 'A'.

Fig. 19. Character skeleton 'A' and its universe_of_discourse.

The following subsections describe the calculation of the fuzzy features in detail.

7.1.  Relative positions

The relative positions of a given segment with respect to the UOD can be determined as follows. Consider the character segment n shown in Fig. 20.

The coordinates of the center point of the nth segment are calculated as follows:

CENTER_x^{seg(n)} = ( x_{min}^{seg(n)} + x_{max}^{seg(n)} ) / 2 ,  (2)

CENTER_y^{seg(n)} = ( y_{min}^{seg(n)} + y_{max}^{seg(n)} ) / 2 .  (3)


Fig. 20. Relative position with respect to the universe of discourse.

The relative positions of the given segment can then be expressed as follows:

mHP = \mu_{HP}^{seg(n)} = ( CENTER_x^{seg(n)} - x_{min}^{UOD} ) / ( x_{max}^{UOD} - x_{min}^{UOD} ) ,  (4)

mVP = \mu_{VP}^{seg(n)} = ( CENTER_y^{seg(n)} - y_{min}^{UOD} ) / ( y_{max}^{UOD} - y_{min}^{UOD} ) .  (5)

The terms HP and VP stand for "Horizontal Position" and "Vertical Position", respectively.
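Equations (2)–(5) can be sketched directly in code. This is a minimal illustration assuming segments and the UOD are given as lists of (x, y) pixel coordinates; the helper names are ours, not the paper's.

```python
# Sketch of Equations (2)-(5): segment center and relative position
# inside the universe of discourse (UOD).
def center(points):
    """Center point of a segment, Equations (2) and (3)."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0

def relative_position(segment, uod):
    """Relative positions mHP and mVP, Equations (4) and (5)."""
    cx, cy = center(segment)
    xs = [x for x, _ in uod]
    ys = [y for _, y in uod]
    m_hp = (cx - min(xs)) / (max(xs) - min(xs))
    m_vp = (cy - min(ys)) / (max(ys) - min(ys))
    return m_hp, m_vp
```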

7.2. Geometrical features

A given segment is determined to be either an "arc" or a "straight line". The two associated fuzzy features, "ARC-NESS" and "STRAIGHTNESS", are complementary. That is,

\mu_{ARC-NESS}^{seg(n)} + \mu_{STRAIGHTNESS}^{seg(n)} = 1 .  (6)

The ARC-NESS and the STRAIGHTNESS can be calculated using the following two equations:

mSTR = \mu_{STRAIGHTNESS}^{seg(n)} = d_{p_1 p_N}^{seg(n)} / \sum_{k=1}^{N-1} d_{p_k p_{k+1}}^{seg(n)} ,  (7)

mARC = \mu_{ARC-NESS}^{seg(n)} = 1 - \mu_{STRAIGHTNESS}^{seg(n)} ,  (8)

where d_{p_k p_{k+1}}^{seg(n)} stands for the straight-line distance between point k and point (k+1) of the nth segment, and N denotes the number of pixels in the segment. A threshold value (e.g. 0.6) could be used to decide whether the given segment is a straight line or an arc.
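Equations (7) and (8) amount to comparing the end-to-end chord of a segment with its path length. A minimal sketch, assuming a segment is an ordered list of (x, y) pixels:

```python
# Sketch of Equations (7) and (8): STRAIGHTNESS is the ratio of the
# end-to-end distance to the path length; ARC-NESS is its complement.
from math import hypot

def straightness(points):
    """mSTR of a segment given as an ordered list of (x, y) pixels."""
    path = sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
    chord = hypot(points[-1][0] - points[0][0],
                  points[-1][1] - points[0][1])
    return chord / path

def arcness(points):
    """mARC = 1 - mSTR, Equation (8)."""
    return 1.0 - straightness(points)
```

A perfectly straight run of pixels gives mSTR = 1, and any bending lowers it toward 0.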

7.3.  Line types and relative lengths

If the given segment is determined to be a straight line, then its line type can be calculated using the following equations:

mVL = \mu_{VL}^{seg(n)} = max( \Lambda(\theta_{seg(n)}; 90, 90), \Lambda(\theta_{seg(n)}; 90, 270) ) ,  (9)

mHL = \mu_{HL}^{seg(n)} = max( \Lambda(\theta_{seg(n)}; 90, 0), \Lambda(\theta_{seg(n)}; 90, 180), \Lambda(\theta_{seg(n)}; 90, 360) ) ,  (10)

mNS = \mu_{NS}^{seg(n)} = max( \Lambda(\theta_{seg(n)}; 90, 135), \Lambda(\theta_{seg(n)}; 90, 315) ) ,  (11)

mPS = \mu_{PS}^{seg(n)} = max( \Lambda(\theta_{seg(n)}; 90, 45), \Lambda(\theta_{seg(n)}; 90, 225) ) ,  (12)

where \theta_{seg(n)} is the angle that the straight line between the first and the last point of the segment forms with the positive x-axis of the O-x-y plane. The terms VL, HL, NS and PS refer to vertical line, horizontal line, negative-slanted line and positive-slanted line, respectively. Here, the function \Lambda is defined as in Ref. 13:

\Lambda(x; b, c) = 1 - 2|x - c| / b ,  if c - b/2 <= x <= c + b/2 ;  0 otherwise.  (13)
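The triangular membership function of Equation (13) and the line-type memberships of Equations (9)–(12) can be sketched as follows; the function names are illustrative.

```python
# Sketch of Equation (13) and the line-type memberships (9)-(12).
def tri(x, b, c):
    """Triangular membership of width b centred at c, Equation (13)."""
    if c - b / 2.0 <= x <= c + b / 2.0:
        return 1.0 - 2.0 * abs(x - c) / b
    return 0.0

def line_types(theta):
    """Memberships mVL, mHL, mNS, mPS for a line at angle theta (degrees)."""
    return {
        "VL": max(tri(theta, 90, 90), tri(theta, 90, 270)),
        "HL": max(tri(theta, 90, 0), tri(theta, 90, 180), tri(theta, 90, 360)),
        "NS": max(tri(theta, 90, 135), tri(theta, 90, 315)),
        "PS": max(tri(theta, 90, 45), tri(theta, 90, 225)),
    }
```

For example, a line at 90 degrees is fully vertical (mVL = 1) and not at all horizontal (mHL = 0).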

Three other features to determine for a straight line are the relative lengths of the segment with respect to the UOD. These are the horizontal length (HLEN), the vertical length (VLEN) and the slant length (SLEN) of the segment, as described in Equations (14)–(16):

mHLEN = \mu_{HLEN}^{seg(n)} = d_{p_1 p_N}^{seg(n)} / WIDTH ,  (14)

mVLEN = \mu_{VLEN}^{seg(n)} = d_{p_1 p_N}^{seg(n)} / HEIGHT ,  (15)

mSLEN = \mu_{SLEN}^{seg(n)} = d_{p_1 p_N}^{seg(n)} / SLANT_LENGTH ,  (16)

where the values WIDTH, HEIGHT and SLANT_LENGTH are depicted in Fig. 21.

Fig. 21. The width, height and slant_length of the universe of discourse.


7.4.  Arc types

If the given segment is determined to be an arc, the type of the arc can be calculated as follows.

7.4.1. C-likeness and D-likeness

The shapes of C-like and D-like arcs are presented in Table 1. The C-likeness of an arc can be calculated by (17):

mCL = \mu_{CL}^{seg(n)} = min( 1, \sum_{i=1}^{N} l_{x_i} / N ) ,  (17)

where

l_{x_i} = 1 if x_i < (x_S + x_E)/2 ; 0 otherwise.

Here, x_i is the x-coordinate (the horizontal position) of point i of the segment, and S and E denote the start and end points of the segment, respectively.

The D-likeness can be calculated by (18):

mDL = 1 - mCL .  (18)
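Equations (17) and (18) simply count how many points of the arc lie to the left of the midpoint of its two endpoints. A minimal sketch:

```python
# Sketch of Equations (17) and (18): C-likeness counts the fraction of
# arc points whose x-coordinate lies left of the endpoint midpoint.
def c_likeness(points):
    """mCL of an arc given as a list of (x, y) pixels, Equation (17)."""
    x_s, x_e = points[0][0], points[-1][0]
    mid = (x_s + x_e) / 2.0
    hits = sum(1 for x, _ in points if x < mid)
    return min(1.0, hits / float(len(points)))

def d_likeness(points):
    """mDL = 1 - mCL, Equation (18)."""
    return 1.0 - c_likeness(points)
```

The A-likeness and U-likeness of the next subsection follow the same pattern with y-coordinates.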

7.4.2.  A-likeness and U-likeness

The shapes of A-like and U-like curves are presented in Table 1. The A-likeness of a curve can be calculated by (19):

mAL = \mu_{AL}^{seg(n)} = min( 1, \sum_{i=1}^{N} l_{y_i} / N ) ,  (19)

where

l_{y_i} = 1 if y_i < (y_S + y_E)/2 ; 0 otherwise.

Here, y_i is the y-coordinate (the vertical position) of point i of the segment, and S and E denote the start and end points of the segment, respectively.

The U-likeness can be calculated by (20):

mUL = 1 - mAL .  (20)

7.4.3. O-likeness

Let (X_center, Y_center) denote the center of the curve. This is the same point that was calculated when determining the relative position of the segment in Section 7.1. The expected radius of the curve can be calculated by


EXPrad^{seg(n)} = ( (x_{max}^{seg(n)} - x_{min}^{seg(n)}) + (y_{max}^{seg(n)} - y_{min}^{seg(n)}) ) / 4 .  (21)

The actual radius of the curve can be determined by summing up the straight-line distances from the center of the segment to each element belonging to the segment and dividing the sum by the number of elements in the segment:

ACTUALrad^{seg(n)} = \sum_{i=1}^{N} d_{p_i, (X_center, Y_center)}^{seg(n)} / N .  (22)

The expected diameter for a curve with this radius is then calculated using the expression

EXPdiameter^{seg(n)} = 2 \pi EXPrad^{seg(n)} .  (23)

The actual diameter of the given segment can be calculated by summing the straight-line distances between consecutive elements of the segment, as in (24):

ACTUALdiameter^{seg(n)} = \sum_{i=1}^{N-1} d_{p_i p_{i+1}} ,  (24)

where N stands for the number of elements in the segment. Then the following equations are used to determine the O-likeness of the given segment:

\mu_{OL1}^{seg(n)} = f(x) if f(x) <= 1 ; 1/f(x) if f(x) > 1 ,  (25)

where

f(x) = EXPdiameter^{seg(n)} / ACTUALdiameter^{seg(n)} ,  (26)

\mu_{OL2}^{seg(n)} = g(x) if g(x) <= 1 ; 1/g(x) if g(x) > 1 ,  (27)

where

g(x) = EXPrad^{seg(n)} / ACTUALrad^{seg(n)} ,  (28)

and

\mu_{OL}^{seg(n)} = min( \mu_{OL1}^{seg(n)}, \mu_{OL2}^{seg(n)} ) .  (29)
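Equations (21)–(29) can be put together in one function. This is a sketch under the assumption that the segment is an ordered list of (x, y) pixels; it folds each expected/actual ratio into [0, 1] and takes the minimum, as in (29).

```python
# Sketch of Equations (21)-(29): O-likeness compares the expected
# radius and "diameter" of an ideal circle with the measured ones.
from math import hypot, pi

def o_likeness(points):
    """mOL of an arc given as an ordered list of (x, y) pixels."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    cx, cy = (min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0
    # Equation (21): expected radius from the bounding box.
    exp_rad = ((max(xs) - min(xs)) + (max(ys) - min(ys))) / 4.0
    # Equation (22): actual radius as the mean distance to the center.
    act_rad = sum(hypot(x - cx, y - cy) for x, y in points) / len(points)
    # Equations (23) and (24): expected vs. actual diameter (path length).
    exp_diam = 2.0 * pi * exp_rad
    act_diam = sum(hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(points, points[1:]))
    # Equations (25)-(29): fold each ratio into [0, 1], take the minimum.
    def fold(r):
        return r if r <= 1.0 else 1.0 / r
    return min(fold(exp_diam / act_diam), fold(exp_rad / act_rad))
```

A closed, near-circular chain of pixels scores close to 1, while open or straight segments score noticeably lower.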

7.5.  Fuzzification

All of these features were fuzzified using the following membership functions (Fig. 22).


Fig. 22. Membership functions used to fuzzify the crisp values calculated for each feature. VS = Very Small,

S = Small, SM = Small Medium, M = Medium, LM = Large Medium, L = Large, VL = Very Large.

8. Knowledge Representation and Reasoning

8.1.  Automatic rule base generation

In order to obtain an automatic and customizable learning method, the simple automatic rule base generation approach proposed in our earlier paper16 was adopted. This approach is centered around a main database consisting of two tables, called Character and Segment (the ER (entity-relationship) diagram of the database is given in Fig. 23), which are used to store the knowledge extracted from each training character: the character class (Char), the number of segments (NoOfSegments) and the individual fuzzy characteristics of those segments.

Fig. 23. The database design.

8.2. Training and inference

After calculating the fuzzy features for all the segments in a given training character, the user is asked to enter the corresponding alphanumeric character. These data are then inserted into the above database. In this way, the system can be trained with a given training character set, one character at a time.


In order to implement a human-reasoning-like fuzzy reasoning method, as explained in Section 1, the developed rule base was evaluated to generate a set of similarity measures between the given character to be recognized (Ch) and the database characters (Db_i). To do this, the calculated fuzzy values of each segment in the given character (Ch) were compared with the stored values of the corresponding features of the segments belonging to the database characters having the same number of segments as the given character. This comparison is done using the min-max fuzzy similarity method (30). The generated similarity values (the M_i values in the column vector "Similarity" in (31)) can be treated as the degrees of membership of the given character (Ch) in the different character classes in the database (Db_i) having the same number of segments.

min-max(A, B) = \sum_{i=1}^{k} min(a_i, b_i) / \sum_{i=1}^{k} max(a_i, b_i) ,  (30)

where A and B are vectors of length k containing fuzzy features.

Similarity = [ M_1, M_2, …, M_n ]^T ,  (31)

where

M_i = min( min-max(Ch.segment[1], Db_i.segment[1]), …, min-max(Ch.segment[m], Db_i.segment[m]) ) ;  (32)

Ch = the given character to be recognized;
Db_i = the ith character in the database having the same number of segments as Ch;
m = the number of segments in the given character;
n = the number of characters in the database having m segments.
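Equations (30)–(32), together with the thresholded class selection described next, can be sketched as follows. The function names and the dictionary-based database are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Equations (30)-(32) plus a thresholded classification step.
def min_max(a, b):
    """Fuzzy min-max similarity of two equal-length vectors, Eq. (30)."""
    return (sum(min(x, y) for x, y in zip(a, b)) /
            sum(max(x, y) for x, y in zip(a, b)))

def character_similarity(ch_segments, db_segments):
    """M_i: the weakest segment-to-segment match decides, Eq. (32)."""
    return min(min_max(s, t) for s, t in zip(ch_segments, db_segments))

def classify(ch_segments, database, threshold=0.5):
    """Pick the best-matching class above the threshold, else None.

    `database` maps a character class to its stored per-segment
    feature vectors; only entries with the same number of segments
    as the test character are compared.
    """
    scores = {label: character_similarity(ch_segments, segs)
              for label, segs in database.items()
              if len(segs) == len(ch_segments)}
    if not scores:
        return None
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None
```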

Then, the character class of the database character having the maximum resemblance to the given character (the maximum M_i value) was selected as the class of the given character, provided that this similarity value was greater than a particular threshold value (e.g. 0.5). If a character written in a new writing style is not identified, the user can insert the knowledge of that character into the database online.

9. Results and Discussion

The system was trained with the handwritten character patterns shown in Fig. 24. As an example, the fuzzy features stored in the database for the training character pattern 'A' (in Fig. 25) are depicted in Fig. 26. When another character 'A' (in Fig. 27), written in a different writing style, was presented to the system, the resemblance fuzzy values


generated (after comparing the features of the given character with the database values) are depicted in Table 7.

According to Table 7, the similarity between the given character pattern and the database character 'A', that is, 0.7923, is the highest value and is also greater than 0.5. Therefore, the system recognizes the character pattern in Fig. 27 as the character 'A'.

Fig. 24. Training character set.

Fig. 25. Training character ‘A’, its skeleton and its segmentation.

Fig. 26. The calculated fuzzy features for the character pattern ‘A’ shown in Fig. 25.

Fig. 27. The character pattern and its segmentation used at the recognition phase.


Table 7. The resemblance fuzzy values generated by the system. The Segment columns give the segment equalities between the test character and the database character.

Database character (having 3 segments) | Segment 1 | Segment 2 | Segment 3 | Character similarity (Min)
A | 0.8077 | 0.7923 | 0.8330 | 0.7923
G | 0.0889 | 0.3637 | 0.3333 | 0.0889
K | 0.7917 | 0.4481 | 0.3635 | 0.3635
N | 0.6787 | 0.8750 | 0.2352 | 0.2352
P | 0.1162 | 0.5600 | 0.3750 | 0.1162
T | 0.2059 | 0.4545 | 0.8181 | 0.2059
X | 0.3335 | 0.7690 | 0.3105 | 0.3105
Z | 0.1818 | 0.6073 | 0.8461 | 0.1818

Fig. 28. A test image.

Then, the image shown in Fig. 28 was loaded into the system for recognition. This image contains three character sets: Character Set 1, Character Set 2 and Character Set 3. Character Set 1 is the same character set used for training (as in Fig. 24), while Character Set 2 and Character Set 3 are new character sets. Figure 29 shows the individual character isolation, the segmentation and the recognition, together with the generated fuzzy similarity values. The same training character set (Character Set 1) was recognized by the system, producing a 100% similarity value for all the characters. For the new character sets, these values lay between 50% and 100%. As an example, the character 'B' in Character Set 2 is 78% equivalent to the character 'B' in Character Set 1, which is in the knowledge base. If a particular character pattern is not correctly recognized by the system, then we can customize the system with the knowledge of that character pattern. This simply adds a new record to the database containing the features of that character, without altering the existing knowledge about the other characters.

The accuracy of this method depends heavily on the correct segmentation of the character skeletons into a set of meaningful segments. There are some situations in which some character patterns were not segmented as expected (Fig. 30). In these situations, the system failed to recognize the characters correctly.

Fig. 29. (color online) Segmentation and recognition of the characters in the sample test image.


Fig. 30. (color online) Problems with the segmentation algorithm.

10. Conclusion

Based on the obtained results, it can be concluded that the method proposed in this paper is a very flexible method for offline HCR. In particular, the customizability of this method is of paramount importance compared to other available HCR systems. However, the proposed individual character segmentation algorithm still has to be improved further in order to overcome the problems identified in the paper.

In order to generalize this method to most handwritten character recognition tasks encountered in the real world, some more work has to be conducted. The heart of the proposed individual character segmentation algorithm is the fuzzy rule base used to determine the threshold angle for the segmentation. This rule base could be adapted for the meaningful segmentation of other character sets, such as English lowercase letters, numerals or any other character set, after identifying, through empirical studies, the special situations requiring different threshold angles. Another direction of future work would be to develop a character separation algorithm for connected characters. Moreover, other situations, such as how to process character skeletons having spurious branches or a thickness of more than one pixel, have to be addressed too.

The main idea of this work, which was to use the capabilities of Fuzzy Logic to deal with the impreciseness associated with offline handwritten character patterns, could also be adopted in other pattern recognition systems in Computer Vision that deal with the impreciseness arising from incomplete knowledge. Moreover, the proposed individual character segmentation algorithm could be adapted for the segmentation of other objects in image processing.

References

1.  P. D. Gader, J. M. Keller, R. Krishnapuram, J.-H. Chiang, and M. A. Mohamed, Neural and fuzzy methods in handwriting recognition, Computer, Vol. 30, No. 2 (1997), pp. 79–86.

2.  R. Ranawana, V. Palade, and G. E. M. D. C. Bandara, An efficient fuzzy method for handwritten character recognition, in Proc. of the 8th Int. Conf. on Knowledge-Based Intelligent Information and Engineering Systems (Wellington, New Zealand, 2004), pp. 698–707.

3.  I. Guyon, Applications of neural networks to character recognition,   International Journal of 

Pattern Recognition and Artificial Intelligence, Vol. 5 (1991), pp. 353–382.

4.  S. W. Lee, Off-line recognition of totally unconstrained handwritten numerals using multilayer cluster neural network, IEEE Trans. on Pattern Anal. Mach. Intell., Vol. 18, No. 6 (1996), pp. 648–652.


5.  S. J. Lee and H. L. Tsai, Pattern fusion in feature recognition neural networks for handwritten

character-recognition,   IEEE Trans. on Sys., Man and Cyb., Vol. 28, No. 4 (1998), pp. 612–

617.

6.  A. Goltsev and D. Rachkovskij, Combination of the assembly neural network with a

perceptron for recognition of handwritten digits arranged in numeral strings, Pattern

 Recognition, No. 3 (2005), pp. 315–322.

7.  H. Miyao, M. Maruyama, Y. Nakano, T. Hananoi, and K. Sangyo, Off-line handwritten character recognition by SVM on the virtual examples synthesized from on-line characters, in Proc. Eighth Int. Conf. on Document Analysis and Recognition (Seoul, Korea, 2005), pp. 494–498.

8.  H. Bentounsi and M. Batouche, Incremental support vector machines for handwritten Arabic

character recognition, in Proc. Int. Conf. on Information and Communication Technologies:

From Theory to Applications (Damascus, Syria, 2004), pp. 477–478.

9.  H.-S. Park and S.-W. Lee, Offline recognition of large-set handwritten characters with

multiple hidden markov models, Pattern Recognition, Vol. 20, No. 2 (1996), pp. 231–244.

10.  H. Nishimura and M. Tsutsumi, Off-line hand-written character recognition using integrated

1DHMMs based on feature extraction filters, in Proc. Int. Conf. on Document Analysis and 

 Recognition (Seattle, USA, 2001), pp. 417–421.

11.  R. Zhang and X. Ding, Offline handwritten numeral recognition using orthogonal Gaussian

mixture model, in Proc. Int. Conf. on Image Processing (Thessaloniki, Greece, 2001),

pp. 1126–1129.

12.  P. Gader, J. Keller, and J. Cai, A fuzzy logic system for the detection and recognition of street

number fields on handwritten postal addresses, IEEE Trans. Fuzzy Systems (1995), pp. 83–96.

13.  A. Malaviya and L. Peters, Handwriting recognition with fuzzy linguistic rules, in Proc. of 

Third European Congress on Intelligent Techniques and Soft Computing (Aachen, Germany,

1995), pp. 1430–1434.

14.  K. P. Chan and Y. S. Cheung, Fuzzy-attribute graph with application to Chinese character recognition, IEEE Trans. on Sys., Man and Cyb., No. 2 (1992), pp. 402–410.

15.  K. B. M. R. Batuwita and G. E. M. D. C. Bandara, An online adaptable fuzzy system for

offline handwritten character recognition, in Proc. 11th World Congress of International Fuzzy

Systems (Beijing, China, 2005), pp. 1185–1190.

16.  R. Ranawana, V. Palade, and G. E. M. D. C. Bandara, Automatic fuzzy rule base generation

for on-line handwritten alphanumeric character recognition,  Int. Journal of Knowledge-Based 

and Intelligent Engineering Systems, Vol. 9, issue 4 (2005), pp. 327–339.

17.  J. H. Chiang and P. D. Gader, Hybrid fuzzy-neural systems in handwritten word recognition,

 IEEE Trans. Fuzzy Systems, Vol. 5, No. 4 (1997), pp. 497–510.

18.  A.L. Koerich, Y. Leydier, R. Sabourin, and C. Y. Suen, A hybrid large vocabulary handwritten

word recognition system using neural networks with hidden Markov models, in Proc. 8th Int.

Workshop on Frontiers in Handwriting Recognition (Ontario, Canada, 2002), pp. 99–104.

19.  A. Bellili, M. Gilloux, and P. Gallinari, An MLP-SVM combination architecture for offline handwritten digit recognition: Reduction of recognition errors by Support Vector Machines rejection mechanisms, International Journal on Document Analysis and Recognition, No. 4 (2003), pp. 244–252.

20.  C. M. Holt, A. Stewart, M. Clint, and R. H. Perrott, An improved parallel thinning algorithm, Communications of the ACM, Vol. 30, No. 2 (1987), pp. 156–160.

21.  L. Lam, C.Y. Suen, An evaluation of parallel thinning algorithms for character recognition,

  IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 9 (1995),

pp. 914–919.

22.  L. Huang, G. Wan, and C. Liu, An improved parallel thinning algorithm, in Proc. 7th Int.

Conf. on Document Analysis and Recognition (Edinburgh, UK, 2003), pp. 780–783.


23.  K. B. M. R. Batuwita and G. E. M. D. C. Bandara, New segmentation algorithm for individual offline handwritten character segmentation, in Proc. of 2nd Int. Conf. on Fuzzy Systems and Knowledge Discovery (Changsha, China, 2005), pp. 215–229.

24.  K. B. M. R. Batuwita and G. E. M. D. C. Bandara, An improved segmentation algorithm for individual offline handwritten character segmentation, in Proc. Int. Conf. on Computational Intelligence for Modelling, Control and Automation (Vienna, Austria, 2005), Vol. 2, pp. 982–988.