
APSIPA Newsletter
Issue 6, April 2014

In this issue
APSIPA Social Net and Friend Labs
APSIPA Distinguished Lecturer
Text-Dependent Speaker Verification
Recent Advances in Contrast Enhancement
Robust Speech Recognition and its LSI Design
The Road to Scientific Success
APSIPA ASC 2014 Call for Papers
DSP 2014 Call for Participation

APSIPA Social Net and Friend Labs
Kenneth Lam, Thomas Zhang and C.-C. Jay Kuo

One of the key missions of APSIPA is to provide education, research and development exchange platforms for both academia and industry. This mission is being achieved through presentations at the APSIPA Annual Summit and Conference, APSIPA Newsletters, publications in APSIPA Transactions on Signal and Information Processing (an open-access journal), and so on. With the recent emergence of social media such as Facebook and LinkedIn, we have another convenient platform for information sharing. This idea was first discussed at APSIPA ASC 2011 in Xi’an, China, with overwhelming support from the APSIPA Board of Governors.

Immediately after APSIPA ASC 2011, the APSIPA Social Net program was launched in December 2011. It is built upon the world’s most popular professional social network, LinkedIn. A group called “Asia Pacific Signal and Information Processing Association – APSIPA” has been set up on LinkedIn, and anyone with a LinkedIn account can apply to join. There are currently around 2,500 APSIPA e-members.

Furthermore, we have established two types of subgroups under the APSIPA main group. The first type is based on the six TC tracks: APSIPA.SPS, APSIPA.SIPTM, APSIPA.SLA, APSIPA.BioSiPS, APSIPA.IVM and APSIPA.WCN. Members of the six TCs are encouraged to use their subgroups to communicate. The second type is based on countries/regions around the Asia-Pacific rim. Currently, we have: APSIPA-USA (United States), APSIPA-CHN (China), APSIPA-JPN (Japan), APSIPA-KRO (South Korea), APSIPA-AUS (Australia), APSIPA-CAN (Canada), APSIPA-HKG (Hong Kong), APSIPA-IND (India), APSIPA-NZL (New Zealand), APSIPA-SGP (Singapore), APSIPA-TWN (Taiwan) and APSIPA-THA (Thailand). The country-based subgroups will serve as the basis for building up local chapters.

To enrich the content of the APSIPA Social Net, we have filmed a series of interviews, accessible via http://www.apsipa.org/social.htm. In April 2014, this task was assigned to the newly formed APSIPA Social Net Committee (ASNC). The ASNC is led by the APSIPA VP-Member Relations and Development, Professor Kenneth Lam, with the following members:

Ms Summer Jia He, University of Southern California, USA (Secretary)
Dr Iman Ardekani, Unitec Institute of Technology, New Zealand
Dr Cheng Cai, Northwest A&F University, China
Dr Lu Fang, University of Science and Technology of China



Dr Weiyao Lin, Shanghai Jiao Tong University, China
Dr Chuang Shi, Kansai University, Japan
Prof. Chia-Hung Yeh, National Sun Yat-sen University, Taiwan
Dr Jiantao Zhou, University of Macau, Macau

Another related program, called “APSIPA Friend Labs”, was launched in August 2013 through the joint effort of the APSIPA VP-Institutional Relations and Education, Professor Thomas Zhang, and the APSIPA VP-Member Relations and Development, Professor Kenneth Lam. An academic or industrial lab qualifies as an APSIPA Friend Lab if it has at least 10 current or former lab members who are full or associate members of APSIPA. A person can become an APSIPA associate member by joining the APSIPA group on LinkedIn, or an APSIPA full member by clicking the "Join Us" button in the upper-right corner of the APSIPA homepage and following the given instructions. All APSIPA Friend Labs are listed on the APSIPA website. Each lab has one page to post lab information, photos and a link to the lab home page. The data provided in the online application form is used to generate the Friend Lab page on the APSIPA website. There are about 150 APSIPA Friend Labs now, and the number continues to grow. The list of current APSIPA Friend Labs is available at http://www.apsipa.org/friendlab/Application/lablist.asp.

The APSIPA Social Net and Friend Labs are new initiatives for a professional community. They are still at an early stage, demanding further dedication and innovation to reach their full potential. We sincerely invite you, your friends and colleagues to join the APSIPA group and become e-members. We would also like to recruit more research labs to become APSIPA Friend Labs. Your participation will make APSIPA a warm and interactive professional community.


Recent APSIPA Distinguished Lecturer Activity

1) Lecturer: Tatsuya Kawahara (Kyoto University, Japan)

Title: Recent trend of spoken dialogue systems

Time: 3:00PM, March 26, 2014

Venue: Room 1-303 Info Sci&Tech Building, Tsinghua University, Beijing, China

Host: Prof. Thomas Fang Zheng

There has been significant progress in spoken dialogue systems, both in research and in deployment. This talk will give a brief history of spoken dialogue systems, including their evolution and recent technical trends.

In the new perspective, the back-end of spoken dialogue systems is extended from conventional relational databases (RDB) to general documents, incorporating information retrieval (IR) and question-answering (QA) techniques. This paradigm shift and the author's approaches are reviewed.

2) Lecturer: Tatsuya Kawahara (Kyoto University, Japan)

Title: Multi-modal sensing and analysis of poster conversations

Time: 3:00PM, March 28, 2014 (Friday)

Venue: Room 25-B407, Tianjin University, China

Host: Prof. Jianwu Dang

Conversations in poster sessions at academic events, referred to as poster conversations, pose interesting and challenging topics in multi-modal signal and information processing. This talk gives an overview of our project on the smart posterboard for multi-modal conversation analysis. The smart posterboard has multiple sensing devices to record poster conversations, so we can review who came to the poster and what kind of questions or comments he/she made. The conversation analysis combines speech and image processing such as head tracking, speech enhancement and speaker diarization. Moreover, we are also investigating high-level indexing of the interest and comprehension level of the audience, based on their multi-modal behaviors during the conversation.

Text-dependent Speaker Verification and RSR2015 Speech Corpus

Anthony Larcher and Haizhou Li

RSR2015 (Robust Speaker Recognition 2015) is the largest publicly available speech corpus for text-dependent robust speaker recognition. The current release includes 151 hours of short-duration utterances spoken by 300 speakers. RSR2015 was developed by the Human Language Technology (HLT) department at the Institute for Infocomm Research (I2R) in Singapore. This article describes the RSR2015 corpus, which addresses the reviving interest in text-dependent speaker recognition.

WHY ANOTHER CORPUS?

Speaker verification has reached a maturity that allows state-of-the-art engines to be used in commercial applications. During the last decade, the effort of the community has led to great improvements in the performance and usability of speaker verification engines. Nevertheless, it is well known in the community that the performance of text-independent speaker verification engines suffers from a lack of enrolment and training data. Text-dependency is known to improve the accuracy of speaker verification when dealing with short-duration speech segments. Despite its strong commercial potential, text-dependent speaker recognition lies on the fringes of mainstream speaker recognition. As a result, the speech resources available for such research are either too small or inadequate to take advantage of the technologies developed for mainstream text-independent speaker verification.

Text-dependent speaker verification and user-customized command and control, which recognizes a user-defined voice command and identifies the speaker at the same time, are both useful application scenarios [1]. RSR2015 was therefore developed with the following objectives (a sketch of trial enumeration follows the list below):

To provide a database of reasonable size that supports significance tests for text-dependent speaker recognition. RSR2015 contains recordings from 300 speakers over 9 sessions in order to create enough target and impostor trials.

To have a gender-balanced database that allows fair analysis of gender influence in text-dependent speaker recognition. RSR2015 involves 143 female and 157 male speakers.

To allow for analysis of phonetic variability in the context of text-constrained speaker recognition [2]. The RSR2015 protocol includes more than 60 different utterances spoken by all speakers.
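To make the first objective concrete, the following minimal Python sketch shows how target and impostor trials could be enumerated from a corpus of this size. The speaker labels, the single-enrolment-session policy and the exhaustive pairing are illustrative assumptions of ours, not the official RSR2015 evaluation protocol.

```python
from itertools import product

# Illustrative layout: 300 speakers, 9 sessions each.
# NOT the official RSR2015 evaluation protocol.
speakers = [f"spk{i:03d}" for i in range(300)]
sessions = range(9)

def enumerate_trials(enrol_session=0):
    """Yield (enrol_spk, test_spk, test_session, is_target) trials.

    Each speaker is enrolled on one session; every remaining session of
    every speaker is scored against that enrolment model. Same-speaker
    pairs are targets, cross-speaker pairs are impostors.
    """
    for enrol, test in product(speakers, speakers):
        for sess in sessions:
            if sess == enrol_session:
                continue  # never test on the enrolment session
            yield enrol, test, sess, enrol == test

targets = sum(1 for *_, t in enumerate_trials() if t)
impostors = sum(1 for *_, t in enumerate_trials() if not t)
print(targets, impostors)  # 2,400 target vs. 717,600 impostor trials
```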

WHAT CAN WE DO WITH RSR2015?

RSR2015 allows for the simulation and comparison of different use-cases in terms of phonetic content. For example, the most extreme constraint is to fix a unique utterance for all users of the system at all times. In the case where a larger set of fixed pass-phrases is shared across users, the scenario becomes very similar to a user-customized command and control application. On the other hand, it is possible to limit the phonetic content of the speech utterance by randomly prompting sequences of phones or digits. In this case, the context of each phone varies across sessions, and especially between enrolment and test.

The choice of a specific scenario depends on what constraints we would like to impose on the users. Unfortunately, no existing database allows for a comparison of speaker recognition engines across scenarios in similar conditions. RSR2015 is designed to bridge this gap. It consists of fixed pass-phrases, short commands and random digit series recorded under the same conditions.

DATABASE DESCRIPTION

RSR2015 contains audio recordings from 300 speakers, 143 female and 157 male, in 9 sessions each, with a total of 151 hours of speech. The speakers were selected to be representative of the ethnic distribution of the Singaporean population, with ages ranging from 17 to 42.

The database was collected in an office environment using six portable devices (four smartphones and two tablets) from different manufacturers. Each speaker was recorded using three of the six devices and was free to hold the smartphone or tablet in a comfortable way.

To facilitate the recording, a dialogue manager was implemented on the portable devices as an Android application. The speakers interact with the dialogue manager through a touch screen to complete the recording. Each of the 9 sessions for a speaker is organized into 3 parts:

PART 1 – Short sentences for pass-phrase style speaker verification (71 hours)

All speakers read the same 30 sentences from the TIMIT database [3], covering all English phones. The average duration of the sentences is 3.2 seconds. Example: “Only lawyers love millionaires.”

PART 2 – Short commands for user-customized command and control (45 hours)

All speakers read the same 30 commands designed for the StarHome applications. The average duration of the short commands is 2 seconds. Example: “Light on”.

PART 3 – Random digit strings for speaker verification (35 hours)

All speakers read the same three 10-digit strings and ten 5-digit strings. The digit strings are session-dependent.

INTERSPEECH 2014 SPECIAL SESSION AND BENCHMARKING

During INTERSPEECH 2014, the Institute for Infocomm Research, together with IBM Research and the Centre de Recherche en Informatique de Montréal (CRIM), is organizing a special session on Text-Dependent Speaker Verification with Short Utterances. The purpose of this special session is to gather the research efforts of both academia and industry toward the common goal of establishing a new baseline and exploring new directions for text-dependent speaker verification. For ease of comparison across systems, the RSR2015 database, which comes with several evaluation protocols targeting different scenarios, has been proposed to support research submitted to the INTERSPEECH 2014 special session [4, 5].

WHERE TO GET THIS DATABASE?

The license is available at [Link]. Please contact Dr Anthony Larcher at alarcher [at] i2r.a-star.edu.sg or Dr Kong-Aik Lee at kalee [at] i2r.a-star.edu.sg.

REFERENCES:

[1] K. A. Lee, A. Larcher, H. Thai, B. Ma and H. Li, “Joint Application of Speech and Speaker Recognition for Automation and Security in Smart Home,” in Proc. Interspeech, 2011, pp. 3317-3318.

[2] A. Larcher, P.-M. Bousquet, K. A. Lee, D. Matrouf, H. Li and J.-F. Bonastre, “I-vectors in the context of phonetically-constrained short utterances for speaker verification,” in Proc. ICASSP, 2012.

[3] W. M. Fisher, G. R. Doddington and K. M. Goudie-Marshall, “The DARPA speech recognition research database: specifications and status,” in Proc. DARPA Workshop on Speech Recognition, 1986, pp. 93-99.

[4] A. Larcher, K. A. Lee, B. Ma and H. Li, “RSR2015: Database for Text-Dependent Speaker Verification using Multiple Pass-Phrases,” in Proc. Interspeech, 2012, pp. 1580-1583.

[5] A. Larcher, K. A. Lee, B. Ma and H. Li, “Text-dependent Speaker Verification: Classifiers, Databases and RSR2015,” Speech Communication, 2014, in press.

APSIPA Membership Board

The APSIPA Membership Board (AMB) was set up in February 2014. The AMB is led by the APSIPA VP-Member Relations and Development, Professor Kenneth Lam, with the following members:

Prof. Waleed Abdulla, The University of Auckland
Prof. Woon-Seng Gan, Nanyang Technological University
Prof. Hsueh-Ming Hang, National Chiao-Tung University
Prof. Yo-Sung Ho, Gwangju Institute of Science and Technology
Prof. Jiwu Huang, Shenzhen University
Prof. Hitoshi Kiya, Tokyo Metropolitan University
Prof. Chung-Nan Lee, National Sun Yat-sen University
Dr Tan Lee, Chinese University of Hong Kong
Prof. Nam Ling, Santa Clara University
Prof. Ming-Ting Sun, University of Washington

The AMB members will serve a renewable two-year term, with a maximum of two consecutive terms. The AMB will actively promote APSIPA to all researchers and academics in the fields of signal and information processing, and will also organize APSIPA activities for the benefit of APSIPA members and e-members.

RECENT ADVANCES IN CONTRAST ENHANCEMENT AND THEIR APPLICATIONS TO LOW-POWER IMAGE PROCESSING

Chulwoo Lee, Chul Lee, and Chang-Su Kim

School of Electrical Engineering, Korea University, Seoul, Korea
E-mails: {chulwoo, chul.lee, cskim}@mcl.korea.ac.kr

1. INTRODUCTION

In spite of recent advances in imaging technology, captured images often fail to preserve scene details faithfully or yield poor contrast ratios due to limited dynamic ranges. Contrast enhancement (CE) techniques can alleviate these problems by increasing contrast ratios. Therefore, CE is an essential step in various image processing applications, such as digital photography and visual surveillance.

Conventional CE techniques can be categorized into global and local approaches. A global approach derives a single transformation function, which maps input pixel intensities to output pixel intensities, and applies it to all pixels in an image. Gamma correction, based on the simple power law, and histogram equalization (HE), which attempts to make the histogram of pixel intensities as uniform as possible, are two popular global contrast enhancement techniques [1]. A local approach, on the other hand, derives and applies the transformation function for each pixel adaptively. However, in general, a local approach demands higher computational complexity and its level of CE is harder to control. Therefore, global CE techniques are more widely used for general purposes than local ones. In this article, we review recent advances in global CE techniques and their applications.

2. CE TECHNIQUES AND THEIR APPLICATIONS

HE is one of the most popular techniques to enhance low-contrast images due to its simplicity and effectiveness. However, it has some drawbacks, such as contrast over-stretching, noise amplification, and contour artifacts. Various studies have been carried out to overcome these drawbacks. For example, several algorithms [2, 3, 4] divide an input histogram into sub-histograms and equalize them independently to reduce the brightness change between input and output images. Recently, histogram modification (HM) techniques, which manipulate an acquired histogram before the equalization, have been introduced. Let us review the generalization of HM algorithms [5] and introduce its applications to low-power image processing.

2.1. Histogram Modification

For notational simplicity, let us consider a typical 8-bit imaging system, in which the maximum gray-level is 255. Let $\mathbf{x} = [x_0, x_1, \cdots, x_{255}]^T$ denote the transformation function, which maps gray-level $k$ in the input image to gray-level $x_k$ in the output image [5]. Then, conventional histogram equalization can be achieved by solving

$$\mathbf{D}\mathbf{x} = \bar{\mathbf{h}} \qquad (1)$$

where $\bar{\mathbf{h}} = 255\,\mathbf{h}/(\mathbf{1}^T\mathbf{h})$ is the normalized input histogram and $\mathbf{D}$ is the differential matrix [4].

HE enhances the dynamic range of an image and yields good perceptual image quality. However, if many pixels are concentrated within a small range of gray levels, the output transformation function may have an extreme slope. This degrades the output image severely. To overcome this drawback, a recent approach to HM modifies the input histogram before the HE procedure to reduce extreme slopes in the transformation function. For instance, Wang and Ward [6] clamped large histogram values and then modified the resulting histogram using the power law. Also, Arici et al. [7] reduced the histogram values for large smooth areas, which often correspond to background regions, and mixed the resulting histogram with the uniform histogram. Lee et al. [5] employed a logarithm function to reduce large histogram values and prevent the transformation function from having too steep slopes.

In this recent HM approach, the first step can be expressed by a vector-converting operation $\mathbf{m} = f(\mathbf{h})$, where $\mathbf{m}$ denotes the modified histogram. Then, the desired transformation function $\mathbf{x}$ can be obtained by minimizing the cost function

$$C_H = \|\mathbf{D}\mathbf{x} - \bar{\mathbf{m}}\|^2 \qquad (2)$$

where $\bar{\mathbf{m}}$ is the normalized column vector of $\mathbf{m}$.

2.2. Application to Low Power Image Processing

Whereas a variety of techniques have been proposed for the CE of general images, relatively little effort has been made to adapt the enhancement process to the characteristics of display devices. Since a large portion of power is consumed by display panels, it is essential to develop image processing algorithms that save power in display panels while enhancing image contrast. Let us introduce display-specific CE algorithms using the HM framework.

Fig. 1. Comparison of power-reduced output images on the “Pagoda,” “Ivy,” “Lena,” and “F-16” images. In each subfigure, the left image is obtained by the linear mapping method, and the right one by the PCCE algorithm in [5].

CE for emissive displays. The power consumption of emissive displays, such as OLED devices, is proportional to the squared pixel intensities [5]. By adding a power consumption term to the initial objective function in (2), we obtain a new cost function

$$C_E = \|\mathbf{D}\mathbf{x} - \bar{\mathbf{m}}\|^2 + \lambda\,\mathbf{x}^T\mathbf{H}\mathbf{x} \qquad (3)$$

where $\lambda$ is a user parameter balancing power against contrast. By minimizing this cost function, we can reduce the power while enhancing the contrast. If $\lambda = 0$, power saving is not considered; as $\lambda$ increases, the output image gets darker to achieve power reduction.

Fig. 1 compares the linear reducing method with the proposed power-constrained contrast enhancement (PCCE) algorithm [5]. The linear method produces hazy output images because of the low contrast. On the other hand, the proposed PCCE algorithm yields more satisfactory image quality.
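Because (3) is quadratic in $\mathbf{x}$, its minimizer can be obtained in closed form from the normal equations. The sketch below assumes a forward-difference $\mathbf{D}$, $\mathbf{H} = \mathrm{diag}(\mathbf{h})$ as the power-weighting matrix, and a log-modified histogram; these are our assumptions, not necessarily the exact solver of [5].

```python
import numpy as np

def pcce_transform(hist, lam=0.1):
    """Sketch of minimizing (3): ||D x - m_bar||^2 + lam * x^T H x.

    Assumptions (ours): D is the 256x256 forward-difference matrix and
    H = diag(h) so that x^T H x models emissive power. The cost is
    quadratic, so the minimizer solves (D^T D + lam * H) x = D^T m_bar.
    """
    m = np.log1p(hist)
    m_bar = 255.0 * m / m.sum()
    D = np.eye(256) - np.eye(256, k=-1)      # (D x)_k = x_k - x_{k-1}
    H = np.diag(hist / hist.sum())
    x = np.linalg.solve(D.T @ D + lam * H, D.T @ m_bar)
    return np.clip(x, 0, 255)

hist = np.bincount(np.random.randint(0, 256, 10000), minlength=256).astype(float)
x0 = pcce_transform(hist, lam=0.0)  # lam = 0: plain HM solution, no power saving
x1 = pcce_transform(hist, lam=1.0)  # larger lam: darker mapping, lower power
print(x0.mean(), x1.mean())         # the penalized mapping is darker on average
```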

CE for non-emissive displays. Conventional low-power image processing techniques for non-emissive displays, such as TFT-LCD devices, compensate for a reduced backlight by increasing pixel intensities. Let us denote the backlight scaling factor by $b \in [0, 1]$. The output pixel intensities are then scaled by a factor of $1/b$. Since the maximum value is limited, the display shows $\min(255, \mathbf{x}/b)$. Therefore, the amount of quality loss at each gray level is given by

$$\mathbf{x}_b = \min(\mathbf{0}, \min(255, \mathbf{x}/b) - 255) \qquad (4)$$

The quality loss is modeled by $\mathbf{x}_b^T\mathbf{H}\mathbf{x}_b$ [8], where $\mathbf{H}$ is the diagonal matrix obtained from the input histogram $\mathbf{h}$. By integrating this term into the initial objective function (2), we obtain

$$C_N = \|\mathbf{D}\mathbf{x} - \bar{\mathbf{m}}\|^2 + \mu\,\mathbf{x}_b^T\mathbf{H}\mathbf{x}_b \qquad (5)$$

where $\mu$ is a user-controllable parameter.

Fig. 2 shows the results when the proposed BCCE algorithm [8] for non-emissive displays is applied. In the case of linear mapping, details in bright regions are lost. Although the conventional algorithm in [9] preserves the details more faithfully, the dynamic range is reduced and noise components are amplified. The proposed BCCE algorithm shows better image quality.

Fig. 2. Brightness-compensated contrast enhancement results on the “Baboon,” “Hats,” “Building,” and “Stream” images at b = 0.5. The input images in (a) are compensated by the linear compensation method in (b), Tsai et al.’s algorithm [9] in (c), and the BCCE algorithm [8] in (d).
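For illustration, the following sketch evaluates the loss term $\mathbf{x}_b^T\mathbf{H}\mathbf{x}_b$ used in (5) for a candidate transformation function. It follows Eq. (4) exactly as printed above; the choice of $\mathbf{H}$ as the diagonal of the normalized histogram is our assumption, and none of this is the BCCE solver of [8] itself.

```python
import numpy as np

def bcce_loss_term(x, hist, b=0.5):
    """Evaluate the quality-loss term x_b^T H x_b appearing in (5).

    Follows Eq. (4) as printed above; H = diag of the normalized input
    histogram is an assumption, not necessarily the definition in [8].
    """
    x = np.asarray(x, dtype=float)
    x_b = np.minimum(0.0, np.minimum(255.0, x / b) - 255.0)  # Eq. (4)
    w = hist / hist.sum()                                    # diagonal of H
    return float(np.sum(w * x_b ** 2))                       # x_b^T H x_b

hist = np.ones(256)            # flat histogram, purely for illustration
identity = np.arange(256.0)    # x_k = k: no contrast change
print(bcce_loss_term(identity, hist, b=0.5))
```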

3. CE TECHNIQUES USING HIGHER ORDER STATISTICS

Recently, several algorithms have been proposed that consider the joint distribution of neighboring pixel values for CE. These algorithms exploit the 2D histogram of neighboring pixel values, instead of the 1D histogram of individual pixel values. Celik and Tjahjadi [10] obtained a target 2D histogram by minimizing the sum of its differences from an input 2D histogram and the uniform histogram, and mapped the diagonal elements of the input histogram to those of the target histogram. Lee et al. [11] attempted to emphasize the gray-level differences of neighboring pixels that occur frequently in an input image. To this end, they proposed a tree-like data structure, called the layered difference representation (LDR), and derived an efficient solution to the optimization problem. Shu and Wu [12] also exploited a 2D histogram of pixel values and computed a transformation function to maximize the expected image contrast. Among these new CE algorithms, let us briefly summarize our LDR work [11].


Fig. 3. CE results on the “Eagle” and “Night view” images: (a) input image, (b) HE, (c) CVC [10], and (d) LDR [11].

3.1. Layered Difference Representation

In LDR, a 2D histogram is used to find a desirable transformation function $\mathbf{x}$. Suppose that a pair of adjacent pixels in the input image have gray-levels $k$ and $k+l$. Their difference $l$ is then mapped to the difference

$$d_k^l = x_{k+l} - x_k, \quad 0 \le k \le 255 - l \qquad (6)$$

in the output image. Let the 2D histogram $h(k, k+l)$ count the number of pairs of adjacent pixels with gray-levels $k$ and $k+l$ in the input image, and let $h_k^l = h(k, k+l) + h(k+l, k)$. The objective is to design the transformation function $\mathbf{x}$ that yields a large output difference $d_k^l$ when $h_k^l$ is a large number. In other words, frequently occurring pairs of pixel values in the input image should be clearly distinguished in the output image. To this end, $d_k^l$ should be set proportionally to $h_k^l$:

$$d_k^l = \kappa_l h_k^l \qquad (7)$$

where $\kappa_l$ is a normalizing constant [11].

The difference variables $d_k^l$ are grouped according to the input gray-level difference $l$, and each group is referred to as a layer in the hierarchical structure of LDR. Notice that $d_k^l$ can be decomposed into the difference variables $d_i^1$ at layer 1 by

$$d_k^l = \sum_{i=k}^{k+l-1} d_i^1, \quad 0 \le k \le 255 - l. \qquad (8)$$

Also, the transformation function is determined by the difference variables at layer 1:

$$x_0 = 0, \qquad x_k = \sum_{i=0}^{k-1} d_i^1, \quad 1 \le k \le 255. \qquad (9)$$

From the relationships in (7) and (8), a difference vector $\mathbf{d} = [d_0^1, d_1^1, \ldots, d_{254}^1]^T$ is determined at each layer. Those difference vectors at all layers are then aggregated into a unified vector, which is finally used to construct the transformation function $\mathbf{x}$ using (9).

In Fig. 3, HE and CVC cannot handle large histogram peaks properly. Specifically, HE and CVC increase the contrast in the smooth region excessively, producing contour artifacts in the “Eagle” image. On the contrary, the LDR algorithm [11] exhibits better contrast by exploiting the full dynamic range. Image quality also degrades when scenes are captured in very low-light conditions: “Night view” is a dark input image that contains noise components. HE yields an extremely noisy image. Although CVC exploits the 2D histogram information, it still suffers from over-enhancement due to the high histogram peak. On the other hand, the LDR algorithm [11] alleviates the noise and clarifies the details of the buildings.

4. CONCLUSIONS

This article introduced the HM framework, formulated as a constrained optimization problem. We also verified that HM techniques can be applied to power-constrained image processing algorithms, which enhance image quality and reduce display power consumption simultaneously. Moreover, we briefly reviewed one of the CE techniques using higher-order statistics, which have become more popular in recent years.

5. REFERENCES

[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 3rd edition, 2007.

[2] Y.-T. Kim, “Contrast enhancement using brightness preserving bi-histogram equalization,” IEEE Trans. Consum. Electron., vol. 43, no. 1, pp. 1-8, Feb. 1997.

[3] Y. Wang, Q. Chen, and B. Zhang, “Image enhancement based on equal area dualistic sub-image histogram equalization method,” IEEE Trans. Consum. Electron., vol. 45, no. 1, pp. 68-75, Feb. 1999.

[4] S.-D. Chen and A. R. Ramli, “Minimum mean brightness error bi-histogram equalization in contrast enhancement,” IEEE Trans. Consum. Electron., vol. 49, no. 4, pp. 1310-1319, Nov. 2003.

[5] C. Lee, C. Lee, Y.-Y. Lee, and C.-S. Kim, “Power-constrained contrast enhancement for emissive displays based on histogram equalization,” IEEE Trans. Image Process., vol. 21, no. 1, pp. 80-93, Jan. 2012.

[6] Q. Wang and R. K. Ward, “Fast image/video contrast enhancement based on weighted thresholded histogram equalization,” IEEE Trans. Consum. Electron., vol. 53, no. 2, pp. 757-764, May 2007.

[7] T. Arici, S. Dikbas, and Y. Altunbasak, “A histogram modification framework and its application for image contrast enhancement,” IEEE Trans. Image Process., vol. 18, no. 9, pp. 1921-1935, Sept. 2009.

[8] C. Lee, J.-H. Kim, C. Lee, and C.-S. Kim, “Optimized brightness compensation and contrast enhancement for transmissive liquid crystal displays,” IEEE Trans. Circuits Syst. Video Technol., vol. 24, pp. 576-590, Apr. 2014.

[9] P.-S. Tsai, C.-K. Liang, T.-H. Huang, and H. H. Chen, “Image enhancement for backlight-scaled TFT-LCD displays,” IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 4, pp. 574-583, Apr. 2009.

[10] T. Celik and T. Tjahjadi, “Contextual and variational contrast enhancement,” IEEE Trans. Image Process., vol. 20, no. 12, pp. 3431-3441, Dec. 2011.

[11] C. Lee, C. Lee, and C.-S. Kim, “Contrast enhancement based on layered difference representation of 2-D histograms,” IEEE Trans. Image Process., vol. 22, pp. 5372-5384, Dec. 2013.

[12] X. Shu and X. Wu, “Image enhancement revisited: from first order to second order statistics,” in Proc. IEEE ICIP, Sep. 2013, pp. 886-890.


Robust Speech Recognition and its LSI Design

Yoshikazu MIYANAGA

Hokkaido University, Sapporo, 060-0814 Japan

1 Introduction

This letter introduces recent noise-robust speech communication techniques and their implementation in a small robot. The speech communication system consists of automatic speech detection, speech recognition and speech rejection.

Current speech recognition systems are typically built from phoneme-based speech recognition and a language model [1], [2]. As another approach, phrase-based speech recognition has also been explored [3] - [5]. Compared with a system combining phoneme-based recognition and a language model, phrase-based speech recognition can provide higher recognition accuracy in various noise environments. On the other hand, it requires a high computational cost and a large training database covering all target speech phrases. By using an efficient LSI design of a speech recognition system, the author can realize real-time speech recognition with low power consumption. The embedded speech communication module recognizes all target speech phrases with high recognition accuracy, while non-target speech is automatically rejected, so the smart robot system answers only to the target speech phrases. In terms of recognition accuracy, the system realizes 85% - 99% under 0 - 20 dB SNR and echo environments.

2 Speech Communication System

When we consider an embedded automatic speech recognition (ASR) system, e.g., a robot with ASR, many poor acoustic surroundings and long-distance microphone conditions should be considered. In exhibition rooms, outdoors, and general home environments, the SNR is often lower than 20 dB. For hands-free and natural human communication, microphone distances of 10 cm to 5 m should also be considered. The ASR introduced here can recognize speech accurately under these conditions. The conditions used in our development are (1) white noise, speech-like noise and factory noise at 5 dB SNR, (2) echo environments with reverberation of less than 150 ms, and (3) medium distances (< 5 m) from a microphone.

In order to maintain the high performance of the ASR, our system recognizes only a set of selected phrases; any words and phrases other than the target speech are rejected. Fig. 1 gives an overview of our speech communication system. The system consists of automatic speech detection, i.e., automatic voice activity detection (VAD), automatic speech recognition (ASR), and automatic speech rejection.

The ASR in this system has been designed to be noise-robust. To make the system robust against various noises, a combination of Dynamic Range Adjustment (DRA) and Running Spectrum Filtering (RSF) has been developed [3] - [5]. Noise-biased speech features can be compensated by these methods, and thus recognition accuracy can be improved in many noise circumstances.
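The letter does not detail the DRA and RSF computations, so the following sketch shows one common reading of these techniques: band-pass filtering of each feature coefficient's modulation trajectory (RSF) followed by per-dimension peak normalization (DRA). The band edges, filter order and normalization rule are our assumptions, not the implementation in [3] - [5].

```python
import numpy as np
from scipy.signal import butter, filtfilt

def rsf_dra(mfcc, frame_rate=100.0, band=(1.0, 12.0)):
    """Hedged sketch of Running Spectrum Filtering + Dynamic Range Adjustment.

    Assumptions (ours): RSF is a band-pass filter on each coefficient's
    time trajectory, keeping modulation frequencies in `band` Hz where
    speech energy dominates; DRA then rescales each coefficient so its
    peak magnitude is 1, compensating the noise-dependent shrinkage of
    the feature dynamic range.
    """
    nyq = frame_rate / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, mfcc, axis=0)       # RSF along the time axis
    peak = np.abs(filtered).max(axis=0) + 1e-12
    return filtered / peak                         # DRA per feature dimension

mfcc = np.random.randn(300, 13)   # 3 s of 13-dim features at 100 frames/s
robust_feats = rsf_dra(mfcc)
```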

The automatic speech rejection is used to eliminate speech-like sounds and non-target speech phrases. The ASR in this system concentrates only on target phrases, e.g., command words, so the set of command words contains a limited number of phrases. These target phrases are registered in the dictionary; any other phrases are regarded as out-of-dictionary words and are rejected by the system. This mechanism is realized by applying several criteria to the likelihood values observed from the HMMs in the recognition stage. These criteria are applied for each target phrase, and the optimum conditions are determined using the speech database from the training stage.

Figure 1: Our Speech Communication System
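A minimal sketch of this rejection logic follows. Here `score_phrase` (an HMM log-likelihood scorer) and the per-phrase thresholds are hypothetical stand-ins for the system's actual scorer and tuned criteria.

```python
from typing import Callable, Dict, Optional

def recognize_or_reject(
    features,
    score_phrase: Callable[[str, object], float],  # hypothetical HMM scorer
    thresholds: Dict[str, float],                  # per-phrase criteria from training
) -> Optional[str]:
    """Return the best in-dictionary phrase, or None to reject the input."""
    scores = {p: score_phrase(p, features) for p in thresholds}
    best = max(scores, key=scores.get)
    # Out-of-dictionary rejection: even the best-scoring HMM must clear
    # the phrase-specific likelihood threshold tuned on the training data.
    return best if scores[best] >= thresholds[best] else None
```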

3 LSI Design of Speech Communication System

Another important property of a human-machine interface is the response time. In general human interaction, responses within 100 - 200 ms feel real-time; in other words, if the system can respond to a user within 150 ms, the user perceives the response as real-time. Compared with other real-time processing, i.e., parallel/pipeline computers, wireless communication and multi-media machine-to-machine communications, a time budget of 150 ms seems slow, and thus it is not difficult to design such systems. However, when we want to achieve both low power consumption and real-time processing at the same time, the design becomes complicated. In order to realize both, we have developed a full-custom LSI for our speech communication system. Fig. 2 shows its block diagram.

The architecture consists of an HMM calculation module, a noise-robust processing module, and an MFCC calculation module. In addition, master and slave connections are provided through the BUS CONTROL, so several identical LSIs can be connected in one system.

The first LSI was fabricated in a Rohm 0.35 µm CMOS process in 2000. It can return a recognition result within 0.180 ms per phrase at a 10 MHz clock with 93.2 mW power consumption when 1,000 phrases are recognized, so real-time processing is realized for 1,000 target phrases. Using parallel connections of these LSIs, the system can recognize more than 1,000 phrases within the same response time. Using this LSI, a small ASR evaluation board has been developed. Its specification is as follows:

1. Phrase-based ASR system in which a maximum of 1,000 phrases can be registered.

2. A 32-bit NOR-type FLASH memory and a 16-bit ADC with a sampling frequency of up to 44.4 kHz are embedded.

3. The host connections are SPI, I2C, and UART.

4. The maximum clock is 22.5792 MHz with a 3.3 V supply voltage. The 180 ms response time is realized with a 10 MHz clock.

5. The board size is L: 55 mm x W: 44 mm x H: 12 mm.

Figure 2: Block Diagram of Speech Communication System

Figure 3: Accuracy of Speech Communication

The results of the field experiments using the evaluation board are shown in Fig. 3. In these experiments, the microphone distances were 30 cm, 60 cm and 90 cm, distances generally required by many users. The total number of phrases was 15. For the experiments on echo robustness, a meeting room, an elevator and a stairwell were used; for the experiments on noise robustness, three different in-car conditions were used.

In terms of the total performance under echo and noise circumstances, the developed ASR system realizes 93.0% correctness (speech recognition accuracy and rejection accuracy). When we use this system in exhibitions, it can recognize all words at a noise level of 70 - 75 dB and a microphone distance of 2 - 5 m.

4 Speech Communication Module Embedded Smart Info-media System

The speech communication module introduced above is embedded in a small robot, shown in Fig. 4. The total system of this small robot was designed by Raytron [6]. Two microphones are integrated at the ears of the robot, so sound in front of the robot is mainly recorded and analyzed.

The robot is named CHAPIT and was announced in late 2007. Initially, it could recognize only 300 words. However, a user can speak to the robot at a distance of 30 cm - 5 m under 5 - 30 dB noisy/echo circumstances, and the robot answers for the registered phrases. Its interface style feels quite similar to human-to-human communication.

5 Conclusion

In this letter, a speech communication system has been introduced. It provides high-accuracy speech recognition under noisy and echo circumstances. The robust techniques are implemented in automatic speech detection, automatic speech recognition and automatic speech rejection. In terms of current recognition accuracy, the system realizes 85% - 99% under 0 - 20 dB SNR and 150 ms echo environments.

Figure 4: Chapit


References

[1] S. Furui, "Speaker-independent isolated word recognition using dynamic features of speech spectrum," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-34, no. 1, pp. 52-59, Feb. 1986.

[2] A. Lee and T. Kawahara, "Recent Development of Open-Source Speech Recognition Engine Julius," in Proc. APSIPA Annual Summit and Conference, pp. 131-137, Oct. 2009.

[3] S. Yoshizawa, Y. Miyanaga, N. Wada and N. Yoshida, "A Low-Power LSI Design of Japanese Word Recognition System," in Proc. IEICE International Technical Conference on Circuits/Systems, Computers and Communications 2002, 1 (1): 98-101, 2002.

[4] S. Yoshizawa, N. Wada, N. Hayasaka and Y. Miyanaga, "Scalable Architecture for Word HMM-Based Speech Recognition and VLSI Implementation in Complete System," IEEE Trans. Circuits Syst. I, vol. 53, no. 1, pp. 70-77, Jan. 2006.

[5] Y. Miyanaga, W. Takahashi and S. Yoshizawa, "A Robust Speech Communication into Smart Info-Media System," IEICE Trans. Fundamentals, vol. E96-A, no. 11, pp. 2074-2080, Nov. 2013.

[6] Y. Miyazaki and Y. Miyanaga, "New Development and Enterprise of Robust Speech Recognition Systems," in Proc. 2010 International Workshop on Information Communication Technology, Aug. 2010.


- Book Review -

The Road to Scientific Success: Inspiring Life Stories of Prominent Researchers

Edited by Deborah D. L. Chung (University at Buffalo, State University of New York, USA); email: [email protected]

This book series describes the road to scientific success, as experienced and described by prominent researchers. The focus on the research process (rather than research findings) and on personal experience is intended to encourage readers, who will be inspired to become dedicated and effective researchers.

The objectives of this book series include the following.

To motivate young people to pursue their vocations with rigor, perseverance and direction
To inspire students to pursue science or engineering
To enhance the scientific knowledge of students who do not major in science or engineering
To help parents and teachers prepare the next generation of scientists or engineers
To increase the awareness of the general public of the advances of science
To provide a record of the history of science

Due to the historic significance of the life stories and the impact of the scientific advances behind each story, this book is expected to be valuable from the viewpoint of the history of science.

The uniqueness of the book series relates to the following.

Untold life stories in the words and photos of world-class scientists
Keys to scientific success elucidated
Scientific research strategies disclosed
Career preparation methods demonstrated
History of science revealed
Up-to-date science covered in a way that readers with little or no science background can understand

This book series is suitable for use in courses on introduction to science or engineering, professional success methodology, scientific research methodology and the history of science.

Suitable readers include university students (undergraduate and graduate levels), college students, high school students, educators, teachers, education workers/administrators, historians, technology business personnel, technology transfer professionals, research managers, scientists, engineers, technicians, parents and the general public.

Adapted by Waleed Abdulla from http://www.worldscientific.com/series/rssilspr

Asia-Pacific Signal and Information Processing Association Annual Summit and Conference 2014

December 9-12, 2014, Chiang Mai, Thailand

Call for Papers

Welcome to the APSIPA Annual Summit and Conference 2014, located in Chiang Mai, the most culturally significant city in northern Thailand. Chiang Mai is a former capital of the Kingdom of Lanna (1296-1768) and is well known for its historic temples, arresting scenic beauty, distinctive festivals, temperate fruits and invigorating cool-season climate. The sixth annual conference is organized by the Asia-Pacific Signal and Information Processing Association (APSIPA), aiming to promote research and education in signal processing, information technology and communications. The annual conference was previously held in Japan (2009), Singapore (2010), China (2011), the USA (2012) and Taiwan (2013). The field of interest of APSIPA concerns all aspects of signals and information, including processing, recognition, classification, communications, networking, computing, system design, security, implementation, and technology, with applications to scientific, engineering, and social areas.

The regular technical program tracks and topics of interest include (but are not limited to):

1. Biomedical Signal Processing and Systems (BioSiPS)
1.1 Biomedical Imaging
1.2 Modeling and Processing of Physiological Signals (EKG, MEG, EEG, EMG, etc.)
1.3 Biologically-inspired Signal Processing
1.4 Medical Informatics and Healthcare Systems
1.5 Genomic and Proteomic Signal Processing

2. Signal Processing Systems: Design and Implementation (SPS)
2.1 Nanoelectronics and Gigascale Systems
2.2 VLSI Systems and Applications
2.3 Embedded Systems
2.4 Video Processing and Coding
2.5 Signal Processing Systems for Data Communication

3. Image, Video, and Multimedia (IVM)
3.1 Image/video Coding
3.2 3D Image/video Processing
3.3 Image/video Segmentation and Recognition
3.4 Multimedia Indexing, Search and Retrieval
3.5 Image/video Forensics, Security and Human Biometrics
3.6 Graphics and Animation
3.7 Multimedia Systems and Applications

4. Speech, Language, and Audio (SLA)
4.1 Speech Processing: Analysis, Coding, Synthesis, Recognition and Understanding
4.2 Natural Language Processing: Translation, Information Retrieval, Dialogue
4.3 Audio Processing: Coding, Source Separation, Echo Cancellation, Noise Suppression
4.4 Music Processing

5. Signal and Information Processing Theory and Methods (SIPTM)
5.1 Signal Representation, Transforms and Fast Algorithms
5.2 Time-Frequency and Time-Scale Signal Analysis
5.3 Digital Filters and Filter Banks
5.4 DSP Architecture
5.5 Statistical Signal Processing
5.6 Adaptive Systems and Active Noise Control
5.7 Sparse Signal Processing
5.8 Signal Processing for Communications
5.9 Signal Processing for Energy Systems
5.10 Signal Processing for Emerging Applications

6. Wireless Communications and Networking (WCN)
6.1 Wireless Communications: Physical Layer
6.2 Wireless Communications and Networking: Ad-hoc and Sensor Networks, MAC, Wireless Routing and Cross-layer Design
6.3 Wireless Networking: Access Network and Core Network
6.4 Security and Cryptography
6.5 Devices and Hardware

Submission of Papers

Prospective authors are invited to submit either full papers, up to 10 pages in length, or short papers, up to 4 pages in length; full papers will be considered for single-track oral presentation and short papers mostly for poster presentation. The conference proceedings of the main conference will be published, available and maintained at the APSIPA website.

Organizer
Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI) Association of Thailand

Academic Sponsor
Asia-Pacific Signal and Information Processing Association (APSIPA)

Organizing Committees
Honorary Co-Chairs: Sadaoki Furui, Tokyo Institute of Technology, Japan; K. J. Ray Liu, University of Maryland, USA; Prayoot Akkaraekthalin, KMUTNB, Thailand
General Co-Chairs: Kosin Chamnongthai, KMUTT, Thailand; C.-C. Jay Kuo, University of Southern California, USA; Hitoshi Kiya, Tokyo Metropolitan University, Japan
Technical Program Co-Chairs: Pornchai Supnithi, KMITL, Thailand; Takao Onoye, Osaka University, Japan; Hsueh-Ming Hang, National Chiao Tung University, Taiwan; Anthony Kuh, University of Hawaii at Manoa, USA; Takeshi Ikenaga, Waseda University, Japan; Chung-Hsien Wu, National Cheng Kung University, Taiwan; Yodchanan Wongsawat, Mahidol University, Thailand; Oscar Au, HKUST, Hong Kong; Tomoaki Ohtsuki, Keio University, Japan
Forum Co-Chairs: Antonio Ortega, University of Southern California, USA; Waleed Abdulla, The University of Auckland, New Zealand; Homer Chen, National Taiwan University, Taiwan; Vorapoj Patanavijit, Assumption University, Thailand
Panel Session Co-Chairs: Mark Liao, IIS, Academia Sinica, Taiwan; Li Deng, Microsoft Research, USA; Jiwu Huang, Sun Yat-Sen University, China; Kazuya Takeda, Nagoya University, Japan
Special Session Co-Chairs: Minoru Okada, Nara Institute of Science and Technology, Japan; Gan Woon Seng, Nanyang Technological University, Singapore; Mrityunjoy Chakraborty, IIT Kharagpur, India
Tutorial Session Co-Chairs: Kenneth Lam, The Hong Kong Polytechnic University, Hong Kong; Toshihisa Tanaka, TUAT, Japan; Tatsuya Kawahara, Kyoto University, Japan; Sumei Sun, I2R, A*STAR, Singapore
Publicity Co-Chairs: Yoshio Itoh, Tottori University, Japan; Yo-Sung Ho, Gwangju Institute of Science and Technology, Korea; Thomas Fang Zheng, Tsinghua University, China; Chung-Nan Lee, National Sun Yat-sen University, Taiwan; Chalie Charoenlarpnopparut, Thammasat University, Thailand
Publication Co-Chairs: Yoshinobu Kajikawa, Kansai University, Japan; Nipon Theera-umpon, Chiangmai University, Thailand
Financial Chairs: Rujipan Sampanna, Bangkok University, Thailand; Pairin Kaewkuay, ECTI, Thailand
Local Arrangement Chairs: Suttichai Premrudeeprechacharn, Chiangmai University, Thailand; Sermsak Uatrongjit, Chiangmai University, Thailand; Sathaporn Promwong, KMITL, Thailand
General Secretaries: Werapon Chiracharit, KMUTT, Thailand; Boonserm Kaewkamnerdpong, KMUTT, Thailand

Important Dates
Submission of Proposals for Special Sessions, Forum, Panel & Tutorial Sessions: May 9, 2014
Submission of Full and Short Papers: June 6, 2014
Submission of Papers in Special Sessions: July 4, 2014
Notification of Paper Acceptance: Aug. 29, 2014
Submission of Camera-Ready Papers: Sep. 26, 2014
Author Registration Deadline: Sep. 26, 2014
Tutorial Session Date: Dec. 8, 2014
Summit and Conference Dates: Dec. 9-12, 2014

E-mail : [email protected]

APSIPA Who’s Who

President: C.-C. Jay Kuo, University of Southern California, USA

President-Elect: Haizhou Li, Institute for Infocomm Research, A*STAR, Singapore

Past President: Sadaoki Furui, Tokyo Institute of Technology, Japan.

VP - Conferences: Susanto Rahardja, Institute for Infocomm Research, Singapore

VP - Industrial Relations and Development: Haohong Wang, TCL Corporation, USA

VP - Institutional Relations and Education Program: Thomas Zheng, Tsinghua University, China

VP - Member Relations and Development: Kenneth Lam, The Hong Kong Polytechnic University, Hong Kong

VP - Publications: Tatsuya Kawahara, Kyoto University, Japan

VP - Technical Activities: Oscar Au, Hong Kong University of Science and Technology, Hong Kong

Members-at-Large:

Waleed H. Abdulla, The University of Auckland, New Zealand

Kiyoharu Aizawa, The University of Tokyo, Japan

Mrityunjoy Chakraborty, Indian Institute of Technology, India

Kosin Chamnongthai, King Mongkut's University of Technology, Thailand

Homer Chen, National Taiwan University, Taiwan

Woon-Seng Gan, Nanyang Technological University, Singapore

Hsueh-Ming Hang, National Chiao-Tung University, Taiwan

Yo-Sung Ho, Gwangju Institute of Science and Technology, Korea

Jiwu Huang, Sun Yat-Sen University, China

Anthony Kuh, University of Hawaii at Manoa, USA

Tomoaki Ohtsuki, Keio University, Japan

Wan-Chi Siu, The Hong Kong Polytechnic University, Hong Kong

Ming-Ting Sun, University of Washington, USA

Headquarters

Address:

Asia Pacific Signal and Information Processing Association,

Centre for Signal Processing,

Department of Electronic and Information Engineering,

The Hong Kong Polytechnic University,

Hung Hom, Kowloon, Hong Kong.

Officers:

Director: Wan-Chi Siu, email: [email protected]

Manager: Kin-Man Lam, Kenneth,

email: [email protected]

Secretary: Ngai-Fong Law, Bonnie,

email: [email protected]

Treasurer: Yuk-Hee Chan, Chris,

email: [email protected]

Are you an APSIPA member?

If not, then register online at

http://www.apsipa.org

APSIPA Newsletter Editorial Board Members

Waleed H. Abdulla (Editor-in-Chief), The University of Auckland, New Zealand
Kenneth Lam (Deputy EiC), The Hong Kong Polytechnic University, Hong Kong

Iman T. Ardekani, Unitec Institute of Technology, New Zealand.

Oscar Au, Hong Kong University of Science and Technology, Hong Kong

Thomas Zheng, Tsinghua University, China

Takeshi Ikenaga, Waseda University, Japan

Yoshinobu Kajikawa, Kansai University, Japan

Hitoshi Kiya, Tokyo Metropolitan University, Japan

Anthony Kuh, University of Hawaii, USA

Haizhou Li, Institute for Infocomm Research, A*STAR, Singapore

Tomoaki Ohtsuki, Keio University, Japan

Yodchanan Wongsawat, Mahidol University, Thailand

Chung-Hsien Wu, National Cheng Kung University, Taiwan
