Adding Japanese language synthesis support to the eSpeak system

Richard Pronk
10121897

Bachelor thesis
Credits: 18 EC
Bachelor Opleiding Kunstmatige Intelligentie
University of Amsterdam
Faculty of Science
Science Park 904
1098 XH Amsterdam

Supervisor
dr. D.J.M. (David) Weenink
Institute of Phonetic Sciences
Faculty of Humanities
University of Amsterdam
Spuistraat 210
1012 VT Amsterdam

June 28th, 2013

Abstract
In this paper we describe an addition to the eSpeak system that is capable of pronouncing the Japanese language. This implementation is intended to be used for the automatic segmentation of Japanese speech from Japanese text. The speech synthesiser that we use is part of the Praat speech analysis program and is based on the eSpeak text-to-speech engine. Because the Japanese writing system is very complex, i.e. it mixes several alphabets with logograms (kanji) and does not use explicit word boundaries, we place some restrictions on the input form. First, we require the user to make explicit where words end, and second, we do not yet support logograms (kanji), because a pronunciation database is needed to implement this feature. We give hints on how these limitations can be overcome.
Contents
1 Introduction
  1.1 The Japanese writing systems
    1.1.1 Rules of the Hepburn romanization system
  1.2 The eSpeak system
  1.3 Praat
2 Literature review
3 Theoretical foundation
  3.1 Phonetic transcription
  3.2 Place of articulation
  3.3 Nasality
  3.4 Voicing
  3.5 Phonetic overview of the Japanese language
    3.5.1 Vowels
    3.5.2 Voiced and semi-voiced sounds
    3.5.3 Devoicing
    3.5.4 Particles
    3.5.5 Palatalised sounds
    3.5.6 Moraic nasal n
    3.5.7 Gemination
4 Implementation within eSpeak
  4.1 Word segmentation
  4.2 Pronunciation rules
    4.2.1 Normalisation to a single writing system
    4.2.2 Text to phoneme translation
    4.2.3 Phoneme definitions
  4.3 Input using Romaji
  4.4 Latin characters for abbreviations
  4.5 Kanji
5 Results and Evaluation
6 Conclusion
7 Future work
  7.1 eSpeak functionality
A How to use this initial implementation
B IPA for Japanese
1 Introduction
We describe an initial implementation of Japanese speech synthesis support for the eSpeak[3] system. This initial implementation will enable future research on Japanese phonetics to be carried out more easily. The implementation described in this paper is aimed at providing assistance during the segmentation of Japanese speech within the speech analysis system Praat[2]. The main focus of this paper is therefore on the correct pronunciation of Japanese characters given the rules of the language, rather than on perfectly natural sounding Japanese. Furthermore, this paper provides an overview of Japanese phonetics and of the issues encountered while implementing the Japanese language within the eSpeak system.
1.1 The Japanese writing systems
The Japanese language uses three writing systems: hiragana (ひらがな), katakana (カタカナ) and kanji (漢字), and to complicate things even further, Latin characters are sometimes used in Japanese text as well. The hiragana and katakana writing systems make up the syllabic alphabets, covering all the possible sounds of the language. These writing systems have corresponding character sets (Table 1).
        a        i        u        e        o
      あ (a)   い (i)   う (u)   え (e)   お (o)
  k   か (ka)  き (ki)  く (ku)  け (ke)  こ (ko)

        a        i        u        e        o
      ア (a)   イ (i)   ウ (u)   エ (e)   オ (o)
  k   カ (ka)  キ (ki)  ク (ku)  ケ (ke)  コ (ko)

Table 1: Example of hiragana chart (top) and katakana chart (bottom)
Each character represents one mora (a mora being one sound unit in the Japanese language). As seen in Table 1, the hiragana and katakana writing systems both have characters for the same sounds. In the Japanese language these writing systems are used in combination with each other: one sentence can consist of hiragana, katakana, kanji and even Latin characters. Kanji are Chinese characters that are widely used within Japanese texts; when a kanji character is not available for a word, hiragana is often used. Hiragana is also combined with kanji characters for declensions and conjugations, and katakana is used to transcribe foreign words and to write loan words.
Because all possible sounds in the Japanese language are covered by the hiragana and katakana writing systems, the pronunciation of kanji (i.e. the Chinese characters) can be written in terms of those characters. The focus of the current implementation is therefore on being able to pronounce these Japanese kana characters, with the exception of kanji, due to its complexity and the lexical dependency of its pronunciation (section 4.5). Another supported input method, however, is romaji (ローマ字), which allows for
Japanese input using purely Latin characters. In this paper the modified Hepburn romanization system is used for the transcription from hiragana and katakana to romaji, and it is also the romanization system supported by the provided implementation. There are more romanization systems available for the Japanese language, but in addition to being frequently used, the modified Hepburn system is also the one most closely adapted to English pronunciation, which makes it the most suitable for the eSpeak system.
The full Hiragana1 and Katakana2 charts with romaji transcription using the modified Hepburn romanization system can be seen in the links provided as footnotes.
1.1.1 Rules of the Hepburn romanization system
In order to properly convert Japanese characters to romaji, a number of rules must be adhered to. These rules have been compiled in the Hepburn romanization system, of which there are many versions; this section discusses the rules that are relevant to this paper. The first rule is that long (double) vowels need to be indicated with a macron or circumflex, since /oo/ is pronounced differently from /o/ (see section 3.5.1). An exception is the vowel 'i', since /ii/ is always pronounced as a long vowel and written as 'ii'; the implemented system, however, also allows /ī/ as input, as this spelling is frequently used for loan words. As seen in Table 2, the vowel combination /ou/ is a special case:
  vowel combination   as single long vowel   as two separate vowels
  aa                  ā                      aa
  ii                  ī or ii                (not possible)
  uu                  ū                      uu
  ee                  ē                      ee
  oo                  ō                      oo
  ou                  ō                      ou

Table 2: Double vowel representation in the modified Hepburn romanization system
The combination 'ou' can be pronounced either as a single long /o/ or as two separate vowels. For example, 東京 (Tokyo) is transcribed from the hiragana とうきょう as /toukyou/; here, however, the 'ou' combination is not pronounced as two separate vowels but as a single long 'o' vowel. Therefore the transcription of とうきょう in the modified Hepburn romanization system should be /tōkyō/. The second rule of the modified Hepburn romanization system is that particles are written as pronounced. For example, the subject marker は is written as /wa/ (as pronounced) instead of /ha/, which would be the standard reading when the character does not have a grammatical function. The same goes for the particle へ, which is pronounced /e/ instead of /he/, and the particle を, which is pronounced /o/ instead of /wo/. The third rule is that the moraic nasal ん is written as n, and as n' before vowels and y, in order to disambiguate it from the syllables of the n-row (the n-row being the sounds /na/, /ni/, /nu/, /ne/ and /no/).
1 http://en.wikipedia.org/wiki/Hiragana#Table_of_hiragana
2 http://en.wikipedia.org/wiki/Katakana#Table_of_katakana
The syllable sequence /no/, for example, can be read in two different ways.
        a        i        u        e        o        n
  n   な (na)  に (ni)  ぬ (nu)  ね (ne)  の (no)  ん (n)

Table 3: The ambiguity of the syllable n
It can be read as one mora or as two separate morae, meaning that /no/ can be read either as の (no) or as ん (n) + お (o). A good example of when this distinction is required is the pair かに, meaning crab, and かんい, meaning simplicity. Without this distinction both words would be written as /kani/; this is the correct romanization for crab, but simplicity should be written as /kan'i/ in order to be pronounced correctly. Double consonants (see section 3.5.7) can be written as expected; however, there is one exception, namely 'ch', which becomes 'tch'. For example, まっちゃ becomes matcha instead of maccha. This is the fourth and final relevant rule of the Hepburn romanization system. So, to summarize, the relevant rules of the Hepburn romanization system are (a small illustrative sketch follows the list):
1. Long vowels are represented with a macron or circumflex
2. Particles are written as pronounced
3. The syllable n is written as n' before vowels and y
4. Gemination with ch becomes tch
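To illustrate how these conventions interact, the following Python sketch rewrites a naive mora-by-mora romanization into modified Hepburn form for rules 1, 3 and 4; rule 2 (particles) needs grammatical context and is left to the user. The function and the "n-" input convention for an explicit mora boundary are hypothetical and not part of the eSpeak implementation.

import re

# Long-vowel spellings in modified Hepburn (rule 1); "ii" is normally kept as "ii".
LONG_VOWELS = {"aa": "ā", "uu": "ū", "ee": "ē", "oo": "ō", "ou": "ō"}

def hepburn(naive: str) -> str:
    """Rewrite a naive mora-by-mora romanization into modified Hepburn form."""
    s = naive
    # Rule 4: gemination before "ch" is written with "t" (maccha -> matcha).
    s = s.replace("cch", "tch")
    # Rule 3: an explicit mora boundary after the moraic nasal (entered here
    # as "n-") becomes an apostrophe before vowels and y (kan-i -> kan'i).
    s = re.sub(r"n-([aiueoy])", r"n'\1", s)
    # Rule 1: double vowels become a macron.
    for seq, macron in LONG_VOWELS.items():
        s = s.replace(seq, macron)
    return s

if __name__ == "__main__":
    print(hepburn("toukyou"))  # tōkyō
    print(hepburn("maccha"))   # matcha
    print(hepburn("kan-i"))    # kan'i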
1.2 The eSpeak system
eSpeak is an open source text-to-speech (TTS) system which mainly uses formant-based synthesis. Currently eSpeak supports over 50 languages; at the start of this project, however, it did not support Japanese. The speech produced by eSpeak is highly configurable, but it is not as natural sounding as that of larger synthesisers which use unit-based synthesis and are based on recordings of human speech. In formant-based synthesis, voiced speech (e.g. vowels and sonorant consonants) is created using formants. Unvoiced consonants (e.g. /s/), on the other hand, are created using pre-recorded sounds, and voiced consonants (e.g. /z/) are created as a mixture of formant-based voiced sound and a pre-recorded unvoiced sound. The eSpeak system uses modular language data files, which are easy-to-understand text files. This way a language can be added or modified without the need to understand the underlying eSpeak source code.
1.3 Praat
Praat is a speech analysis program whose speech synthesiser is based on the eSpeak text-to-speech engine and therefore uses the language files provided by the eSpeak system. The implementation of Japanese pronunciation provided for eSpeak can therefore also be used in Praat. This speech analysis program is also used during the evaluation later on.
2 Literature review
There are two types of articles related to this implementation: articles about formant synthesis and articles about Japanese phonetics. The first article of interest is the paper by Klatt (1980)[6], which describes software for a cascade/parallel formant synthesiser. This paper gives an in-depth view of how a formant synthesis system is built, which is useful for this research since the eSpeak program is based on this type of system. The second paper of interest, by Klatt & Klatt (1990)[7], is a continuation of the previous paper. It provides a view on the analysis and synthesis of different types of voices, with the main focus on the differences between male and female voices, which is helpful for creating more natural sounding synthesis. Although, as stated previously, natural sounding synthesis is not the main focus of this paper, making the implementation sound as natural as possible will contribute to better results during the segmentation process. These articles about formant synthesis are relatively old, because current research focuses mainly on unit-based synthesis. Although unit-based synthesis produces more natural sounding output, it offers less configurability and less theoretical insight into the sounds being produced. Research on speech synthesis that aims at such insight is therefore mainly done with formant synthesis, whereas commercial products tend to use unit-based synthesis because of the higher quality of the speech output.
Among the literature concerning Japanese phonetics is the book by Vance (2008)[9], which provides an in-depth view of Japanese phonological research as well as an introduction to the basics of phonological research itself. Another useful book for this research is by Kawase et al. (1978)[5], which provides theory on the pronunciation of the Japanese language, with its main focus on the mouth movements used during pronunciation. More specific papers on Japanese phonetics include the paper by Shigeto (2012/forthcoming)[8], which focuses on the actual duration of double consonants, and the paper by Bion et al. (2013)[1], which focuses on differences in vowel duration in the Japanese language. The paper by Halpern (n.d.)[4] provides insight into how a phonetic database can help by providing the phonological representation of words, which is essential for natural sounding speech synthesis because of the lexically dependent pronunciation in the Japanese language. That paper demonstrates an implementation of a phonetic database for the Japanese language and its usage. Although the idea of the phonetic database described there can be used, the database itself is not available under the General Public License (GPL), which is a requirement for this project.
The idea of this project is to combine these two types of articles by adding Japanese phonetics to a formant synthesis system, namely by implementing Japanese speech synthesis support in eSpeak.
3 Theoretical foundation
First, a number of phonetic terms and concepts used later in this paper are described. Afterwards, the phonetic aspects of the Japanese language itself are discussed.
3.1 Phonetic transcription
The International Phonetic Alphabet (IPA) provides a phonetic transcription of speech; this alphabet describes the pronunciation of a language rather than forming words within the language. Thanks to this standardised alphabet, a language can be pronounced correctly without knowing the rules of the language. The IPA notation for the Japanese language can be seen in Appendix B.
3.2 Place of articulation
Articulation is the process of physically forming the sounds that will result in the pronunciation of a word. This process uses various body parts which are divided into active articulators and passive articulators. Active articulators are generally identified as the articulators that move during the formation of speech. Examples of active articulators include the tongue and the lower lip. In contrast to this, the passive articulators make little to no movement during this process. Examples of passive articulators include the upper lip, upper teeth and the roof of the mouth. The position of these articulators will define the resulting speech.
3.3 Nasality
Nasality refers to the effect of the velum in the articulation of consonants. An open velum (see Figure 1) allows air to escape through the nasal cavity (the inner nose), whereas a closed velum forces the air to escape only through the oral cavity (the inner mouth). A consonant being 'nasal' therefore means that the velum is open while the consonant is articulated, which allows air to escape through the nose.
3.4 Voicing
Voicing depends on the glottis, which refers to both the vocal folds and the open space between them. A small opening allows the vocal folds to vibrate, which results in a voiced sound. The opening of the glottis can also be wide, in which case air can pass through freely and the vocal folds hardly vibrate; this produces the so-called voiceless sounds. Take the voiceless consonant /s/ as an example: when /s/ is pronounced you cannot feel any vibration in the vocal folds, whereas the voiced consonant /z/ produces vibration that can be felt. Furthermore, the glottis can also be closed, in which case no air can pass at all. The sound produced by obstructing the airflow in this way is called a glottal stop.
3.5 Phonetic overview of the Japanese language
3.5.1 Vowels
Articulators alter the vocal tract resonances, which results in the formation of vowel sounds. Peaks in the spectra of vowel sounds are called formants. These formants are very useful, for example, to distinguish between individual vowel sounds. Distinguishing vowel sounds can be done by comparing the formants, for which the first two formants tend to be sufficient. This is also used in the implementation within eSpeak, which will be discussed later on (see section 4.2.3 on phoneme definitions).
Another important aspect of vowels (especially in the Japanese language) is length, where the meaning of a word can depend on the length of a vowel. Take for instance the word ゆき, which is read as /yuki/ and means snow, and ゆうき, read as /yu:ki/, which means courage. Here the /u/ (transcribed as u:) is a long vowel, and the meaning of the word changes because of the vowel length. Furthermore, there is another thing to consider when talking about long vowels, namely how double vowels (e.g. aa, ii, uu, ee, oo) should be pronounced. These double vowels can be read in two different ways: as two separate vowels or as a single long vowel. The difference between the pronunciation of a single long vowel and two short vowels can be clearly seen in words like /satooya/ and /sato:ya/. In Figure 2 the pronunciation difference is clearly visible: /satooya/ is clearly pronounced with two separate vowels, which can be seen from the drop between the vowels (i.e. where the arrows point). Although a good estimate of the pronunciation of double vowels could be made (e.g. by checking the kanji word boundaries within a word), this would require a lexical analysis system. Unfortunately such a system is not yet available for this project, and automatically resolving vowel combinations is therefore out of the scope of this paper. Instead we force the user to make the input unambiguous with regard to double vowels by using the so-called prolonged sound mark (ー), which is already the standard way to explicitly indicate long vowels in Japanese text.
Figure 2: Long and short Vowel distinction (taken from: Vance(2009))
3.5.2 Voiced and semi-voiced sounds
As described previously (section 3.4), a consonant is voiced when the vocal folds vibrate during pronunciation, whereas for a voiceless consonant the vocal folds do not vibrate. In the Japanese writing system, whether a character is voiced is marked in the top right corner of the character with a so-called dakuten (゛). For example, in the character さ (sa) the initial consonant is voiceless, but if we add the dakuten to this character, making it ざ (za), the initial consonant is voiced. Adding a dakuten is possible for the characters of the k-row (which becomes the g-row), the s-row (becomes the z-row), the t-row (becomes the d-row) and the h-row (becomes the b-row).
It is also possible to make characters semi-voiced by adding a so-called handakuten (゜) to the character. If we take, for example, は (ha) and add a handakuten to it, it becomes ぱ (pa), whereas adding the dakuten would have produced ば (ba). The addition of a handakuten is only possible for characters of the h-row, which then become the p-row; these p-sounds are traditionally called the semi-voiced sounds.
3.5.3 Devoicing
We have already seen the difference between voiced and voiceless consonants (e.g. /z/ and /s/); it is, however, also possible to have devoiced vowels. Although no actual voiced sound is produced, the mouth still moves towards the position of the vowel. In the Japanese language there are a couple of rules for when devoicing takes place; for example, when the vowel 'i' or 'u' occurs between voiceless phonemes, the vowel is devoiced. Take for example the word すきやき /sukiyaki/: here the vowel
'u' is between the voiceless consonant 's' and the voiceless consonant 'k', which causes the vowel 'u' to devoice. The word is therefore phonetically transcribed as /su0kiyaki/, where u0 denotes the devoiced 'u'. There is another rule: devoicing is also highly probable when the vowel 'u' or 'i' is preceded by a voiceless consonant and immediately followed by a pause. However, since this is not always the case, a lexical analysis system would be needed to verify when it holds. Vance(1990) also states the following about the devoicing of vowels:
”When the /su/ is the last syllable of a polite nonpast verb form or the polite nonpast copula /desu/ and immediately followed by a pause, devoicing is quite consistent for most Tokyo speakers.” Vance(1990)
This sounds as if a lexical analysis system is needed in order to find the polite non-past verbs; however, this is not the case, as will be shown in the implementation section, because there is a small rule that can find the polite non-past verbs without a lexical analysis system. Vance(1990) also gives an additional exception:
”Vowel devoicing also interacts with intonation in an obvious way. If the last syllable in a sentence contains a short high vowel preceded by a voiceless consonant but has to carry the intonation for a question, the vowel doesn’t devoice.” Vance(1990)
This means that when a question is indicated with a question mark (i.e. there is a rising pitch), the last syllable is no longer devoiced; the need for an audible rise overrides the devoicing rule.
Furthermore, when the vowel 'u' is devoiced and the preceding consonant is 's', the duration of the devoiced 'u' is taken over by the 's', which then lasts two morae. Therefore /desu/ is pronounced as /dess/ and phonetically transcribed as /desu0/ (u0 standing for a devoiced vowel 'u').
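The devoicing conditions described in this section can be sketched as a simple check over a phoneme string. The following Python fragment only illustrates the logic; the voiceless-consonant set, the list representation and the helper name are assumptions, not the eSpeak rules themselves.

# Sketch of the devoicing conditions of section 3.5.3.
VOICELESS = {"k", "s", "sh", "t", "ch", "ts", "h", "f", "p"}

def devoice(phonemes):
    """Return a copy in which /i/ and /u/ are marked as devoiced (i0/u0)."""
    out = list(phonemes)
    word = "".join(phonemes)
    for i, ph in enumerate(phonemes):
        if ph not in ("i", "u"):
            continue
        prev_voiceless = i > 0 and phonemes[i - 1] in VOICELESS
        next_voiceless = i + 1 < len(phonemes) and phonemes[i + 1] in VOICELESS
        if prev_voiceless and next_voiceless:
            out[i] = ph + "0"            # e.g. sukiyaki -> su0kiyaki
        elif ph == "u" and i == len(phonemes) - 1 and word.endswith(("masu", "desu")):
            out[i] = "u0"                # polite non-past -masu and desu before a pause
    return out

print(devoice(["s", "u", "k", "i", "y", "a", "k", "i"]))  # ['s', 'u0', 'k', 'i', 'y', 'a', 'k', 'i']
print(devoice(["d", "e", "s", "u"]))                      # ['d', 'e', 's', 'u0']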
3.5.4 Particles
When a character has a grammatical function (it is a so-called particle), its pronunciation can change from its original reading. Take for instance the Japanese sentence これは日本語です (which translates to 'this is Japanese'): the character は is normally pronounced as /ha/, but since it here has the grammatical function of a particle (i.e. it marks the subject of the sentence) it is pronounced as /wa/ (otherwise written as わ). The same goes for the particle へ, which is pronounced as /e/ instead of /he/, and the particle を, which is pronounced as /o/ instead of /wo/.
3.5.5 Palatalised sounds
The palatalised sounds (as shown in the full hiragana and katakana charts) use a consonant-semivowel-vowel syllable structure, where the semivowel is a palatal approximant (written as 'y' but phonetically transcribed as /j/). The semivowel-vowel part can consist of three different characters: や (ya), ゆ (yu) and
よ (yo); during the articulation of the consonant, the tongue is raised toward the hard palate and the alveolar ridge.
The pronunciation of palatalised sounds works as follows: き (ki) + ゃ (ya) → きゃ (kya); note that this is not pronounced as /kiya/ but as /kya/. Furthermore, notice that in palatalised sounds like these the ゃ (ya) character is written smaller than the normal や (ya) character (and the same goes for the ゅ (yu) and ょ (yo) characters).
3.5.6 Moraic nasal n
The articulation of the moraic nasal n depends on which syllable follows: its place of articulation changes with the following sound. This gives the following articulation rules for the moraic nasal (as taken from Wikipedia3; a small sketch of this mapping follows the list):
1. uvular [ɴ] at the end of utterances and in isolation
2. bilabial [m] before [p], [b] and [m]
3. dental [n] before the coronals /d/, /t/ and /n/
4. velar [ŋ] before [k] and [g]
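A minimal sketch of this place assimilation, assuming a simple phoneme-string representation; the function and its fallback behaviour before other sounds are assumptions made for illustration.

from typing import Optional

def realise_moraic_n(next_phoneme: Optional[str]) -> str:
    """Map the moraic nasal to a surface realisation, following the four rules above."""
    if next_phoneme is None:              # utterance-final or in isolation
        return "ɴ"
    if next_phoneme in ("p", "b", "m"):   # bilabial context
        return "m"
    if next_phoneme in ("t", "d", "n"):   # coronal context
        return "n"
    if next_phoneme in ("k", "g"):        # velar context
        return "ŋ"
    return "ɴ"                            # elsewhere: keep the default (a simplification)

print(realise_moraic_n("k"))    # ŋ, as in ringo
print(realise_moraic_n(None))   # ɴ, as in nihon said in isolation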
3.5.7 Gemination
In the Japanese language a double consonant is indicated by a so-called sokuon, which is written as a small tsu character and is available both in the hiragana (っ) and the katakana (ッ) writing systems; the sokuon copies the first consonant of the following syllable. See for example the word けっか (kekka), meaning result: the sokuon in this word precedes the syllable か (ka), which starts with the consonant 'k'. The sokuon therefore also becomes the letter 'k', producing the double consonant. There is, however, an exception in which the sokuon does not copy the first following letter, namely with 'ch', which becomes 'tch'; for example, まっちゃ becomes matcha instead of maccha.
The sokuon used for gemination can also occur at the end of a sentence, in which case it indicates a glottal stop.
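The behaviour of the sokuon can be summarised as a small string rewrite: copy the first consonant of the following mora, use 't' before 'ch', and produce a glottal stop at the end of a word. The Python fragment below is only an illustration; the "Q" placeholder for the sokuon and the function name are assumptions, not notation used by eSpeak.

# Illustrative sokuon handling on a romaji-level string where the
# sokuon is written as the placeholder "Q" (a hypothetical convention).
def resolve_sokuon(word: str) -> str:
    out = []
    for i, ch in enumerate(word):
        if ch != "Q":
            out.append(ch)
        elif i == len(word) - 1:
            out.append("ʔ")                    # word-final sokuon: glottal stop
        elif word[i + 1 : i + 3] == "ch":
            out.append("t")                    # exception: Qcha -> tcha (matcha)
        else:
            out.append(word[i + 1])            # copy the following consonant
    return "".join(out)

print(resolve_sokuon("keQka"))   # kekka  ("result")
print(resolve_sokuon("maQcha"))  # matcha ("green tea")
print(resolve_sokuon("aQ"))      # aʔ     (glottal stop)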
4 Implementation within eSpeak
4.1 Word segmentation
The current implementation within eSpeak requires the user to segment the input sentence into words themselves, because no lexical analysis system is available (see section 7, Future work). All words, and also particles, need to be separated by a space; a sentence therefore has to be entered with explicit spaces between its words and particles (see Appendix A for an example).
3http://en.wikipedia.org/wiki/Japanese_phonology#The_moraic_nasal_.2F.C9.B4.2F
4.2 Pronunciation rules
The eSpeak system uses two kinds of text files to implement the pronunciation rules. The first is the *_rules file (in this case ja_rules, since we are working with the Japanese language), which contains the actual pronunciation rules; the second is the *_list file (ja_list in our case), which contains a lookup dictionary. The following rules are implemented in the ja_rules file unless stated otherwise. In order to correctly pronounce Japanese text, three main steps need to be taken (the first step does not apply to romaji input):
1. Normalising to a single writing system
2. Text to phoneme translation
3. Describing Japanese phonemes
4.2.1 Normalisation to a single writing system
Every sound in the Japanese language can be written in terms of hiragana characters. The first step is therefore to normalise the input sentence to a single writing system (in this case hiragana); for katakana and half-width katakana this can be done with a straightforward replacement rule. Such a replacement rule is possible because there is a one-to-one conversion between the hiragana and katakana writing systems. The replace function within eSpeak works as follows:
.replace a b
where 'a' will be replaced with 'b'. Each line specifies one or two characters to be replaced by another one or two characters. This substitution is done before the text-to-phoneme translation. The katakana characters are therefore placed on the 'a' side and the corresponding hiragana characters on the 'b' side:
.replace
        ア   あ
        イ   い
        . . .
Palatalised sounds (e.g. キャ (kya)) can be divided into two separate characters, the キ (ki) character and the small ャ (ya) character (キ (ki) → き (ki) and ャ (ya) → ゃ (ya), therefore キャ (kya) → きゃ (kya)); the palatalisation rules of the text-to-phoneme conversion still apply, as this rule is the same for hiragana and katakana.
Half-width katakana (katakana written in half-width form), however, requires one small point of attention: the ｶﾞ (ga) character consists of two parts, ｶ (ka) and the voicing mark ﾞ. The replacement function converts the first match it finds; therefore, if the rule for ｶ (ka) is placed before the rule for ｶﾞ (ga), the replace function will convert the half-width katakana 'ga' to the hiragana character 'ka' and leave the voicing mark unparsed.
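Outside eSpeak, the same katakana-to-hiragana normalisation can be sketched in a few lines, because the two scripts occupy parallel Unicode blocks at a fixed offset. The Python fragment below only illustrates the idea behind the .replace rules; half-width katakana, as discussed above, would need a separate mapping table.

# Illustrative katakana -> hiragana normalisation using the fixed Unicode
# offset (0x60) between the two blocks; not the eSpeak .replace mechanism itself.
KATAKANA_START, KATAKANA_END = 0x30A1, 0x30F6   # ァ .. ヶ
OFFSET = 0x60                                    # distance to ぁ .. ゖ

def to_hiragana(text: str) -> str:
    return "".join(
        chr(ord(ch) - OFFSET) if KATAKANA_START <= ord(ch) <= KATAKANA_END else ch
        for ch in text
    )

print(to_hiragana("トウキョウ"))  # とうきょう
print(to_hiragana("キャ"))        # きゃ (small ャ maps to small ゃ)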
4.2.2 Text to phoneme translation
The text-to-phoneme translation contains the actual parsing of the pronunciation rules. These rules are given in groups, one for each letter or character4; the eSpeak system uses them to find the best fit for each character of the input sentence. Every rule is given on a separate line and has the following syntax:

[<pre>)] <match> [(<post>] <phoneme string>

This is best explained with a small example:
.group あ
        あ        a
        あー      a:
The first line creates the group for the character あ (a); if the eSpeak system now encounters the character あ, it will be handled by this group. Now that the group is known, pronunciation rules are needed that state what to do when this group is encountered; in this case we want あ → 'a' and あー → 'a:'. The <match> section of the first rule is therefore the hiragana character あ, and this is converted to the phoneme 'a' (i.e. the <phoneme string> in this case is simply 'a'). However, if a long vowel is entered (あー), this should be translated into 'a:'. To achieve this, a second rule is added to the group: when あー is found in the input sentence this is a better (longer) match, and therefore あー (<match>) → 'a:' (<phoneme string>).
The support for palatalised sounds is done in a similar way: the group of the first character is used (in this case き), and to the rules of this group we add a match for the palatalised sound with the corresponding phoneme string (in this case きゃ (<match>) → 'kya' (<phoneme string>)).
.group き
        き        ki
        きー      ki:
        きゃ      kya
        きゃー    kya:
        . . .
Gemination (see section 3.5.7) is indicated by the small tsu character (in hiragana っ), called the sokuon, which on its own does not have a reading. A check is therefore needed on which syllable follows, and this can be done with the (<post> section of a rule. This section can express rules such as: if the next character starts with the consonant 'k', change the sokuon into that consonant. This is exactly what has been done in this implementation; however, the characters starting with the same consonant have been grouped together to make the rules more efficient and more readable. This grouping is done as follows:
.L01 か き く け こ . . .    // starts with k
.L02 が ぎ ぐ げ ご . . .    // starts with g
.L03 さ し す せ そ . . .    // starts with s
. . .
where L01, L02, etc. each define a group of letter sequences (in this case the hiragana characters starting with the same consonant). The text-to-phoneme translation for the sokuon therefore becomes:
.group っ
        っ (L01      k
        っ (L02      g
        っ (L03      s
        . . .
        っ (_        ?
Notice that the rules now have a '(<post>' section, meaning that one of the characters in the specified group must follow the sokuon for the rule to apply; this translates the sokuon into the consonant specified in the <phoneme string>. Another rule in the sokuon pronunciation is that a sokuon at the end of a word is pronounced as a glottal stop, indicated with a question mark. The '(<post>' section of that rule, '(_', means that the next symbol is a pause or a hyphen, which in this case means that the sokuon is at the end of a word.
The rule implemented to devoice the last vowel of polite non-past verbs is based on the fact that those verbs always end in masu (with the exception of the word desu). Our rule therefore simply puts the whole word 'masu' in the <match> section, while requiring that the last syllable (u) is at the end of the word (the '_' condition in the '(<post>' section). In this way all polite non-past verbs are found and correctly devoiced; the exceptional case 'desu' is handled in the same manner, with the addition of a '_' condition in the '<pre>)' section so that the whole word is matched.
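To make the rule format more concrete, the following Python sketch emulates the "longest match wins" behaviour of such groups on a tiny rule table; the table entries mirror the examples above, and this is not eSpeak's actual matching engine, which also evaluates the pre and post conditions.

# Hypothetical illustration of longest-match rule selection for ja_rules-style groups.
RULES = {
    "あ": "a", "あー": "a:",
    "き": "ki", "きー": "ki:",
    "きゃ": "kya", "きゃー": "kya:",
}

def to_phonemes(word):
    phonemes, i = [], 0
    while i < len(word):
        # try the longest candidate first (at most 3 characters in this table)
        for length in (3, 2, 1):
            chunk = word[i:i + length]
            if chunk in RULES:
                phonemes.append(RULES[chunk])
                i += len(chunk)
                break
        else:
            i += 1          # no rule for this character: skip it
    return phonemes

print(to_phonemes("きゃー"))  # ['kya:']
print(to_phonemes("あき"))    # ['a', 'ki']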
4.2.3 Phoneme definitions
Defining the correct Japanese phonemes must be done in the phoneme file dedicated to the Japanese language (i.e. the ph_japanese file). In this file a phoneme table for the Japanese language must be given; if a phoneme is not defined in this phoneme table but is used in the rules file (i.e. ja_rules), the eSpeak system will use its base phoneme. In these phoneme definitions, attributes such as the correct IPA notation, the place of articulation and a reference to the corresponding 'sound files' for that specific phoneme can be given. For a detailed explanation of the rules and possibilities within these definitions, please refer to the documentation on the eSpeak page5. It is, for example, also possible to import phonemes from other files by using the import_phoneme statement, which copies a previously defined phoneme from a specified phoneme table. The phoneme table contains a list of phoneme definitions, which have the following structure:
phoneme u
  . . . NOT thisPh(isWordEnd) AND thisPh(notWordStart) THEN
    ChangePhoneme(u0)
  ENDIF
  vowel starttype #u endtype #u
  length 83
  FMT(vowel/uu_bck)
endphoneme

5 for more information see http://espeak.sourceforge.net/phontab.html
Here the phoneme u is defined for the Japanese language; in this phoneme description the correct IPA notation as well as the correct length and the 'sound' for that phoneme are given. The referenced sound file has been chosen to match the formants of the Japanese vowel u and is available in the eSpeak sound database. The starttype and endtype statements allocate the phoneme to groups, so that conditions can be tested on groups of phonemes. As can be seen, it is also possible to add if-statements to a phoneme description: in this case, if the phoneme u is between voiceless phonemes and is neither at the beginning nor at the end of a word, it is changed to the u0 phoneme, because the ChangePhoneme(u0) statement changes the current phoneme to u0, which is a devoiced vowel u:
phoneme u0
endphoneme
Here it can be seen that this phoneme has no sound defined, which is what makes the vowel devoiced. However, if the preceding phoneme is /s/, the sound of the /s/ is used for the duration of the devoiced u, following the rule that the syllable 's' takes over the devoiced 'u' when the preceding phoneme is /s/.
The ChangePhoneme function is also used to solve the problem of the moraic nasal n described earlier in this paper. For this, the phoneme definitions for the different articulations of the moraic nasal (where the place of articulation differs) have been imported from the ph_consonant file. Rules have then been added of the form:
IF nextPhW(p) OR nextPhW(b) OR nextPhW(m) THEN
  ChangePhoneme(m)
ENDIF
so that the correct phoneme definition is used for each type of articulation of the moraic nasal ん (n).
A simpler phoneme description is the one for the long vowel, which (compared with the previously shown phoneme u) only differs in length:
phoneme u:
  ipa ɯː
  vowel starttype #u endtype #u
  length 153
  FMT(vowel/uu_bck)
endphoneme
It is also possible to simply call another phoneme definition; this is done, for example, for the y phoneme, which is phonetically transcribed as /j/:
phoneme y
  ipa j
  CALL base/j
endphoneme
4.3 Input using Romaji
Because the modified Hepburn romaji writing system is already adapted to English pronunciation, only straightforward rules have to be implemented. For example, long vowels, which are entered as ā, ī, ū, ē and ō, get groups like the following:
.group a
        a        a

.group ā
        ā        a:
One thing to take into consideration, however, is that phonemes written with two (or more) letters need to be denoted explicitly in the rules. Take for instance the phoneme ch: when there are only rules for 'c' and 'h', the eSpeak system will use the two separate phonemes c and h instead of the desired phoneme ch. The fix is to add the ch combination to the c group:
.group c
        c        c
        ch       ch
The moraic nasal n is another example which needs extra care. As explained previously (see the section on the Hepburn romanization system), the character ん needs to be disambiguated from the characters of the n-row. The rules of the modified Hepburn romanization state that only when the n is followed by a vowel or y should the character ん be written as n'. This is because when the letter n is followed by a consonant it must represent the character ん, since characters of the n-row are always followed by a vowel; the same holds when the n is at the end of a word. This gives us the following rules to disambiguate the character ん from the n-row:
.group n
        n          n
        n (C       N
        n (_       N
        n'         N
where C stands for an arbitrary consonant, N is the phoneme for the character ん, and (_ means that the n is at the end of a word.
4.4 Latin characters for abbreviations
In Japanese, capitalised Latin characters are used for abbreviations. Frequently used examples are JR and NHK; these abbreviations in Latin characters are used alongside the Japanese characters. The capitalised Latin characters are supposed to be pronounced as isolated English letters; however, the pronunciation is adapted so that it can be produced with the available Japanese sounds. For example, JR is pronounced as /ジェイアール/, which is /jeia:ru/. Therefore rules of the following form are required:
.group J
J jei
.group R
R a:ru
However, the current version of eSpeak automatically decapitalises the input, and because we also allow uncapitalised Latin characters (for the modified Hepburn romanization) there are already groups of the following form:
.group j
j j
.group r
r r
Because of this (and the resulting normalisation to uncapitalised characters), the current implementation pronounces JR as /jr/ instead of /jeia:ru/. If it becomes possible at a later point to turn off the automatic decapitalisation for the Japanese language, the groups above will be sufficient.
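If decapitalisation could be turned off, spelling out abbreviations would amount to a simple per-letter lookup. The sketch below is hypothetical and only contains the two letter readings given above; a full table covering A to Z would be needed in practice.

# Hypothetical sketch of spelling out capitalised abbreviations with
# Japanese-adapted letter readings (only the readings given in the text are shown).
LETTER_READINGS = {"J": "jei", "R": "a:ru"}

def spell_abbreviation(abbr: str) -> str:
    return "".join(LETTER_READINGS.get(ch, ch) for ch in abbr)

print(spell_abbreviation("JR"))   # jeia:ru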
4.5 Kanji
As previously stated, the current implementation does not support kanji. There is, however, a way in eSpeak to normalise kanji into hiragana, namely by using the lookup dictionary file (ja_list). The problem with this is that all possible kanji combinations and readings would need to be inserted into this file, which is not an optimal solution. To illustrate how this would work, a single kanji compound with its hiragana reading has been added to the lookup dictionary file:
$textmode
The '$textmode' flag indicates that a text-to-text conversion is wanted instead of the (default) text-to-phoneme conversion. When this kanji compound is given as input, it is normalised to hiragana and then parsed by the system like any other hiragana input.
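The idea behind such a $textmode lookup can be sketched as a dictionary pass that runs before the kana rules. The entry below is illustrative (it is not the entry that was added to ja_list), and a real system would need a full pronunciation dictionary.

# Hypothetical sketch of a text-to-text lookup: replace known kanji words
# by their hiragana reading before the regular kana rules apply.
KANJI_READINGS = {
    "日本語": "にほんご",   # single illustrative entry
}

def normalise_kanji(words):
    return [KANJI_READINGS.get(w, w) for w in words]

print(normalise_kanji(["これ", "は", "日本語", "です"]))
# ['これ', 'は', 'にほんご', 'です']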
5 Results and Evaluation
The best way to get an idea of the actual results of this implementation is simply to listen to the output of the speech synthesiser, which was also the main evaluation method after each new addition to the system. However, the parsing of Japanese text was also a large part of this project. Here our main concern was that all syllables should be pronounced correctly, i.e. have the correct translation from Japanese text to phoneme transcription. We therefore tested with a focus on the gemination of syllables, the moraic nasal n and the correct IPA notation for a given input sentence (which indicates that the right phonemes are used), as well as the normalisation from katakana and half-width katakana to hiragana, for which all conversions to hiragana are now done correctly. For this, random words focusing on the issues stated above were entered; within the rules of Japanese character pronunciation that the system knows, it transcribed the words correctly. At this point, however, not all pronunciation rules of the language could be implemented, mainly because the system does not yet have a lexical analysis system.
The actual use of this implementation will be during the automatic segmentation process within Praat. Although this Japanese language support was not yet available in Praat at the time of this evaluation, it will be available once this implementation becomes part of eSpeak. A manually segmented output of the system is given; when comparing this analysis with my own pronunciation of the Japanese text, many specific similarities can be found, which gives high hopes for the dynamic time warping algorithm used during the segmentation process.
6 Conclusion
The provided implementation is capable of pronouncing hiragana, katakana and romaji using the Hepburn romanization system. However, the implementation still places restrictions on the user which are unnatural, for example the need to write Tokyo with an explicit prolonged sound mark and explicit word segmentation instead of its normal spelling. The main goal of this paper, however, was to provide assistance during the segmentation process in Praat, where such controlled input is likely to be acceptable. Moreover, this implementation already provides a good starting point for further research into Japanese speech synthesis support within eSpeak, where it can serve as a building block.
7 Future work
The current implementation focuses on correctly pronouncing Japanese characters and on using this during the segmentation process in Praat. However, the implementation comes with various restrictions which force the user to alter the input in order to get the correct pronunciation. This is of course not an optimal solution, and in order to overcome these restrictions some additional processing of the input is needed. There are two main components that could help overcome these restrictions and allow more natural input, so that any Japanese sentence (including kanji) can be processed by the system. The first component is a lexical analysis system which can parse kanji and is capable of providing a grammatical analysis (i.e. showing which characters are particles). Such a system should also be able to normalise the kanji to the hiragana writing system (or even romaji), which can then be parsed by the implementation already present in eSpeak. Various lexical analysis systems are already available for the Japanese language; what is needed, however, is a system that works offline and is compatible with the General Public License (GPL).
Furthermore, the Japanese language uses a pitch accent, which is not yet implemented. For this, a phonetic database could be used, like the database described by Halpern (n.d.), where a so-called binary pitch is used and the pitch is represented with a two-pitch-level model. In this model a pitch can be either high (H) or low (L), and each mora (of a specific word) has its own pitch level. This is needed because of the presence of words that only differ in accent, but also to make the system sound more natural.
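As an illustration of such a two-level representation, the sketch below stores an H or L level per mora for a well-known minimal pair; the data structure is an assumption, not the format of the database described by Halpern (n.d.).

# Sketch of a two-pitch-level (H/L) lexicon entry per mora.
PITCH_ACCENT = {
    "はし (bridge)":     [("は", "L"), ("し", "H")],
    "はし (chopsticks)": [("は", "H"), ("し", "L")],
}

for word, morae in PITCH_ACCENT.items():
    pattern = "".join(level for _, level in morae)
    print(word, pattern)   # はし (bridge) LH / はし (chopsticks) HL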
With these additional systems the input sentence can be normalised to hiragana characters, which the current implementation is already capable of pronouncing. The current restrictions (e.g. concerning particles) can then be lifted, making the input more natural.
7.1 eSpeak functionality
Some additional functionality needs to be implemented within the eSpeak system itself in order to correctly pronounce Japanese text. In the rules file (i.e. ja_rules) it must become possible to check for a question mark, so that the exception in which the rise in pitch overrides the devoicing
process can be implemented. Also, because the input text is automatically decapitalised, it is not yet possible to support both romaji and capitalised Latin characters for abbreviations: the capitalised Latin characters are decapitalised automatically and parsed as romaji syllables instead of being pronounced as abbreviations.
References
[1] Ricardo A. H. Bion, Kouki Miyazawa, Hideaki Kikuchi, and Reiko Mazuka. Learning phonemic vowel length from naturalistic recordings of Japanese infant-directed speech. PLoS ONE, 8(2):e51594, February 2013.
[2] Paul Boersma and David Weenink. Praat: doing phonetics by computer, 2013.
[3] Jonathan Duddington. eSpeak text to speech, 2013.
[4] Jack Halpern. The role of phonetics and phonetic databases in japanese speech technology.
[5] I. Kawase, M. Sugihara, and Kokusai Koryu Kikin. Nihongo, the pronunci- ation of Japanese. Japan Foundation, 1978.
[6] Dennis H. Klatt. Software for a cascade/parallel formant synthesizer. Journal of the Acoustical Society of America, 67:971–995, 1980.
[7] Dennis H. Klatt and Laura C. Klatt. Analysis, synthesis, and perception of voice quality variations among female and male talkers. Journal of the Acoustical Society of America, 87:820–857, 1990.
[8] Kawahara Shigeto. The phonetics of obstruent geminates, sokuon. 2012/forthcoming.
[9] T.J. Vance. The Sounds of Japanese with Audio CD. Cambridge University Press, 2008.
A How to use this initial implementation
Currently, the implementation allows input in hiragana and katakana characters, as well as input using the modified Hepburn romanization system. However, because no lexical analysis is available at this point, some ambiguity in pronunciation occurs. In order to overcome these ambiguities, we force the user to make the input unambiguous with regard to pronunciation:
1. The system requires the user to add word segmentation (section 4.1)

2. Particles need to be written as pronounced (section 3.5.4)

3. Long vowels need to be written with the prolonged sound mark (section 3.5.1)
Therefore a sentence like これは東京です ('this is Tokyo') could be entered as これ は とーきょー です, where there is no kanji, the user provides the word segmentation, and long vowels are explicitly written with the prolonged sound mark. It is, however, also possible to input the sentence using the modified Hepburn romanization system: /kore wa tōkyō desu/, which, apart from the rules of the modified Hepburn romanization system, has no additional restrictions.
B IPA for Japanese

IPA for Japanese as found on Wikipedia6
IPA     Japanese example          English approximation
b       basho                     bog
ç       hito                      hue
ɕ       shita, shugo              sheep
d       domo                      dome
dz, z   zutto                     rods, zen
dʑ, ʑ   jibun, goju               jeep, garagist
ɸ       fugu                      who
g       gakusei                   gape
h       hon                       hone
j       yakusha                   yak
k       kuru                      skate
m       mikan                     much
n       natto                     not
ɴ       nihon                     long
ŋ       ringo, rinku              finger, pink
p       pan                       span
ɾ       roku                      close to /t/ in auto in American English
s       suru                      sue
t       taberu                    stop
ts      tsunami                   cats
tɕ      chikai, kincho            itchy
w       wasabi                    was
ʔ       (in Ryukyu languages)     uh-oh!
a       aru                       roughly like father
e       eki                       roughly like met
i       iru                       need
i̥       yoshi, shita              (almost silent)
o       oniisan                   roughly like sore
ɯ       unagi                     roughly like foot
ɯ̥       desu, sukiyaki            (almost silent)