Pub. online: 1 Jan 2018. Type: Research Article. Open Access.
Journal:Informatica
Volume 29, Issue 3 (2018), pp. 487–498
Abstract
The problem of designing a speech corpus for human-computer interfaces that operate in speech recognition and synthesis mode is investigated. The specific requirements that speech recognizers and synthesizers place on such a corpus are emphasized. It is argued that, to satisfy these requirements, the corpus has to consist of two parts: one part serving the needs of Lithuanian text-to-speech synthesizers, and the other the needs of Lithuanian speech recognition engines. It was determined that the part intended for speech recognition engines has to be able to represent the specificity of the language through different sets of phonemes. Based on these results, the two-part speech corpus Liepa was developed. This corpus opens possibilities for the cost-effective and flexible development of human-computer interfaces working in speech recognition and synthesis mode.
Journal:Informatica
Volume 25, Issue 1 (2014), pp. 55–72
Abstract
A modelling framework for Lithuanian vowel and semivowel phonemes is proposed. Within this framework, the phoneme signal is described as the output of a linear multiple-input single-output (MISO) system. The MISO system is a parallel connection of single-input single-output (SISO) systems whose input impulse amplitudes vary in time. Two synthesis methods are proposed within the framework: harmonic and formant. The sounds obtained by the harmonic synthesis method are compared with those obtained by the formant method. Applying the framework to the synthesis of all Lithuanian vowels and semivowels gives naturally sounding results.
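A minimal sketch of the branch structure behind this framework, with assumed formant frequencies, bandwidths and a 120 Hz fundamental (none of these values come from the paper): the phoneme signal is formed as the sum of parallel second-order SISO resonators, all driven by the same quasi-periodic impulse train whose impulse amplitudes vary in time.

```python
# Sketch only: parallel SISO resonators summed into a MISO output.
# All numeric values (formants, bandwidths, pitch) are illustrative assumptions.
import numpy as np

def impulse_train(n_samples, pitch_period, amplitudes):
    """Quasi-periodic excitation: one impulse per pitch period, amplitude varying in time."""
    x = np.zeros(n_samples)
    positions = np.arange(0, n_samples, pitch_period)
    x[positions] = amplitudes[:len(positions)]
    return x

def siso_resonator(x, formant_hz, bandwidth_hz, fs, ir_len=2048):
    """One SISO branch: a second-order resonance realized as a damped-sine impulse response."""
    n = np.arange(ir_len)
    h = np.exp(-np.pi * bandwidth_hz / fs * n) * np.sin(2 * np.pi * formant_hz / fs * n)
    return np.convolve(x, h)[:len(x)]

fs = 48_000                              # sampling rate
pitch_period = fs // 120                 # ~120 Hz fundamental (assumed)
n = fs // 2                              # 0.5 s of signal
amps = np.linspace(1.0, 0.6, n // pitch_period + 1)   # slowly varying impulse amplitudes

excitation = impulse_train(n, pitch_period, amps)
formants = [(660, 80), (1200, 100), (2500, 150)]      # rough /a/-like formants (assumed)
# Parallel connection: the MISO output is the sum of the SISO branch outputs.
phoneme = sum(siso_resonator(excitation, f, bw, fs) for f, bw in formants)
```

The harmonic variant of the framework would build the branches from harmonics instead; the sketch above only illustrates the formant-style parallel SISO structure with time-varying excitation amplitudes.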
Journal:Informatica
Volume 22, Issue 3 (2011), pp. 411–434
Abstract
The goal of the paper is to develop a method for modelling Lithuanian diphthongs. A formant-based synthesizer is used for this modelling, with a second-order quasipolynomial chosen as the formant model in the time domain. The general diphthong model is a multi-input single-output (MISO) system that consists of two parts: the first corresponds to the first vowel of the diphthong and the second to the other vowel. The system is excited by semi-periodic impulses with a smooth transition from one vowel to the other. We derived the parametric input-output equations for quasipolynomial formants, defined the new notion of the convoluted basic signal matrix, and derived parametric minimization functional formulas for the convoluted output data. A new formant parameter estimation algorithm for convoluted data, based on the Levenberg–Marquardt approach, was derived and presented in stepwise form. The Lithuanian diphthong /ai/ was selected as an example and recorded with the following parameters: PCM, 48 kHz, 16 bit, stereo. Two characteristic pitch periods of the vowels /a/ and /i/ were chosen, and equidistant samples of these periods were used to estimate the parameters of the MISO formant models of the vowels. The transition from the vowel /a/ to the vowel /i/ was achieved by changing the excitation impulse amplitudes according to an arctangent law. The method was audio tested, and the Fourier transforms of the real data and of the MISO model output were compared. It was impossible to distinguish the real diphthong from the simulated one; only the magnitude and phase responses showed small differences.
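A minimal sketch of the arctangent-law cross-fade described above, with assumed values for the number of pitch periods, the transition centre and the steepness (the paper does not give these): the /a/ branch of the diphthong model fades out while the /i/ branch fades in, both driven by impulses at the same positions.

```python
# Sketch only: arctangent-law schedule for the excitation impulse amplitudes
# of the two vowel branches of a diphthong MISO model. Values are assumptions.
import numpy as np

def arctan_weights(n_impulses, center, steepness=0.15):
    """Smooth 0-to-1 transition over impulse indices, centred at `center`."""
    k = np.arange(n_impulses)
    return 0.5 + np.arctan(steepness * (k - center)) / np.pi

n_impulses = 60                                             # pitch periods in the diphthong (assumed)
w_i = arctan_weights(n_impulses, center=n_impulses // 2)    # weight of the /i/ branch
w_a = 1.0 - w_i                                             # weight of the /a/ branch

# Each branch is a formant model of one vowel; only the excitation amplitudes change:
# the k-th impulse drives the /a/ model with A * w_a[k] and the /i/ model with
# A * w_i[k], giving a smooth /a/ -> /i/ transition.
A = 1.0
amplitude_a, amplitude_i = A * w_a, A * w_i
```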
Journal:Informatica
Volume 12, Issue 3 (2001), pp. 477–486
Abstract
One of the main problems in speech synthesis is the synthesis of unvoiced fricatives. One of our previously stated conclusions is that the consonant x is influenced by the preceding and following phonetic elements. The aim of the experiments described in this paper is to evaluate the influence of different x allophones on speech intelligibility and automatic speech recognition.
This paper describes a formal system that captures the interrelations of allophones, and at the same time of phonemes, in their possible sequences in natural language. Such a formal system is necessary for solving automatic speech synthesis problems. Two types of experiments were carried out to evaluate the resemblance between different x allophones: a) resemblance analysis of x allophones based on expert evaluation; b) resemblance analysis of x allophones based on the evaluation of automatic speech recognition results.
The experimental results corroborated that the allophones of this consonant differ and depend on the context, i.e. on the neighbouring vowels, and that different allophones affect speech intelligibility; therefore, different allophones must be synthesized for high-quality speech.
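As a purely hypothetical illustration of this conclusion (the allophone labels, the vowel grouping and the selection rule are not from the paper), context-dependent allophone selection for the consonant x could be sketched as follows:

```python
# Hypothetical sketch: pick an allophone of the unvoiced fricative x from its
# neighbouring vowels. Labels and grouping are assumptions for illustration only.
FRONT_VOWELS = {"i", "e"}
BACK_VOWELS = {"a", "o", "u"}

def select_allophone(prev_vowel, next_vowel):
    """Return an allophone label for x given its left and right vowel context."""
    if next_vowel in FRONT_VOWELS or prev_vowel in FRONT_VOWELS:
        return "x_front"     # fronted variant next to front vowels (assumed label)
    if next_vowel in BACK_VOWELS or prev_vowel in BACK_VOWELS:
        return "x_back"      # backed variant next to back vowels (assumed label)
    return "x_neutral"       # fallback when there is no vowel context

print(select_allophone("a", "i"))   # -> "x_front"
```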