Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 4 (2019), pp. 711–728
Abstract
The primitive of certificateless signature has, since its invention, become a widely studied paradigm because it avoids both the key escrow problem and the certificate management problem. However, this primitive cannot resist the catastrophic damage caused by key exposure. It is therefore necessary to integrate a revocation mechanism into certificateless signature. In this paper, we propose a new certificateless signature scheme with revocation (RCLS) and prove its security in the standard model. Moreover, our scheme can resist malicious-but-passive Key Generation Center (KGC) attacks, which previous solutions could not. Theoretical analysis shows that our scheme is efficient and practical.
Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 4 (2019), pp. 689–710
Abstract
Construction site selection is a complex problem involving many alternatives and conflicting criteria with vague and imprecise evaluations. Fuzzy multi-criteria decision-making methods are among the most effective tools for obtaining optimum solutions under possibilistic uncertainty. In this paper, a novel interval hesitant fuzzy CODAS method is proposed and applied to a residential construction site selection problem. A comparative analysis with the ordinary fuzzy CODAS method is conducted to validate the proposed method, and a sensitivity analysis is carried out to assess the stability of the ranking results of the interval hesitant fuzzy CODAS method. The results of both analyses demonstrate the effectiveness of the proposed method.
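The crisp CODAS backbone that the interval hesitant fuzzy variant extends can be sketched as follows. This is a minimal illustration, not the paper's method: the hypothetical site-selection data, the benefit-only normalization, and the threshold value are all assumptions.

```python
import math

def codas(matrix, weights, tau=0.02):
    """Rank alternatives with the crisp CODAS method.

    matrix  : rows = alternatives, columns = benefit criteria
    weights : criterion weights summing to 1
    tau     : threshold switching between Euclidean and taxicab distance
    """
    m, n = len(matrix), len(matrix[0])
    col_max = [max(row[j] for row in matrix) for j in range(n)]
    # linear normalization and weighting (benefit criteria assumed)
    r = [[weights[j] * matrix[i][j] / col_max[j] for j in range(n)]
         for i in range(m)]
    # negative-ideal solution: column-wise minimum of the weighted matrix
    ns = [min(r[i][j] for i in range(m)) for j in range(n)]
    # Euclidean and taxicab distances from the negative-ideal solution
    E = [math.sqrt(sum((r[i][j] - ns[j]) ** 2 for j in range(n))) for i in range(m)]
    T = [sum(abs(r[i][j] - ns[j]) for j in range(n)) for i in range(m)]
    # relative assessment: taxicab distance only breaks near-ties in E
    psi = lambda x: 1.0 if abs(x) >= tau else 0.0
    H = [sum((E[i] - E[k]) + psi(E[i] - E[k]) * (T[i] - T[k]) for k in range(m))
         for i in range(m)]
    return sorted(range(m), key=lambda i: H[i], reverse=True)

# hypothetical data: 3 candidate sites, 2 benefit criteria
ranking = codas([[0.9, 0.8], [0.5, 0.4], [0.7, 0.6]], [0.6, 0.4])
```

The alternative that dominates on every weighted criterion ends up farthest from the negative-ideal solution and ranks first.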
Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 4 (2019), pp. 671–687
Abstract
This research presents the implementation of a virtual sensor for fault detection with a data-recovery capability. The proposal was implemented on a bicomponent mixing machine used in the manufacture of carbon-fiber wind-turbine blades. The virtual sensor is necessary because of persistent problems with erroneous sensor measurements. The proposed solution uses an intelligent model to predict each sensor measurement; the prediction is compared with the measured value, and if the difference lies within a specified range, the reading is accepted as valid. Otherwise, the prediction replaces the read value. A process fault-detection feature, triggered by consecutive erroneous readings, has been added to the proposal, with satisfactory results.
Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 4 (2019), pp. 647–670
Abstract
A major challenge in face recognition is handling large pose variations. Here, we propose to tackle this challenge with a three-step sparse-representation-based method: estimating the pose of an unseen non-frontal face image, generating its virtual frontal view using learned view-dependent dictionaries, and classifying the generated frontal view. It is assumed that, for a specific identity, the representation coefficients over the view dictionary are invariant to pose; the view-dependent frontal-view generation transformations are learned by pair-wise supervised dictionary learning. Experiments conducted on the FERET and CMU-PIE face databases demonstrate the efficacy of the proposed method.
Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 4 (2019), pp. 629–645
Abstract
Machine Translation has become an important tool for overcoming the language barrier. The quality of translations depends on the language pair and the methods used. The research presented in this paper is based on well-known standard methods for Statistical Machine Translation, extended with a newly proposed approach for optimizing the weights of the translation system components; better component weights improve translation quality. Most machine translation systems translate to or from English, and in our research English is paired with a Slavic language, Slovenian. In our experiment, we built two Statistical Machine Translation systems for the Slovenian-English language pair on the Acquis Communautaire corpus. Both systems were optimized using self-adaptive Differential Evolution and compared with related optimization methods. The results show an improvement in translation quality and are comparable to those of the related methods.
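Self-adaptive Differential Evolution commonly refers to the jDE rule, in which every individual carries its own F and CR control parameters that are occasionally resampled. A minimal sketch under that assumption, with a stand-in quadratic objective in place of a translation metric such as BLEU over a development set (population size, generation count, and all data are assumptions):

```python
import random

def jde(objective, dim, bounds=(0.0, 1.0), pop_size=20, generations=100, seed=1):
    """Minimal jDE (self-adaptive Differential Evolution) minimizer."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    F, CR = [0.5] * pop_size, [0.9] * pop_size
    fit = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # jDE self-adaptation: resample F and CR with probability 0.1
            Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            trial = [min(hi, max(lo, pop[a][j] + Fi * (pop[b][j] - pop[c][j])))
                     if (rng.random() < CRi or j == jrand) else pop[i][j]
                     for j in range(dim)]
            f_trial = objective(trial)
            if f_trial <= fit[i]:  # greedy selection keeps the better vector
                pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# stand-in objective: squared distance to a known weight vector
target = [0.3, 0.1, 0.6, 0.2]
weights, err = jde(lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target)), dim=4)
```

The successful control parameters survive together with the trial vectors that produced them, which is what makes the scheme self-adaptive.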
Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 3 (2019), pp. 613–628
Abstract
Fuzzy c-means (FCM) is a well-known and widely applied fuzzy clustering method. Although there have been considerable studies focusing on the selection of better fuzzifier values in FCM, there is still no widely accepted criterion. Moreover, in practical applications the distributions of many data sets are not uniform, so it is necessary to understand the impact of cluster size distribution on the selection of the fuzzifier value. In this paper, the coefficient of variation (CV) is used to measure the variation of cluster sizes in a data set, and the difference of coefficient of variation (DCV) is the change of this variation after FCM clustering. Then, on the premise that a fuzzifier value with which FCM clustering produces only a minor change in cluster-size variation is preferable, a criterion for fuzzifier selection in FCM is presented from the cluster size distribution perspective, followed by a fuzzifier selection algorithm called CSD-m (cluster size distribution for fuzzifier selection). We also developed an indicator called the Influence Coefficient of Fuzzifier ($\mathit{ICF}$) to measure the influence of fuzzifier values on FCM clustering results. Finally, experimental results on 8 synthetic data sets and 4 real-world data sets illustrate the effectiveness of the proposed criterion and the CSD-m algorithm. The results also demonstrate that the widely used fuzzifier value $m=2$ is not optimal for many data sets with large variation in cluster sizes. Based on the relationship between ${\mathit{CV}_{0}}$ and $\mathit{ICF}$, we further found a linear correlation between the extent of the fuzzifier value's influence and the original cluster size distribution.
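The two quantities at the core of the criterion can be computed directly from cluster sizes. A minimal sketch: whether the paper takes DCV as a signed or an absolute difference, and the example sizes, are assumptions here.

```python
import math

def cv(sizes):
    """Coefficient of variation of cluster sizes: std / mean."""
    mean = sum(sizes) / len(sizes)
    var = sum((s - mean) ** 2 for s in sizes) / len(sizes)
    return math.sqrt(var) / mean

def dcv(sizes_before, sizes_after):
    """Change of cluster-size variation introduced by FCM clustering
    (signed difference assumed; a negative value means the clustering
    made the cluster sizes more uniform than the original partition)."""
    return cv(sizes_after) - cv(sizes_before)

# hypothetical example: true sizes vs. sizes produced by FCM with some m
original = [50, 30, 20]
clustered = [40, 35, 25]
change = dcv(original, clustered)
```

Under the proposed criterion, the fuzzifier value whose resulting DCV is smallest in magnitude would be preferred, since it distorts the original cluster size distribution the least.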
Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 3 (2019), pp. 595–612
Abstract
Certificate-based cryptography (CB-PKC) is an attractive public key setting that reduces the complexity of the public key infrastructure in traditional public key settings and resolves the key escrow problem in ID-based public key settings. In the past, a large number of certificate-based signature and encryption schemes have been proposed. Nevertheless, the security of these schemes relies mainly on the difficulty of the discrete logarithm and factorization problems, both of which will become solvable once practical quantum computers arrive. Public key cryptography from lattices is one of the important candidates for post-quantum cryptography; however, there is little work on certificate-based cryptography from lattices. In this paper, we propose a new and efficient certificate-based signature (CBS) scheme from lattices. Under the short integer solution (SIS) assumption on lattices, the proposed CBS scheme is shown to be existentially unforgeable against adaptive chosen-message attacks. Performance comparisons demonstrate that the proposed scheme outperforms the previous lattice-based CBS scheme in terms of private key size and signature size.
Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 3 (2019), pp. 573–593
Abstract
Conventional large-vocabulary automatic speech recognition (ASR) systems require a mapping from words into sub-word units to generalize over words absent from the training data and to enable robust estimation of acoustic model parameters. This paper surveys the research done during the last 15 years on word-to-sub-word mappings for Lithuanian ASR systems. It also compares various phoneme- and grapheme-based mappings across a broad range of acoustic modelling techniques, including monophone- and triphone-based Hidden Markov models (HMM), speaker-adaptively trained HMMs, subspace Gaussian mixture models (SGMM), feed-forward time-delay neural networks (TDNN), and a state-of-the-art low frame rate bidirectional long short-term memory (LFR BLSTM) recurrent deep neural network. Experimental comparisons are based on a 50-hour speech corpus. The paper shows that the best phone-based mapping significantly outperforms a grapheme-based mapping, and that the lowest phone error rate of an ASR system is achieved by a phoneme-based lexicon that explicitly models syllable stress and represents diphthongs as single phonetic units.
Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 3 (2019), pp. 553–571
Abstract
The simplest hypothesis of DNA strand symmetry states that the proportions of nucleotides of the same base pair are approximately equal within single DNA strands. Extensive empirical studies using asymmetry measures and various visualization tools show that, for long DNA sequences, (approximate) strand symmetry generally holds, with rather rare exceptions. In this paper, a formal definition of DNA strand local symmetry is presented, characterized in terms of generalized logits, and tested on the longest non-coding sequences of bacterial genomes. A special regression-type probabilistic structure of the data is assumed; this structure is compatible with the probability distribution of random nucleotide sequences at a steady state of a context-dependent reversible Markov evolutionary process. The null hypothesis of strand local symmetry is rejected in the majority of bacterial genomes, suggesting that even neutral mutations are skewed with respect to the leading and lagging strands.
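The intuition behind the symmetry hypothesis (A approximately equal to T, and C to G, within a single strand) can be checked with simple log-ratio skews. This is a minimal sketch: the paper's generalized-logit formulation and regression-type probabilistic structure are richer than these two statistics, and the toy sequence is hypothetical.

```python
import math

def strand_asymmetry(seq):
    """Log-ratio skews between complementary bases on one strand.

    Under (approximate) strand symmetry both log-ratios are near 0,
    since A ~ T and C ~ G within the strand.  Assumes all four bases
    occur at least once in `seq`.
    """
    counts = {b: seq.count(b) for b in "ACGT"}
    at_skew = math.log(counts["A"] / counts["T"])
    cg_skew = math.log(counts["C"] / counts["G"])
    return at_skew, cg_skew

# toy sequence with exactly balanced complementary bases
at, cg = strand_asymmetry("ATATCGCGATCGATAT")
```

For a perfectly symmetric strand both skews are exactly zero; testing whether real non-coding sequences deviate significantly from zero is, in essence, what the paper's formal test does at a local scale.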
Pub. online: 1 Jan 2019 | Type: Research Article | Open Access
Journal:Informatica
Volume 30, Issue 3 (2019), pp. 529–552
Abstract
A standard problem in certain applications requires one to reconstruct an analogue signal f from a sequence of its samples $f{({t_{k}})_{k}}$. The great success of such a reconstruction lies, under additional assumptions, in the fact that an analogue signal f of a real variable $t\in \mathbb{R}$ can be represented equivalently by a sequence of complex numbers $f{({t_{k}})_{k}}$, i.e. by a digital signal. This digital signal can then be processed and filtered very efficiently, for example, on digital computers. Sampling theory is one of the theoretical foundations of the conversion from analogue to digital signals, and there is a long list of impressive research results in this area starting with the classical work of Shannon. Note that the well-known Shannon sampling theory deals mainly with signals of one variable. In this paper, we are concerned with bandlimited signals of several variables whose restriction to the Euclidean space ${\mathbb{R}^{n}}$ has finite p-energy. We present sampling series in which signals are sampled at the Nyquist rate; these series involve digital samples of the signals as well as samples of their partial derivatives. Importantly, our reconstruction is stable in the sense that the sampling series converge absolutely and uniformly on the whole ${\mathbb{R}^{n}}$. Having a stable reconstruction process, it is therefore possible to bound the approximation error incurred by using only a partial sum with finitely many samples.
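For reference, the classical one-variable result the abstract builds on is the Whittaker-Kotelnikov-Shannon sampling theorem; the paper's multivariate, derivative-augmented series generalizes this formula, which is stated here only as background:

```latex
% Classical one-variable sampling theorem: a signal f bandlimited to
% [-W, W] Hz is fully determined by its samples taken at the Nyquist
% rate 2W, and is reconstructed by the absolutely convergent series
f(t) = \sum_{k=-\infty}^{\infty} f\!\left(\frac{k}{2W}\right)
       \operatorname{sinc}\bigl(2Wt - k\bigr),
\qquad
\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.
```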