Pub. online: 2 Jun 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 2 (2020), pp. 249–275
Abstract
Emotion recognition from facial expressions has gained much interest over the last few decades. In the literature, the common approach to facial emotion recognition (FER) consists of the following steps: image pre-processing, face detection, facial feature extraction, and facial expression classification (recognition). We have developed a method for FER that differs substantially from this common approach. Our method is based on the dimensional model of emotions and on the kriging predictor of a fractional Brownian vector field. The classification problem related to the recognition of facial emotions is formulated and solved. The relationships among emotions are estimated by expert psychologists, who place the emotions as points on a plane. The goal is to estimate the emotion of a new picture as a point on this plane by kriging and to determine which of the emotions identified by the psychologists is the closest. Seven basic emotions (Joy, Sadness, Surprise, Disgust, Anger, Fear, and Neutral) have been chosen. When the decision is made on the basis of the closest basic emotion, the classification accuracy over the seven classes is approximately 50%. It has been ascertained that the kriging predictor is suitable for facial emotion recognition in the case of small sets of pictures. More sophisticated classification strategies, in which the basic emotions are grouped, may increase the accuracy.
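As a rough illustration of the final classification step (not the authors' implementation), the sketch below assumes a kriging predictor has already produced a point on the emotion plane for a new picture; the decision then reduces to finding the nearest of seven basic-emotion anchor points. The anchor coordinates used here are hypothetical placeholders, not the coordinates assigned by the psychologists in the paper.

```python
# Minimal sketch: nearest-basic-emotion decision on the 2-D emotion plane.
# The anchor coordinates below are hypothetical placeholders.
import numpy as np

BASIC_EMOTIONS = {
    "Joy":      ( 0.8,  0.5),
    "Sadness":  (-0.7, -0.4),
    "Surprise": ( 0.2,  0.9),
    "Disgust":  (-0.6,  0.3),
    "Anger":    (-0.5,  0.7),
    "Fear":     (-0.4,  0.8),
    "Neutral":  ( 0.0,  0.0),
}

def classify(predicted_point):
    """Return the basic emotion whose anchor is closest to the kriging estimate."""
    p = np.asarray(predicted_point, dtype=float)
    return min(BASIC_EMOTIONS,
               key=lambda name: np.linalg.norm(p - np.asarray(BASIC_EMOTIONS[name])))

print(classify((0.65, 0.42)))  # -> "Joy" for this hypothetical estimate
```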
Pub. online: 19 May 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 2 (2020), pp. 205–224
Abstract
We consider a geographical region with spatially separated customers whose demand is currently served by pre-existing facilities owned by different firms. An entering firm wants to compete for this market by locating some new facilities. To guarantee a satisfactory future captured demand for each new facility, the firm imposes a constraint on its possible locations (a finite set of candidates): a new facility will be opened only if a minimal market share is captured in the short term. To check this, it is necessary to know the exact demand captured by each new facility. Customers are assumed to follow the partially binary choice rule to satisfy their demand. If several new facilities are maximally attractive for a customer, we consider that the proportion of demand captured by the entering firm is distributed equally among these facilities (equity-based rule). This tie-breaking rule means that we deal with a nonlinear constrained discrete competitive facility location problem. Moreover, minimal attraction conditions for customers and distances approximated by intervals have been incorporated to make the model more realistic. To solve this nonlinear model, we first linearize it, which, owing to its complexity, only allows small problems to be solved; then, for larger problems, a heuristic algorithm is proposed, which could also be used to solve other constrained problems.
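To make the choice rule concrete, the following sketch computes the demand that candidate new facilities capture from a single customer under the partially binary rule with the equity-based tie-breaking described above. The attraction values, the competing firms' best attractions, and the minimal-attraction threshold are all hypothetical.

```python
# Hedged sketch: demand captured from one customer under the partially binary
# rule (demand split among firms in proportion to each firm's best facility)
# with the equity-based tie-breaking among the entering firm's new facilities.
# All numeric inputs are hypothetical.

def capture_for_customer(demand, new_attractions, competitor_best, min_attraction=0.0):
    """Return the demand each candidate new facility captures from one customer."""
    # Facilities below the customer's minimal attraction level are ignored.
    usable = {j: a for j, a in new_attractions.items() if a >= min_attraction}
    if not usable:
        return {j: 0.0 for j in new_attractions}
    best_new = max(usable.values())
    # Partially binary rule: the firm's share is proportional to its best facility.
    firm_share = demand * best_new / (best_new + sum(competitor_best))
    winners = [j for j, a in usable.items() if a == best_new]
    return {j: (firm_share / len(winners) if j in winners else 0.0)
            for j in new_attractions}

# One customer with demand 100, two candidate facilities tied at attraction 4.0,
# and two competing firms whose best facilities have attractions 3.0 and 1.0.
print(capture_for_customer(100, {"F1": 4.0, "F2": 4.0}, [3.0, 1.0]))
# -> {'F1': 25.0, 'F2': 25.0}  (the firm captures 50, split equally)
```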
Pub. online: 6 May 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 481–497
Abstract
Data hiding is an important multimedia security technique and has been applied to many domains, for example, relational databases. Existing data hiding techniques for relational databases cannot restore the raw data after hiding. The purpose of this paper is to propose the first reversible hiding technique for relational databases. In the hiding phase, confidential messages are embedded into a relational database by an LSB (least-significant-bit) matching method for relational databases. In the extraction and restoration phases, the confidential messages are recovered through the LSB and LSB matching methods for relational databases, and an averaging method is then used to restore the raw data. According to the experiments, the proposed technique meets the data hiding requirements: it not only enables recovery of the raw data but also maintains a high hiding capacity. The complexity analysis of our algorithms shows their efficiency.
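The abstract does not spell out the embedding formulas, so the sketch below only illustrates generic LSB matching on integer attribute values, not the paper's exact algorithm: a value is left unchanged when its least significant bit already equals the secret bit, and is otherwise randomly incremented or decremented by one.

```python
# Generic LSB-matching illustration on integer attribute values
# (not the paper's exact scheme).
import random

def lsb_match_embed(values, bits, seed=0):
    rng = random.Random(seed)
    stego = []
    for v, b in zip(values, bits):
        if v & 1 == b:
            stego.append(v)                      # LSB already carries the bit
        else:
            stego.append(v + rng.choice((-1, 1)))  # +/-1 flips the LSB
    return stego

def lsb_extract(stego_values, n_bits):
    return [v & 1 for v in stego_values[:n_bits]]

values = [120, 87, 64, 33]
bits = [1, 1, 0, 0]
stego = lsb_match_embed(values, bits)
assert lsb_extract(stego, len(bits)) == bits
print(stego)
```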
Pub. online: 6 May 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 435–458
Abstract
In data mining research, outliers usually represent extreme values that deviate from other observations in the data. A significant issue with existing outlier detection methods is that they consider only the object itself and do not take its neighbouring objects into account when extracting location features. In this paper, we propose an innovative approach to this issue. First, we introduce the notions of centrality and centre-proximity for determining the degree of outlierness while considering the distribution of all objects. We also propose a novel graph-based outlier detection algorithm based on these notions. The algorithm addresses the problems of existing methods, namely those of local density, micro-clusters, and fringe objects. We performed extensive experiments to confirm the effectiveness and efficiency of the proposed method. The experimental results show that the proposed method uncovers outliers successfully and outperforms previous outlier detection methods.
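Since the abstract does not define centrality and centre-proximity formally, the following sketch only conveys the general flavour of graph-based outlier scoring: build a k-nearest-neighbour similarity graph over all objects and flag objects with low random-walk centrality as outlier candidates. It is a generic illustration, not the algorithm proposed in the paper.

```python
# Generic graph-based outlier scoring illustration (not the paper's method):
# low random-walk centrality on a k-NN similarity graph indicates an outlier.
import numpy as np

def knn_graph_outlier_scores(X, k=3, alpha=0.85, iters=100):
    X = np.asarray(X, dtype=float)
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]
        W[i, nbrs] = np.exp(-d[i, nbrs])       # similarity weights to k neighbours
    W = np.maximum(W, W.T)                     # symmetrize the graph
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):                     # PageRank-style centrality
        pi = alpha * pi @ P + (1 - alpha) / n
    return 1.0 - pi / pi.max()                 # high score = low centrality

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), [[8.0, 8.0]]])
print(np.argmax(knn_graph_outlier_scores(X)))  # expected: 20, the isolated point
```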
Pub. online: 6 May 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 2 (2020), pp. 299–312
Abstract
The crosstalk error is widely used to evaluate the performance of blind source separation. However, it requires the global separation matrix to be known in advance, and it is not robust. To address these problems, a new adaptive algorithm for calculating the crosstalk error is presented: it computes the crosstalk error through a least-squares cost function, and its robustness is improved by introducing the position information of the maximum value in the global separation matrix. Finally, the method is compared with conventional RLS algorithms in terms of performance, robustness, and convergence rate, and its validity is verified by simulation experiments and experiments with real-world signals.
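For reference, a commonly used definition of the (offline) crosstalk error, also called the performance index, is computed from the global matrix G = WA, where W is the separation matrix and A the mixing matrix; it vanishes when G is a scaled permutation matrix. The adaptive algorithm in the paper estimates this quantity without knowing G in advance; the sketch below implements only the standard offline definition.

```python
# Conventional (non-adaptive) crosstalk error / performance index of G = W A.
import numpy as np

def crosstalk_error(G):
    G = np.abs(np.asarray(G, dtype=float))
    rows = (G.sum(axis=1) / G.max(axis=1) - 1.0).sum()   # row-wise crosstalk
    cols = (G.sum(axis=0) / G.max(axis=0) - 1.0).sum()   # column-wise crosstalk
    return rows + cols

perfect = np.array([[0.0, 2.0], [-3.0, 0.0]])    # scaled permutation matrix
print(crosstalk_error(perfect))                   # 0.0
print(crosstalk_error([[1.0, 0.3], [0.2, 1.0]]))  # > 0, residual crosstalk
```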
Pub. online: 6 May 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 2 (2020), pp. 277–298
Abstract
The vulnerable part of communication between a user and a server is the weak authentication at the user's side. For example, in e-banking systems user authentication relies on passwords, which can be lost or phished by a person maliciously impersonating the bank.
To increase the security of e-banking systems, users should be supplied with elements of a public key infrastructure (PKI), but not necessarily to the extent of the standard requirements, which are too complicated for ordinary users.
In this paper, we propose two versions of an authenticated key agreement protocol (AKAP) that can be realized simply on the user's side. AKAP is a collection of cryptographic functions with provable security properties.
It is proved that AKAP1 is secure against an active adversary under the discrete logarithm assumption, provided that certain formulated conditions hold. AKAP2 provides user anonymity against an eavesdropping adversary. The partial security of AKAP2, which relies on the security of an asymmetric encryption function, is also investigated.
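The abstract does not disclose the AKAP1/AKAP2 constructions, so the sketch below shows only the plain Diffie-Hellman exchange on which discrete-logarithm-based key agreement rests; it is unauthenticated, and the parameters are toy-sized. Adding authentication suitable for the user's side is precisely the problem the proposed protocols address.

```python
# Plain (unauthenticated) Diffie-Hellman exchange as background only;
# not AKAP1 or AKAP2. Parameters are toy-sized and unsuitable for real use.
import secrets

p = 2**127 - 1      # a Mersenne prime; far too small for real deployments
g = 3

a = secrets.randbelow(p - 2) + 1     # user's ephemeral secret exponent
b = secrets.randbelow(p - 2) + 1     # bank server's ephemeral secret exponent
A = pow(g, a, p)                     # sent: user -> server
B = pow(g, b, p)                     # sent: server -> user

shared_user = pow(B, a, p)
shared_server = pow(A, b, p)
assert shared_user == shared_server  # both sides hold the same shared secret
```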
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 459–479
Abstract
Since Morris and Thompson wrote the first paper on password security in 1979, strict password policies have been enforced to make sure users follow the rules on passwords. Many such policies require users to select and use a system-generated password. The objective of this paper is to analyse the effectiveness of strict password management policies with respect to how users remember system-generated passwords of different textual types: plaintext strings, passphrases, and hybrid graphical-textual PsychoPass passwords. In an experiment, participants were assigned a random string, a passphrase, and a PsychoPass password and had to memorize them. Surprisingly, no one remembered either the random string or the passphrase, and only 10% of the participants remembered their PsychoPass password. Policies in which administrators let systems assign passwords to users are therefore not appropriate. Although PsychoPass passwords are easier to remember, the recall rate of any system-assigned password is below the acceptable level. The findings of this study show that system-assigned strong passwords are inappropriate and put an unacceptable memory burden on users.
Journal: Informatica
Volume 31, Issue 2 (2020), pp. 331–357
Abstract
In practice, the judgments of decision-makers are often uncertain and thus cannot be represented by precise values. In this study, the opinions of decision-makers are collected using grey linguistic variables, and the data retain their grey nature throughout the entire decision-making process. A grey best-worst method (GBWM) is developed for multi-expert, multi-criteria decision-making problems that can employ grey linguistic variables as input data to cover uncertainty. An example is solved with the GBWM, and a sensitivity analysis is then performed to show the robustness of the method. Comparative analyses verify the validity and advantages of the GBWM.
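As background for readers unfamiliar with the underlying method, the sketch below solves the crisp (non-grey) best-worst method as a small linear programme; the GBWM generalizes this idea to grey linguistic inputs. The comparison vectors in the example are hypothetical.

```python
# Crisp best-worst method (BWM) weight derivation via a small linear programme;
# the grey BWM in the paper extends this to grey linguistic comparisons.
import numpy as np
from scipy.optimize import linprog

def bwm_weights(best_to_others, others_to_worst, best_idx, worst_idx):
    n = len(best_to_others)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                  # minimise the consistency slack xi
    A_ub, b_ub = [], []
    for j in range(n):
        r1 = np.zeros(n + 1)                     #  w_B - a_Bj * w_j - xi <= 0
        r1[best_idx] += 1.0
        r1[j] -= best_to_others[j]
        r1[-1] = -1.0
        r2 = -r1
        r2[-1] = -1.0                            # -w_B + a_Bj * w_j - xi <= 0
        r3 = np.zeros(n + 1)                     #  w_j - a_jW * w_W - xi <= 0
        r3[j] += 1.0
        r3[worst_idx] -= others_to_worst[j]
        r3[-1] = -1.0
        r4 = -r3
        r4[-1] = -1.0                            # -w_j + a_jW * w_W - xi <= 0
        A_ub += [r1, r2, r3, r4]
        b_ub += [0.0, 0.0, 0.0, 0.0]
    A_eq = [np.append(np.ones(n), 0.0)]          # weights sum to 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=[1.0],
                  bounds=[(0.0, None)] * (n + 1))
    return res.x[:n], res.x[-1]

# Three criteria; best criterion has index 0, worst has index 2 (hypothetical data).
weights, xi = bwm_weights([1, 2, 8], [8, 4, 1], best_idx=0, worst_idx=2)
print(np.round(weights, 3), round(xi, 3))
```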
Journal: Informatica
Volume 32, Issue 1 (2021), pp. 195–216
Abstract
In this paper, the CODAS (Combinative Distance-based Assessment) method is utilized to address multiple attribute group decision-making (MAGDM) problems with picture 2-tuple linguistic numbers (P2TLNs). First, some essential concepts of picture 2-tuple linguistic sets (P2TLSs) are briefly reviewed. Then, the CODAS method with P2TLNs is constructed and all computational procedures are described. Finally, an empirical application to green supplier selection is offered to demonstrate the novel method, and a comparative analysis between the CODAS method with P2TLNs and several other methods is carried out to confirm the merits of the developed method.
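For orientation, the sketch below implements the classical crisp CODAS procedure that the P2TLN version builds on: normalize the decision matrix, weight it, take the negative-ideal solution, measure Euclidean and Taxicab distances from it, and combine them through a threshold function. The decision matrix, weights, and threshold value are hypothetical.

```python
# Crisp CODAS sketch (not the picture 2-tuple linguistic extension).
import numpy as np

def codas(X, weights, benefit, tau=0.02):
    X = np.asarray(X, dtype=float)
    norm = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)  # linear normalization
    V = norm * weights                                               # weighted matrix
    nis = V.min(axis=0)                                              # negative-ideal solution
    E = np.sqrt(((V - nis) ** 2).sum(axis=1))                        # Euclidean distances
    T = np.abs(V - nis).sum(axis=1)                                  # Taxicab distances
    dE = E[:, None] - E[None, :]
    dT = T[:, None] - T[None, :]
    psi = (np.abs(dE) >= tau).astype(float)                          # threshold function
    H = (dE + psi * dT).sum(axis=1)                                  # assessment scores
    return H, np.argsort(-H)                                         # higher score = better

scores, ranking = codas(
    X=[[3, 100, 10], [4, 80, 8], [5, 120, 12]],   # hypothetical supplier data
    weights=[0.5, 0.3, 0.2],
    benefit=[True, False, False],                 # first criterion is a benefit, others are costs
)
print(scores, ranking)
```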
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 539–560
Abstract
In this paper, we present an effective algorithm for solving the Poisson–Gaussian total variation model. The existence and uniqueness of the solution for the mixed Poisson–Gaussian model are proved. Owing to the strict convexity of the model, the split-Bregman method is employed to solve the minimization problem. Experimental results show the effectiveness of the proposed method for mixed Poisson–Gaussian noise removal. A comparison with other existing, well-known methods is provided as well.
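As a point of reference, a commonly used formulation of a mixed Poisson–Gaussian TV model is shown below; the paper's exact functional and parameters may differ. With observed image f, unknown image u, and fidelity weights λ1, λ2 > 0:

```latex
\[
  \min_{u > 0} \;\; \int_\Omega |\nabla u| \, dx
  \;+\; \frac{\lambda_1}{2} \int_\Omega (u - f)^2 \, dx
  \;+\; \lambda_2 \int_\Omega \bigl( u - f \log u \bigr) \, dx .
\]
```

Here the first term is the total variation regularizer, the second is the Gaussian (L2) data fidelity, and the third is the Poisson (Kullback–Leibler-type) data fidelity. The split-Bregman method introduces an auxiliary variable d = ∇u and alternates between a u-subproblem, a shrinkage step for d, and a Bregman update.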