Pub. online: 2 Jun 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 2 (2020), pp. 249–275
Abstract
Emotion recognition from facial expressions has gained much interest over the last few decades. In the literature, the common approach to facial emotion recognition (FER) consists of these steps: image pre-processing, face detection, facial feature extraction, and facial expression classification (recognition). We have developed a method for FER that is fundamentally different from this common approach. Our method is based on the dimensional model of emotions as well as on the kriging predictor of a Fractional Brownian Vector Field. The classification problem related to the recognition of facial emotions is formulated and solved. The relationships among different emotions are estimated by expert psychologists, who place the emotions as points on a plane. The goal is to estimate the emotion of a new picture as a point on this plane by kriging and to determine which of the emotions identified by the psychologists is the closest. Seven basic emotions (Joy, Sadness, Surprise, Disgust, Anger, Fear, and Neutral) have been chosen. The accuracy of classification into the seven classes is approximately 50% when the decision is based on the closest basic emotion. It has been ascertained that the kriging predictor is suitable for facial emotion recognition in the case of small sets of pictures. More sophisticated classification strategies, in which the basic emotions are grouped, may increase the accuracy.
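The final decision step can be illustrated with a minimal sketch: given the point on the emotion plane predicted for a new picture (for example, by the kriging predictor, which is not reproduced here), choose the nearest basic emotion. The plane coordinates below are hypothetical placeholders, not the positions assigned by the expert psychologists.

```python
import numpy as np

# Hypothetical coordinates of the seven basic emotions on the emotion plane
# (placeholder values only; the paper uses positions assigned by expert psychologists).
BASIC_EMOTIONS = {
    "Joy":      ( 0.8,  0.5),
    "Sadness":  (-0.7, -0.4),
    "Surprise": ( 0.3,  0.8),
    "Disgust":  (-0.6,  0.3),
    "Anger":    (-0.5,  0.7),
    "Fear":     (-0.4,  0.6),
    "Neutral":  ( 0.0,  0.0),
}

def closest_basic_emotion(predicted_point):
    """Return the basic emotion whose plane position is nearest to the point
    estimated (e.g. by kriging) for a new picture."""
    p = np.asarray(predicted_point, dtype=float)
    return min(BASIC_EMOTIONS,
               key=lambda name: np.linalg.norm(p - np.asarray(BASIC_EMOTIONS[name])))

print(closest_basic_emotion((0.7, 0.4)))  # -> "Joy" for these placeholder anchors
```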
Pub. online: 1 Jan 2018 | Type: Research Article | Open Access
Journal: Informatica
Volume 29, Issue 4 (2018), pp. 757–771
Abstract
Eye fundus imaging is a useful, non-invasive tool for tracking disease progress, for early disease detection, and for other purposes. Often, the disease diagnosis is made by an ophthalmologist, and automatic analysis systems are used only for support. Several features are commonly used for disease detection; one of them is the artery and vein ratio measured according to the widths of the main vessels. Arteries must be separated from veins automatically in order to calculate the ratio; therefore, vessel classification is a vital step. Most analysis methods require high-quality images for correct classification. This paper presents an adaptive algorithm for vessel measurement that does not need to be tuned to concrete imaging equipment or a specific situation. The main novelty of the proposed method is the extraction of blood vessel features based on a vessel width measurement algorithm and on vessel spatial dependency. Vessel classification accuracy rates of 0.855 and 0.859 are obtained on publicly available eye fundus image databases, which are used for comparison with other state-of-the-art vessel classification algorithms in order to evaluate the artery-vein ratio ($AVR$). The method is also evaluated on images that represent artery and vein size changes before and after physical load. An Optomed OY digital mobile eye fundus camera, Smartscope M5 PRO, is used for image gathering.
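As a rough illustration of how the vessel classification feeds the ratio, the sketch below computes a crude $AVR$ estimate as the mean width of vessels labelled as arteries divided by the mean width of vessels labelled as veins. The paper's actual width measurement and $AVR$ evaluation pipeline is more involved; the widths and labels used here are made up.

```python
from statistics import mean

def artery_vein_ratio(vessel_widths, vessel_labels):
    """Crude AVR estimate: mean width of vessels classified as arteries
    divided by mean width of vessels classified as veins (illustration only)."""
    arteries = [w for w, lab in zip(vessel_widths, vessel_labels) if lab == "artery"]
    veins    = [w for w, lab in zip(vessel_widths, vessel_labels) if lab == "vein"]
    if not arteries or not veins:
        raise ValueError("need at least one artery and one vein measurement")
    return mean(arteries) / mean(veins)

# Example with made-up widths (in pixels) for six classified main vessels
print(artery_vein_ratio([7.2, 6.8, 9.5, 7.0, 9.8, 9.1],
                        ["artery", "artery", "vein", "artery", "vein", "vein"]))
```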
Pub. online: 1 Jan 2017 | Type: Research Article | Open Access
Journal: Informatica
Volume 28, Issue 3 (2017), pp. 439–452
Abstract
Radiologists need to find the position of a slice of one computed tomography (CT) scan in another scan. Image registration is a technique used to transform several images into one coordinate system and to compare them. We consider transversal plane images of CT scans in which ribs are visible; this restriction does not lessen the significance of the work, because many important internal organs are located in this region: the liver, heart, stomach, pancreas, lungs, etc. A new registration method is developed, based on a mathematical model describing the rib-bounded contour. The parameters of the mathematical model and of the distribution of bone tissue in the CT slice form a set of features describing a particular slice. The registration method applies translation, rotation, and scaling invariances; several strategies for translation invariance and options for the unification of scales are proposed. The method is examined on real CT scans in search of its best performance.
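A minimal sketch of how translation and scale invariance can be imposed on a 2-D contour is given below: the contour is centred at its centroid and rescaled to unit RMS radius. This is only one generic normalization under these assumptions; the paper's specific translation strategies, scale unification options, rotation handling, and rib-bounded contour model are not reproduced.

```python
import numpy as np

def normalize_contour(points):
    """Make a 2-D contour invariant to translation and scale: centre it at its
    centroid and rescale to unit RMS radius (generic normalization sketch)."""
    P = np.asarray(points, dtype=float)
    P = P - P.mean(axis=0)                        # translation invariance
    rms = np.sqrt((P ** 2).sum(axis=1).mean())
    return P / rms                                # scale unification

# Two contours that differ only by shift and scale map to the same normalized shape
c1 = np.array([[0, 0], [2, 0], [2, 1], [0, 1]], dtype=float)
c2 = c1 * 3.0 + np.array([10.0, -5.0])
print(np.allclose(normalize_contour(c1), normalize_contour(c2)))   # True
```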
Pub. online: 1 Jan 2016 | Type: Research Article | Open Access
Journal: Informatica
Volume 27, Issue 2 (2016), pp. 257–281
Abstract
The estimation of the intrinsic dimensionality of high-dimensional data remains a challenging issue. Various approaches to interpreting and estimating the intrinsic dimensionality have been developed. Referring to the following two classifications of intrinsic dimensionality estimators – local/global estimators and projection techniques/geometric approaches – we focus on the fractal-based methods, which are assigned to the global estimators and geometric approaches. The computational aspects of estimating the intrinsic dimensionality of high-dimensional data are the core issue of this paper. The advantages and disadvantages of the fractal-based methods are discussed, and applications of these methods are briefly presented.
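As an example of a fractal-based, global estimator, the sketch below approximates the correlation dimension (in the spirit of Grassberger–Procaccia) as the slope of log C(r) versus log r between two radii. This is a generic illustration, not one of the specific estimators analysed in the paper, and the two radii are arbitrary; practical estimators fit the slope over a whole range of radii.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(X, r1, r2):
    """Rough correlation-dimension estimate: slope of log C(r) vs log r between
    two radii r1 < r2, where C(r) is the fraction of point pairs closer than r."""
    d = pdist(X)                              # all pairwise Euclidean distances
    C = lambda r: np.mean(d < r)              # correlation integral estimate
    return (np.log(C(r2)) - np.log(C(r1))) / (np.log(r2) - np.log(r1))

# Points on a one-dimensional curve embedded in 3-D: the estimate should be near 1
t = np.linspace(0.0, 1.0, 2000)
X = np.column_stack([np.cos(4 * np.pi * t), np.sin(4 * np.pi * t), t])
print(correlation_dimension(X, r1=0.05, r2=0.2))
```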
Pub. online: 1 Jan 2015 | Type: Research Article | Open Access
Journal: Informatica
Volume 26, Issue 3 (2015), pp. 419–434
Abstract
Secure and high-quality operation of power grids requires the frequency to be managed so that it stays stable around a reference value. The deviation of the frequency from this reference value is caused by the imbalance between the active power produced and consumed. In the Smart Grid paradigm, the balance can be achieved by adjusting the demand to the production constraints, instead of the other way round. In this paper, a swarm intelligence-based approach to frequency management is proposed. It is grounded in the idea that a swarm is composed of decentralised individual agents (particles) and that each of them interacts with the others via a shared environment. Three swarm intelligence-based policies ensure decentralised frequency management in the smart power grid, where the agents of the swarm make decisions and act on the demand side. The policies differ in the behaviour function of the agents. Finally, these policies are evaluated and compared using indicators that point out their advantages.
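A toy sketch of a demand-side agent is given below: each agent observes the grid frequency through the shared environment and defers its flexible load with a probability that grows with the under-frequency deviation. This hypothetical behaviour function only illustrates decentralised decision making; it is not one of the three policies proposed in the paper, and all parameter values are made up.

```python
import random

class DemandAgent:
    """Hypothetical demand-side agent: defers its flexible load with a
    probability proportional to the drop below the 50 Hz reference."""
    def __init__(self, load_kw, sensitivity=2.0):
        self.load_kw = load_kw
        self.sensitivity = sensitivity

    def act(self, grid_frequency_hz, reference_hz=50.0):
        deviation = reference_hz - grid_frequency_hz      # positive => under-frequency
        p_defer = min(1.0, max(0.0, self.sensitivity * deviation))
        running = random.random() >= p_defer              # local, decentralised decision
        return self.load_kw if running else 0.0

# Aggregate demand reacts to an under-frequency event without central control
agents = [DemandAgent(load_kw=1.5) for _ in range(100)]
print(sum(a.act(49.8) for a in agents))   # total demand after each agent decides locally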
Journal: Informatica
Volume 25, Issue 4 (2014), pp. 581–616
Abstract
The paper summarizes the results of research on the modeling and implementation of advanced planning and scheduling (APS) systems carried out over the last twenty years. It discusses the concept of an APS system – as it is understood today – and highlights the modeling and implementation challenges with which the developers of such systems have to cope. Some of these challenges were identified through a study of the scientific literature, others through an in-depth analysis of the experience gained during the development of a real-world APS system – the Production Efficiency Navigator (PEN system). The paper contributes to APS systems theory by proposing the concept of an ensemble of collaborating algorithms.
Pub. online: 1 Jan 2013 | Type: Research Article | Open Access
Journal: Informatica
Volume 24, Issue 1 (2013), pp. 87–102
Abstract
Frequent sequence mining is one of the main challenges in data mining, especially for large databases that consist of millions of records. There are a number of applications where frequent sequence mining is very important: medicine, finance, internet behavioural data, marketing data, etc. Exact frequent sequence mining methods make multiple passes over the database; if the database is large, this is a time-consuming and expensive task. Approximate methods for frequent sequence mining are faster than exact methods because, instead of making multiple passes over the original database, they analyze a much shorter sample of the original database formed in a specific way. This paper presents the Markov Property Based Method (MPBM) – an approximate method for mining frequent sequences based on kth-order Markov models that makes only a few passes over the original database. The method has been implemented and evaluated using a real-world foreign exchange database and compared to exact and approximate frequent sequence mining algorithms.
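The sketch below shows the general modelling step such an approach relies on: estimating kth-order Markov transition probabilities from a sample of sequences and using them to approximate how likely a candidate sequence is. It is a generic illustration under these assumptions, not the published MPBM algorithm, and the toy sequences are made up.

```python
from collections import defaultdict

def build_markov_model(sequences, k=1):
    """Estimate k-th order Markov transition probabilities from a sample of sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(len(seq) - k):
            counts[tuple(seq[i:i + k])][seq[i + k]] += 1
    return {ctx: {s: c / sum(nxt.values()) for s, c in nxt.items()}
            for ctx, nxt in counts.items()}

def approx_sequence_probability(model, seq, k=1):
    """Approximate probability of a candidate sequence, conditional on its first k symbols."""
    p = 1.0
    for i in range(len(seq) - k):
        p *= model.get(tuple(seq[i:i + k]), {}).get(seq[i + k], 0.0)
    return p

sample = [list("abcabd"), list("abcabc"), list("abcaba")]   # toy sequence sample
model = build_markov_model(sample, k=1)
print(approx_sequence_probability(model, list("abc"), k=1))
```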
Pub. online: 1 Jan 2012 | Type: Research Article | Open Access
Journal: Informatica
Volume 23, Issue 4 (2012), pp. 563–579
Abstract
The paper proposes a novel predictive-reactive planning and scheduling framework in which both approaches are combined to complement each other in a reasonably balanced way. Neither original scheduling algorithms nor original techniques can be found in this paper; nor does it intend to invent new mechanisms or to propose radically new ideas. The aim is to choose, adapt and test ideas, mechanisms and algorithms already proposed by other researchers. The focus of this research is on make-to-order production environments. The proposed approach aims not only to absorb disruptions in shop-floor-level schedules but also to mitigate the impacts of potential exceptions that disrupt mid-term production plans. It is based on the application of risk mitigation techniques and combines various simulation techniques extended by optimization procedures. The proposed approach is intended to be implemented in an Advanced Planning and Scheduling system, which is an add-on for an Enterprise Resource Planning system. To make the focus of the paper easier to understand, the position from which we start is clarified at the beginning.
Pub. online: 1 Jan 2011 | Type: Research Article | Open Access
Journal: Informatica
Volume 22, Issue 4 (2011), pp. 507–520
Abstract
Most classical visualization methods, including multidimensional scaling and its particular case, Sammon's mapping, encounter difficulties when analyzing large data sets. One possible way to solve the problem is the application of artificial neural networks. This paper presents the visualization of large data sets using the feed-forward neural network SAMANN. This backpropagation-like learning rule has been developed to allow a feed-forward artificial neural network to learn Sammon's mapping in an unsupervised way. In its initial form, SAMANN training is computationally expensive. In this paper, we identify conditions that reduce the computational cost of visualizing even large data sets. It is shown that the original dimensionality of the data can be reduced to a lower one using a small number of iterations. The visualization results for real-world data sets are presented.
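For reference, the sketch below computes Sammon's stress, the error measure between pairwise distances in the original and projected spaces that Sammon's mapping (and hence a SAMANN-style network) minimizes. The tiny 3-D/2-D example data are made up, and the SAMANN training rule itself is not shown.

```python
import numpy as np

def sammon_stress(D_high, D_low, eps=1e-12):
    """Sammon's stress between pairwise distances in the original space (D_high)
    and in the projected space (D_low)."""
    iu = np.triu_indices_from(D_high, k=1)     # use each pair once
    dh, dl = D_high[iu], D_low[iu]
    return np.sum((dh - dl) ** 2 / (dh + eps)) / np.sum(dh)

# Tiny example with three points projected from 3-D to 2-D
X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.1], [0.1, 1.9]])
dist = lambda Z: np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
print(sammon_stress(dist(X), dist(Y)))
```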
Pub. online: 1 Jan 2011 | Type: Research Article | Open Access
Journal: Informatica
Volume 22, Issue 1 (2011), pp. 1–10
Abstract
Estimation and modelling problems, as they arise in many data analysis areas, often turn out to be unstable and/or intractable by standard numerical methods. Such problems frequently occur when fitting large data sets to a certain model and in predictive learning. Heuristics are general recommendations based on practical statistical evidence, in contrast to a fixed set of rules that cannot vary but are guaranteed to give the correct answer. Although the use of these methods has become more standard in several fields of science, their use for estimation and modelling in statistics still appears to be limited. This paper surveys a set of problem-solving strategies, guided by heuristic information, that are expected to be used more frequently. The use of recent advances in different fields of large-scale data analysis is promoted, focusing on applications in medicine, biology and technology.