Journal: Informatica
Volume 25, Issue 2 (2014), pp. 283–298
Abstract
A new asymmetric cipher based on the matrix power function is presented. The cipher belongs to the intensively developing class of non-commuting cryptography, which is expected to be resistant to potential quantum cryptanalysis.
The algebraic structures underlying the proposed cipher construction are defined. A security analysis is performed and the security parameters are identified; on the basis of this analysis, secure parameter values are determined. A comparison of the efficiency of microprocessor implementations of the proposed algorithm for different security parameter values is presented.
Journal: Informatica
Volume 25, Issue 2 (2014), pp. 265–282
Abstract
The aim of this study is to predict the energy generated by a solar thermal system. To achieve this, a hybrid intelligent system was developed based on local regression models with low complexity and high accuracy. The input data are divided into clusters using a Self-Organizing Map; a local model is then created for each cluster. Different regression techniques were tested and the best-performing one was chosen. The novel hybrid regression system based on local models is empirically verified with a real dataset obtained from the solar thermal system of a bioclimatic house.
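The cluster-then-fit idea behind such hybrid systems can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's system: it uses a toy one-dimensional SOM and per-cluster least-squares linear models, and the helper names (`train_som_1d`, `fit_local_models`, `predict`) and any data are invented for the example.

```python
import numpy as np

def train_som_1d(X, n_units=4, n_iter=200, lr0=0.5, sigma0=1.0, seed=0):
    """Train a tiny 1-D Self-Organizing Map; returns the unit weight vectors."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))   # best-matching unit
        lr = lr0 * np.exp(-t / n_iter)                     # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)               # shrinking neighbourhood
        d = np.abs(np.arange(n_units) - bmu)               # grid distance to BMU
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))           # neighbourhood kernel
        W += lr * h[:, None] * (x - W)
    return W

def fit_local_models(X, y, W):
    """Fit one least-squares linear model per SOM cluster."""
    labels = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
    models = {}
    for c in np.unique(labels):
        m = labels == c
        if m.sum() < 2:
            continue                                       # skip degenerate clusters
        A = np.c_[X[m], np.ones(m.sum())]                  # design matrix with bias
        models[int(c)] = np.linalg.lstsq(A, y[m], rcond=None)[0]
    return labels, models

def predict(x, W, models):
    """Route a query to its nearest SOM unit and apply that cluster's model."""
    c = int(np.argmin(((W - x) ** 2).sum(axis=1)))
    if c not in models:                                    # fall back to any model
        c = next(iter(models))
    return float(np.r_[x, 1.0] @ models[c])
```

The point of the design is that each local model only has to be accurate within its own cluster, which keeps every model simple while the ensemble covers the whole input space.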
Journal: Informatica
Volume 25, Issue 2 (2014), pp. 241–264
Abstract
The optimal financial investment (portfolio) problem has been investigated by leading financial organizations and scientists; Nobel Prizes were awarded for Modern Portfolio Theory (MPT) and its further developments. The aim of those works was to define the optimal diversification of assets for a given acceptable risk level.
In contrast, the objective of this work is to provide a flexible, easily adaptable model of virtual financial markets designed for the needs of individual users in the context of utility theory. The aim is to optimize investment strategies; this is the new element of the proposed model and simulation system, since optimization is performed in the space of investment strategies, both short-term and longer-term.
A new and unexpected result of experiments with historical financial time series using the PORTFOLIO model is the observation that minimal prediction errors do not provide maximal profits.
Journal: Informatica
Volume 25, Issue 2 (2014), pp. 221–239
Abstract
An application of fuzzy modeling to the problem of telecommunications time-series prediction is proposed in this paper. The model-building process is a two-stage sequential algorithm based on the Subtractive Clustering (SC) and Orthogonal Least Squares (OLS) techniques. First, SC is employed to partition the input space and to determine the number of fuzzy rules and the premise parameters. Subsequently, an orthogonal estimator determines which input terms should be included in the consequent part of each fuzzy rule and calculates their parameters. A comparative analysis with well-established forecasting models is conducted on real-world telecommunications data, highlighting the characteristics of the proposed forecaster.
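The first stage, subtractive clustering, can be sketched as follows. This is a minimal sketch of Chiu-style subtractive clustering under stated assumptions (data normalised to the unit hypercube, cluster radius `ra`, penalty radius `1.5 * ra`, stopping ratio `eps`); it is not the paper's implementation, and the parameter values are illustrative.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, eps=0.15):
    """Pick cluster centres by data-point potentials; returns centre indices.

    Each point's potential is a sum of Gaussian kernels over all points;
    the highest-potential point becomes a centre, then potentials near it
    are suppressed so the next centre lands in a different dense region.
    """
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    P = np.exp(-alpha * d2).sum(axis=1)                   # initial potentials
    P_first = float(P.max())
    centres = []
    for _ in range(len(X)):
        c = int(np.argmax(P))
        if P[c] < eps * P_first:                          # stopping criterion
            break
        centres.append(c)
        P = P - P[c] * np.exp(-beta * d2[:, c])           # suppress neighbours
    return centres
```

In the fuzzy-model context, each centre found this way would seed one rule's premise (a Gaussian membership function per input), which is why the number of centres fixes the number of rules.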
Journal: Informatica
Volume 25, Issue 2 (2014), pp. 209–220
Abstract
The paper presents a novel algorithm for the restoration of missing samples in additive Gaussian noise, based on forward–backward autoregressive (AR) parameter estimation and an extrapolation technique. The proposed algorithm is implemented in two consecutive steps. In the first step, the forward–backward approach is used to estimate the AR parameters of the neighbouring segments; in the second step, the extrapolation technique is applied to these segments to restore the samples of the missing segment. The experimental results demonstrate that the restoration error of the proposed algorithm is smaller than that of the Burg algorithm.
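The two-step scheme can be illustrated with a simplified sketch: fit AR coefficients by least squares over combined forward and backward prediction errors on each neighbouring segment, extrapolate into the gap from both sides, and cross-fade. The function names and the cross-fade weighting are assumptions made for this example; the paper's estimator and blending may differ.

```python
import numpy as np

def ar_fb_fit(x, order):
    """Least-squares AR fit minimising forward AND backward prediction errors."""
    n = len(x)
    rows, targets = [], []
    for t in range(order, n):                              # forward equations
        rows.append(x[t - order:t][::-1]); targets.append(x[t])
    for t in range(n - order):                             # backward equations
        rows.append(x[t + 1:t + order + 1]); targets.append(x[t])
    A, b = np.array(rows), np.array(targets)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def restore_gap(left, right, gap_len, order=4):
    """Extrapolate into the gap from both neighbouring segments and cross-fade."""
    a_l = ar_fb_fit(left, order)
    a_r = ar_fb_fit(right, order)
    fwd = list(left[-order:])                              # forward from left edge
    for _ in range(gap_len):
        fwd.append(float(np.dot(a_l, fwd[-1:-order - 1:-1])))
    fwd = np.array(fwd[order:])
    bwd = list(right[:order][::-1])                        # backward from right edge
    for _ in range(gap_len):
        bwd.append(float(np.dot(a_r, bwd[-1:-order - 1:-1])))
    bwd = np.array(bwd[order:])[::-1]                      # back to time order
    w = np.linspace(1.0, 0.0, gap_len)                     # trust left near left edge
    return w * fwd + (1 - w) * bwd
```

Using both forward and backward equations roughly doubles the number of estimation equations per segment, which is what stabilises the AR fit on short neighbouring segments.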
Journal: Informatica
Volume 25, Issue 2 (2014), pp. 185–208
Abstract
In this study, we evaluated the effects of normalization procedures on the decision outcomes of a given MADM method. To this end, using attribute weights calculated with the FAHP method, we applied the TOPSIS method to evaluate the financial performance of 13 Turkish deposit banks. In doing so, we used the four most popular normalization procedures. Our study revealed that the vector normalization procedure, which is used in the TOPSIS method by default, generated the most consistent results. Among the linear normalization procedures, the max-min and max methods appeared to be possible alternatives to vector normalization.
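The role of the normalization step inside TOPSIS can be made concrete with a small sketch. Assumptions made for this example: all attributes are benefit-type (larger is better), the fourth linear procedure is sum normalization, and the decision matrix and weights are invented, not the banks' data.

```python
import numpy as np

def normalise(X, method):
    """Four common normalisation procedures for a decision matrix X
    (rows = alternatives, columns = benefit attributes)."""
    X = X.astype(float)
    if method == "vector":                       # x_ij / sqrt(sum_i x_ij^2)
        return X / np.sqrt((X ** 2).sum(axis=0))
    if method == "max-min":                      # (x - min) / (max - min)
        return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    if method == "max":                          # x / max
        return X / X.max(axis=0)
    if method == "sum":                          # x / sum
        return X / X.sum(axis=0)
    raise ValueError(method)

def topsis(X, w, method="vector"):
    """Relative closeness of each alternative to the ideal solution."""
    V = normalise(X, method) * w                 # weighted normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal points
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)               # closeness C_i in [0, 1]
```

Running `topsis` with each of the four `method` values on the same matrix and comparing the resulting rankings is exactly the kind of sensitivity check the study performs.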
Journal: Informatica
Volume 25, Issue 1 (2014), pp. 155–184
Abstract
In this paper we propose a genetic algorithm based on insertion heuristics for the vehicle routing problem with constraints. A random insertion heuristic is used to construct initial solutions and to reconstruct existing ones: the position at which a randomly chosen node is inserted is selected by evaluating an objective function. Random insertion preserves the stochastic character of the genetic algorithm while keeping the generated individuals feasible. The defined crossover and mutation operators incorporate the random insertion heuristic; they analyse individuals and select the parts that should be reinserted. Additionally, a second population is used in the mutation process. It increases the probability that a solution obtained by mutation will survive in the first population, and thus the probability of finding the global optimum. A comparison of results shows that the solutions found by the proposed algorithm are close to the optimal solutions obtained by other genetic algorithms; however, in most cases the proposed algorithm finds a solution in a shorter time, which makes it competitive with the others.
Journal: Informatica
Volume 25, Issue 1 (2014), pp. 139–154
Abstract
Trust is an important factor for successful e-commerce and e-media applications. However, these media inherently disable many ordinary communication channels and means, and thereby affect trust-forming factors; the cyber environment therefore requires additional support when it comes to trust. This is one key reason why computational trust management methods have been under development for some fifteen years now; another is to enable better decision making through mathematical modeling and simulation in other areas. These methods are grounded on certain premises, which are analyzed in this paper. On this basis, Qualitative Assessment Dynamics (QAD) is presented, which complements the above methods. In contrast to other methods, QAD is aligned with certain principles of human reasoning; it thus extends the scope of computational trust management technologies, which are typically concerned with artificial ways of reasoning, and also provides a basis for applications in ordinary environments where humans are involved. Using this methodology, experimental work is presented, applied to the area of organizations and human-factor management.
Journal: Informatica
Volume 25, Issue 1 (2014), pp. 113–137
Abstract
This paper presents an adaptive image-watermarking technique based on a just-noticeable-distortion (JND) profile and a fuzzy inference system (FIS) optimized with a genetic algorithm (GA), referred to here as the AIWJFG technique. During embedding, it inserts a watermark into an image with reference to the image's JND profile, making the watermark more imperceptible. It employs image features and local statistics to construct an FIS, and then exploits the FIS to extract watermarks without the original images. In addition, the FIS can be further optimized by a GA, which markedly improves its watermark-extraction performance. Experimental results demonstrate that the AIWJFG technique not only makes the embedded watermarks more imperceptible but also provides adaptive and robust capabilities to resist the image-manipulation attacks considered in the paper.
Journal: Informatica
Volume 25, Issue 1 (2014), pp. 95–111
Abstract
Nowadays, data mining algorithms are successfully applied to analyze real-life data and provide useful suggestions. Since some of the available real data are multi-valued and multi-labeled, researchers have in recent years focused on developing approaches to mine such data. Unfortunately, no existing algorithm can discretize multi-valued and multi-labeled data to improve the performance of data mining. In this paper, we propose a novel approach to solve this problem, based on a statistical discretization metric and the simulated annealing search algorithm. Experimental results show that our approach effectively improves the performance of a state-of-the-art multi-valued and multi-labeled classification algorithm.
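The general shape of such a search, simulated annealing over candidate cut-points scored by a statistical metric, can be sketched as follows. The abstract does not specify the metric, so this example substitutes a simple label-entropy score and single-label data; the function names, move scheme, and cooling schedule are all assumptions for illustration.

```python
import math
import random

def weighted_entropy(values, labels, cuts):
    """Size-weighted label entropy of the bins induced by cut-points (lower = purer)."""
    bins = {}
    for v, y in zip(values, labels):
        b = sum(v > c for c in cuts)               # bin index of value v
        bins.setdefault(b, []).append(y)
    total = len(values)
    h = 0.0
    for ys in bins.values():
        n = len(ys)
        for lab in set(ys):
            p = ys.count(lab) / n
            h -= (n / total) * p * math.log(p)
    return h

def anneal_cuts(values, labels, n_cuts=2, t0=1.0, cooling=0.95, steps=400, seed=0):
    """Simulated annealing search over cut-point positions."""
    rng = random.Random(seed)
    lo, hi = min(values), max(values)
    cuts = sorted(rng.uniform(lo, hi) for _ in range(n_cuts))
    cost = weighted_entropy(values, labels, cuts)
    best, best_cost = cuts[:], cost
    t = t0
    for _ in range(steps):
        cand = cuts[:]
        i = rng.randrange(n_cuts)
        cand[i] += rng.gauss(0, 0.1 * (hi - lo))   # perturb one cut-point
        cand.sort()
        c = weighted_entropy(values, labels, cand)
        # accept improvements always, worse moves with temperature-dependent odds
        if c < cost or rng.random() < math.exp((cost - c) / t):
            cuts, cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        t *= cooling
    return best, best_cost
```

The annealing schedule lets the search cross plateaus and escape poor cut placements early on, then behaves greedily as the temperature drops.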