Journal: Informatica
Volume 24, Issue 3 (2013), pp. 381–394
Abstract
As users increasingly work with large multimedia data, more of them turn to cloud computing technology. Large data must be managed efficiently, and transmission efficiency must be considered for multimedia data of varying quality. To this end, it is essential to distribute the key resources that constitute cloud computing (CPU, network, and storage) efficiently, and adaptable distribution algorithms are required for this purpose. This study proposes a scheme that applies a MapReduce formulation of the FP-Growth algorithm, a well-known data mining method, on the Hadoop platform at the IaaS (Infrastructure as a Service) level, which comprises CPU, networking, and storage, and then uses this scheme to allocate resources.
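As an illustration of the kind of scheme described, the sketch below shows the first MapReduce pass of FP-Growth, counting item frequencies across transactions, in plain Python rather than on a live Hadoop cluster; the function names and the min_support threshold are assumptions for illustration, not the paper's implementation.

```python
from collections import defaultdict

# Illustrative sketch of the first MapReduce pass of FP-Growth:
# counting item frequencies across transactions. Names and the
# min_support threshold are assumptions, not the paper's code.

def map_phase(transactions):
    """Emit (item, 1) pairs, as a Hadoop mapper would."""
    for transaction in transactions:
        for item in transaction:
            yield item, 1

def reduce_phase(pairs):
    """Sum counts per item, as a Hadoop reducer would."""
    counts = defaultdict(int)
    for item, count in pairs:
        counts[item] += count
    return counts

def frequent_items(transactions, min_support=2):
    """Keep items whose global count meets the support threshold."""
    counts = reduce_phase(map_phase(transactions))
    return {item: c for item, c in counts.items() if c >= min_support}

if __name__ == "__main__":
    data = [["bread", "milk"], ["bread", "beer"], ["milk", "beer", "bread"]]
    print(frequent_items(data))  # {'bread': 3, 'milk': 2, 'beer': 2}
```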
Journal: Informatica
Volume 24, Issue 3 (2013), pp. 357–380
Abstract
This study proposes a model for supporting the decision making process behind the cloud policy for deploying virtual machines in cloud environments. We explore two configurations: the static case, in which virtual machines are generated according to the cloud orchestration, and the dynamic case, in which virtual machines are reactively adapted to job submissions by means of migration in order to optimize performance time metrics. We integrate both solutions in the same simulator and measure the performance of various combinations of virtual machines, jobs, and hosts in terms of average execution time and total simulation time. We conclude that the dynamic configuration is advantageous, as it offers optimized job execution performance.
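The toy simulation below contrasts the two configurations in spirit: static placement fixes each job's host at orchestration time, while the dynamic case picks the least-loaded host per submission, standing in for migration. The load model and all constants are illustrative assumptions, not the paper's simulator.

```python
import random

# Toy comparison: static placement fixes a job's host up front;
# the dynamic case routes each job to the least-loaded host,
# standing in for reactive VM migration. All numbers are assumed.

def simulate(num_hosts=4, num_jobs=100, dynamic=False, seed=1):
    random.seed(seed)
    loads = [0.0] * num_hosts            # accumulated work per host
    for _ in range(num_jobs):
        job_len = random.uniform(1.0, 10.0)
        if dynamic:
            host = min(range(num_hosts), key=lambda h: loads[h])
        else:
            host = random.randrange(num_hosts)  # fixed at orchestration
        loads[host] += job_len
    return max(loads)                    # makespan: last host to finish

print("static  makespan:", round(simulate(dynamic=False), 1))
print("dynamic makespan:", round(simulate(dynamic=True), 1))
```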
Journal: Informatica
Volume 24, Issue 3 (2013), pp. 339–356
Abstract
Generating sequences of random numbers or bits is a necessity in many situations (cryptography, modeling, simulations, etc.). Such sequences must be random in the sense that their behavior is unpredictable. For example, the security of many cryptographic systems depends on the generation of unpredictable values to be used as keys. Since randomness is tied to unpredictability, it can be described in probabilistic terms, and the randomness of a sequence can be studied by means of a hypothesis test. A new statistical test for the randomness of bit sequences is proposed in this paper. The test focuses on the number of distinct fixed-length patterns that appear along the binary sequence. When ‘few’ distinct patterns appear in the sequence, the hypothesis of randomness is rejected; conversely, when ‘many’ distinct patterns appear, the hypothesis of randomness is accepted.
The proposed test can be used to complement other statistical tests included in suites for studying randomness. The exact distribution of the test statistic is derived, so the test can be applied to both short and long bit sequences. Simulation results show the test's efficiency in detecting deviations from randomness that other statistical tests are unable to detect. The test was also applied to binary sequences obtained from several pseudorandom number generators, yielding results consistent with randomness. The proposed test is distinguished by fast computation once the critical values have been calculated in advance.
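A minimal sketch of the test's core statistic, the number of distinct fixed-length patterns observed along the sequence; the pattern length and the informal interpretation below are illustrative, since the paper derives the exact distribution and critical values.

```python
import random

# Core statistic of the test: how many distinct overlapping
# fixed-length patterns occur along the binary sequence. The
# pattern length is an illustrative choice; the paper supplies
# the exact distribution and critical values.

def distinct_patterns(bits, length=4):
    """Count distinct overlapping length-bit patterns in the sequence."""
    seen = set()
    for i in range(len(bits) - length + 1):
        seen.add(tuple(bits[i:i + length]))
    return len(seen)

random.seed(42)
bits = [random.randint(0, 1) for _ in range(256)]
# With 2**4 = 16 possible patterns and 253 windows, a random sequence
# should exhibit (almost) all of them; 'few' distinct patterns would
# lead to rejecting the randomness hypothesis.
print(distinct_patterns(bits), "distinct patterns out of", 2 ** 4)
```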
Journal: Informatica
Volume 24, Issue 2 (2013), pp. 315–337
Abstract
We consider a generalization of heterogeneous meta-programs by (1) introducing an extra level of abstraction within the meta-program structure, and (2) meta-program transformations. We define the basic terms, formalize the transformation tasks, and consider properties of meta-program transformations and rules to manage complexity through the following transformation processes: (1) reverse transformation, in which a correct one-stage meta-program M1 is transformed into an equivalent two-stage meta-meta-program M2; (2) two-stage forward transformation, in which M2 is transformed into a set of meta-programs, and each meta-program is transformed into a set of target programs. The results are as follows: (a) formalization of the transformation processes within the heterogeneous meta-programming paradigm; (b) introduction and validation of equivalent transformations of meta-programs into meta-meta-programs and vice versa; (c) introduction of metrics to evaluate the complexity of meta-specifications. The results are supported by examples, theoretical reasoning, and experiments.
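A minimal illustration, not the paper's formalism: a one-stage meta-program M1 maps parameter values directly to target programs, while an equivalent two-stage meta-meta-program M2 first fixes one parameter to yield a meta-program, which is then instantiated into target programs.

```python
# Illustrative only: M1 is a one-stage meta-program mapping
# parameters directly to target program text; M2 is an equivalent
# two-stage meta-meta-program that first fixes 'op', yielding a
# meta-program over 'name'. Not the paper's notation or language.

def m1(op, name):
    """One-stage meta-program: parameters -> target program text."""
    return f"def {name}(a, b):\n    return a {op} b\n"

def m2(op):
    """Meta-meta-program: fixing 'op' yields a meta-program over 'name'."""
    def meta_program(name):
        return f"def {name}(a, b):\n    return a {op} b\n"
    return meta_program

# Equivalence of the forward/reverse transformations on one instance:
assert m1("+", "add") == m2("+")("add")
print(m1("+", "add"))
```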
Journal: Informatica
Volume 24, Issue 2 (2013), pp. 291–313
Abstract
This study proposes a novel Improved Hybrid PSO-GA (IHPG) algorithm that combines the advantages of the PSO and GA algorithms. The IHPG algorithm uses the velocity and position update rules of PSO together with the selection, crossover, and mutation operators of GA. The study explores a quality monitoring experiment using three existing neural network approaches to data fusion of wireless sensor module measurements. With ten sensors deployed in a sensing area, the collected data must undergo digital conversion and weight adjustment. The experimental results show that optimizing the smoothing parameter improves the accuracy of the estimated data and reduces the randomness of the computation. According to the experimental analysis, IHPG outperforms standalone PSO and GA across the various neural network learning models compared.
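A minimal sketch of a hybrid PSO-GA loop in the spirit of IHPG, minimizing a toy test function: particles follow the PSO velocity and position updates, then GA-style selection, crossover, and mutation are applied to the population. All constants and the objective are illustrative assumptions, not the study's setup.

```python
import random

# Hybrid PSO-GA sketch: PSO velocity/position updates followed by
# GA selection, crossover and mutation each generation. The sphere
# objective and all constants are illustrative assumptions.

DIM, POP, ITERS = 2, 20, 100

def fitness(x):                       # sphere function (minimize)
    return sum(v * v for v in x)

def pso_ga():
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
    vel = [[0.0] * DIM for _ in range(POP)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=fitness)[:]
    for _ in range(ITERS):
        for i in range(POP):
            for d in range(DIM):      # PSO velocity/position update
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
            if fitness(pos[i]) < fitness(gbest):
                gbest = pos[i][:]
        # GA step: tournament selection, one-point crossover, mutation
        new_pos = []
        for _ in range(POP):
            a, b = random.sample(range(POP), 2)
            parent1 = pos[a] if fitness(pos[a]) < fitness(pos[b]) else pos[b]
            c, d2 = random.sample(range(POP), 2)
            parent2 = pos[c] if fitness(pos[c]) < fitness(pos[d2]) else pos[d2]
            cut = random.randrange(1, DIM) if DIM > 1 else 0
            child = parent1[:cut] + parent2[cut:]
            if random.random() < 0.1:             # mutation
                child[random.randrange(DIM)] += random.gauss(0, 0.1)
            new_pos.append(child)
        pos = new_pos
    return gbest, fitness(gbest)

print(pso_ga())
```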
Journal: Informatica
Volume 24, Issue 2 (2013), pp. 275–290
Abstract
Using an example, we describe how the outcomes of computational experiments can be employed to study the stability of a numerical algorithm when the related theoretical propositions have not yet been proven. More precisely, we propose a systematic and generalized methodology for investigating how the weight functions α(x) and β(x), present in the integral boundary conditions, influence the stability of difference schemes for a class of parabolic equations. The methodology rests on investigating the spectrum of the matrix that defines the transition to the upper layer of the difference scheme. The spectral structure of this matrix is analysed both by analytic methods and by computational experiment.
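The computational core of such a methodology can be illustrated as follows: form the matrix defining the transition to the upper layer of the scheme and inspect its spectrum, with eigenvalues inside the unit disk indicating stability. The sketch below uses an explicit scheme for the heat equation with classical Dirichlet conditions as a stand-in for the paper's schemes with integral boundary conditions.

```python
import numpy as np

# Stand-in example: transition matrix S of the explicit scheme for
# u_t = u_xx with Dirichlet conditions, so that u^{k+1} = S u^k.
# Stability is read off from the spectral radius of S. The paper's
# integral boundary conditions would change how S is assembled.

def transition_matrix(n, r):
    """Tridiagonal matrix of the explicit step with ratio r = tau/h**2."""
    S = np.eye(n) * (1 - 2 * r)
    for i in range(n - 1):
        S[i, i + 1] = r
        S[i + 1, i] = r
    return S

for r in (0.4, 0.6):
    rho = max(abs(np.linalg.eigvals(transition_matrix(20, r))))
    print(f"r = {r}: spectral radius = {rho:.3f}",
          "(stable)" if rho <= 1 else "(unstable)")
```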
Journal: Informatica
Volume 24, Issue 2 (2013), pp. 253–274
Abstract
The paper deals with the application of the theory of locally homogeneous and isotropic Gaussian fields (LHIGF) to the probabilistic modelling of multivariate data structures. An asymptotic model is also studied, in which the correlation function parameter of the Gaussian field tends to infinity. A kriging procedure is developed that yields a simple extrapolator by means of a matrix of powers of the distances between pairs of measurement points. The resulting model is rather simple and is defined only by the mean and variance parameters, which are efficiently evaluated by the maximum likelihood method. Results are given for the application of the developed extrapolation method to two analytically computed surfaces and to estimating the position of a spacecraft re-entering the atmosphere.
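A much-simplified sketch of such an extrapolator: an interpolant built from a matrix of powers of the distances between measurement points, with only a mean parameter. The exponent and the data are illustrative assumptions; the paper's estimator and its maximum likelihood fitting are more involved.

```python
import numpy as np

# Simplified kriging-type predictor from a matrix of distance powers.
# The exponent p and the 1-D data are illustrative assumptions.

def krige(X, y, x_new, p=1.5):
    """Predict y at x_new from measurements (X, y)."""
    mu = y.mean()                                 # mean parameter
    K = np.abs(X[:, None] - X[None, :]) ** p      # distance-power matrix
    K += 1e-9 * np.eye(len(X))                    # numerical jitter
    k = np.abs(X - x_new) ** p
    w = np.linalg.solve(K, k)                     # kriging-type weights
    return mu + w @ (y - mu)

X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)
print(krige(X, y, 1.5), "vs true", np.sin(1.5))
```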
Journal: Informatica
Volume 24, Issue 2 (2013), pp. 231–251
Abstract
This paper presents a new approach to business and information systems (IS) alignment, consisting of a framework, a metamodel, a process, and tools for implementing it in practice. The purpose of the approach is to fill the gap between existing conceptual business-IS alignment frameworks and empirical business-IS alignment methods. The suggested approach is based on SOA, GRAAL, and enterprise modeling techniques such as TOGAF, DoDAF, and UPDM. The proposed approach has been applied to four real-world projects. Both the application results and a small example are provided to validate the suitability of the approach.
Journal: Informatica
Volume 24, Issue 2 (2013), pp. 219–230
Abstract
In this paper, we present a cryptanalysis of a public key cryptosystem based on the matrix combinatorial problem proposed by Wang and Hu (2010). Using lattice-based methods for finding small integer solutions of modular linear equations, we recover the secret key of this cryptosystem for a certain range of parameters. In experiments with the parameters suggested by Wang and Hu, the secret key can be recovered in seconds.
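A toy illustration of the lattice technique, not the attack itself: a small integer solution of a modular linear equation appears as a short vector in a two-dimensional lattice, where Lagrange-Gauss reduction already suffices. The actual cryptanalysis reduces higher-dimensional lattices (e.g. with LLL), and the numbers below are not the cryptosystem's parameters.

```python
# Toy lattice attack: recover small integers (x, y) with
# x = a*y (mod N) as a short vector of the lattice spanned by
# (N, 0) and (a, 1). Real attacks use LLL on larger lattices;
# in 2-D, Lagrange-Gauss reduction suffices. Numbers are made up.

def gauss_reduce(u, v):
    """Lagrange-Gauss reduction: return a reduced basis of a 2-D lattice."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u                       # keep u the shorter vector
        m = round(dot(u, v) / dot(u, u))
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])

N = 1_000_003
x, y = 137, 59                      # the small secret pair
a = (x * pow(y, -1, N)) % N         # public value: a = x / y mod N
u, v = gauss_reduce((N, 0), (a, 1))
print(u)                            # shortest vector reveals (x, y) up to sign
```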
Journal: Informatica
Volume 24, Issue 2 (2013), pp. 199–217
Abstract
Inventory management is an important part of the production planning process for enterprises. Decisions about when and how much to buy or make can be supported by classifying inventory items according to their characteristics. For this purpose, ABC inventory classification is one of the most commonly used approaches. In this study, a fuzzy analytic network process (ANP) approach is proposed to determine the weights of the criteria, and the scores of the inventory items are determined with simple additive weighting using linguistic terms. Applying fuzzy ANP to a multi-criteria inventory classification problem is the novelty of this study within the related literature. In addition, the application area of the problem, the management of engineering vehicles' items in a construction firm, differs from that of other studies.
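The scoring step can be sketched as follows: criteria weights (as if obtained from fuzzy ANP) are combined with linguistic ratings via simple additive weighting, and the ranked items are cut into ABC classes. The weights, linguistic scale, items, and cut-offs below are all assumptions for illustration, not the study's data.

```python
# SAW scoring over linguistic ratings, with weights standing in for
# the fuzzy ANP output. All weights, terms, items, and ABC cut-offs
# are illustrative assumptions, not the study's data.

# Triangular fuzzy numbers for linguistic terms: (low, mid, high)
TERMS = {"low": (0.0, 0.1, 0.3), "medium": (0.3, 0.5, 0.7),
         "high": (0.7, 0.9, 1.0)}

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    return sum(tfn) / 3.0

def saw_score(ratings, weights):
    """Simple additive weighting over defuzzified linguistic ratings."""
    return sum(w * defuzzify(TERMS[r]) for r, w in zip(ratings, weights))

weights = [0.5, 0.3, 0.2]           # as if from fuzzy ANP (assumed)
items = {"hydraulic pump": ["high", "high", "medium"],
         "air filter":     ["low", "medium", "low"],
         "track chain":    ["high", "medium", "high"]}

ranked = sorted(items, key=lambda i: saw_score(items[i], weights), reverse=True)
for rank, item in enumerate(ranked):
    cls = "A" if rank < 1 else ("B" if rank < 2 else "C")
    print(cls, item, round(saw_score(items[item], weights), 3))
```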