Journal: Informatica
Volume 34, Issue 2 (2023), pp. 317–336
Abstract
The performance evaluation of public charging service quality is frequently treated as a multiple attribute group decision-making (MAGDM) problem. In this paper, an extended TOPSIS model is established to provide a new means of solving the performance evaluation of public charging service quality. The TOPSIS method, integrated with the FUCOM method in a probabilistic hesitant fuzzy environment, is applied to rank the candidate alternatives, and a numerical example on the performance evaluation of public charging service quality is used to test the practicability of the newly proposed method in comparison with other methods. The results show that the approach is straightforward, valid and simple to compute. The main results of this paper are: (1) a novel PHF-TOPSIS method is proposed; (2) the extended TOPSIS method is developed in the probabilistic hesitant fuzzy environment; (3) the FUCOM method is used to obtain the attribute weights; (4) the normalization of the original data adopts the latest procedure to verify its precision; (5) the built models and methods are useful for other selection and evaluation problems.
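For readers unfamiliar with the TOPSIS backbone that the paper extends, the following is a minimal sketch of classical TOPSIS on a crisp decision matrix; the probabilistic hesitant fuzzy operators and FUCOM weighting step of the proposed PHF-TOPSIS are not reproduced here, and the matrix, weights and criterion types are illustrative assumptions.

```python
import numpy as np

def topsis(X, w, benefit):
    """Classical TOPSIS on a crisp decision matrix X (alternatives x criteria).

    w       : criterion weights summing to 1
    benefit : boolean mask, True for benefit criteria, False for cost criteria
    Returns the relative closeness of each alternative (larger is better).
    """
    # Vector normalization, then weighting
    R = X / np.sqrt((X ** 2).sum(axis=0))
    V = R * w
    # Positive and negative ideal solutions per criterion type
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # Euclidean distances and relative closeness coefficient
    d_plus = np.sqrt(((V - pis) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - nis) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

# Illustrative data: 4 charging-service alternatives, 3 criteria (assumed)
X = np.array([[7.0, 0.4, 55], [6.5, 0.6, 60], [8.0, 0.5, 48], [7.2, 0.3, 52]])
w = np.array([0.5, 0.2, 0.3])            # e.g. weights a FUCOM-style step could supply
benefit = np.array([True, False, True])  # second criterion treated as a cost
print(topsis(X, w, benefit).round(3))
```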
Pub. online: 5 Aug 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 30, Issue 1 (2019), pp. 91–116
Abstract
The evolution of Wireless Sensor Networks has led to the development of protocols that must comply with their new restrictions while being efficient in terms of energy consumption and time. We focus on a collision resolution protocol, the so-called Two Cell Sorted (2CS-WSN). We propose three different ways to improve its performance by minimizing the collision resolution time or the energy consumption. After evaluating these proposals and comparing them with the original protocol, we recommend an improvement to the protocol which reduces the elapsed time by nearly $8\%$ and the number of retries and conflicts by more than $40\%$.
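As a loose illustration only (not the 2CS-WSN algorithm, whose two-cell sorting rules are not reproduced here), the sketch below runs a generic slotted-contention simulation and counts the two quantities the comparison rests on, resolution time and number of retries; all parameters are assumptions.

```python
import random

def resolve_contention(n_nodes, window=8, seed=1):
    """Toy slotted-contention simulation (not 2CS-WSN): nodes pick random slots,
    colliding nodes retry in the next round; returns (slots elapsed, retries)."""
    rng = random.Random(seed)
    pending, elapsed, retries = list(range(n_nodes)), 0, 0
    while pending:
        slots = {}
        for node in pending:
            slots.setdefault(rng.randrange(window), []).append(node)
        elapsed += window
        # Only nodes that chose a unique slot succeed; the rest retry
        pending = [n for group in slots.values() if len(group) > 1 for n in group]
        retries += len(pending)
    return elapsed, retries

print(resolve_contention(n_nodes=20))
```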
Pub. online: 1 Jan 2018 | Type: Research Article | Open Access
Journal: Informatica
Volume 29, Issue 2 (2018), pp. 265–280
Abstract
In the discrete form of multi-criteria decision-making (MCDM) problems, we are usually confronted with a decision matrix formed from the information of some alternatives on some criteria. In this study, a new method is proposed for simultaneous evaluation of criteria and alternatives (SECA) in an MCDM problem. To make this type of evaluation, a multi-objective non-linear programming model is formulated. The model is based on maximizing the overall performance of alternatives while taking into account the variation information of the decision matrix within and between criteria. The standard deviation is used to measure the within-criterion variation, and the correlation is utilized to capture the between-criterion variation information. By solving the multi-objective model, we can determine the overall performance scores of alternatives and the objective weights of criteria simultaneously. To validate the proposed method, a numerical example is used, and three analyses are made. Firstly, we analyse the objective weights determined by the method; secondly, the stability of the performance scores and ranking results is examined; and finally, the ranking results of the proposed method are compared with those of some existing MCDM methods. The results of the analyses show that the proposed method deals efficiently with MCDM problems.
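A hedged sketch of the two variation measures the model builds on: the within-criterion component is the standard deviation of each normalized criterion column, and the between-criterion component aggregates one minus the pairwise criterion correlations. The full SECA method feeds these into a multi-objective non-linear program, which is not reproduced here; the example matrix and the benefit-criteria assumption are illustrative.

```python
import numpy as np

def variation_components(X):
    """Within-criterion (standard deviation) and between-criterion
    (correlation-based) variation information of a decision matrix X
    (alternatives x criteria), used as reference points in SECA-style weighting."""
    # Simple linear (max) normalization of each column; benefit criteria assumed
    Z = X / X.max(axis=0)
    sigma = Z.std(axis=0, ddof=1)        # within-criterion variation
    R = np.corrcoef(Z, rowvar=False)     # pairwise criterion correlations
    pi = (1.0 - R).sum(axis=0)           # between-criterion variation
    # Normalized reference weights derived from each component
    return sigma / sigma.sum(), pi / pi.sum()

X = np.array([[3.0, 80, 0.6], [4.5, 70, 0.8], [2.8, 95, 0.5], [3.9, 60, 0.9]])
w_sigma, w_pi = variation_components(X)
print(w_sigma.round(3), w_pi.round(3))
```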
Pub. online: 1 Jan 2018 | Type: Research Article | Open Access
Journal: Informatica
Volume 29, Issue 2 (2018), pp. 187–210
Abstract
A relevant challenge introduced by decentralized installations of photovoltaic systems is the mismatch between green energy production and the load curve for domestic use. We propose an ICT solution that maximizes self-consumption through intelligent scheduling of appliances. The predictive approach is complemented with a reactive one to minimize the short-term effects of prediction errors and unforeseen loads. Using real measurements, we demonstrate that such errors can be compensated for by modulating the usage of continuously running devices such as fridges and heat pumps. Linear programming is used to compute, in real time, the optimal control of these devices.
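A minimal sketch of the kind of linear program involved, under assumed data: the controllable power of a modulating device is chosen per time slot so that grid import is minimized (equivalently, self-consumption of the PV forecast is maximized) while a daily energy requirement is met. The time series, limits and slot structure are illustrative, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative hourly data (kW) over 6 slots: PV forecast and base household load
pv   = np.array([0.0, 1.5, 3.0, 3.2, 1.8, 0.2])
base = np.array([0.8, 0.6, 0.5, 0.7, 1.0, 1.2])
T, p_max, e_req = len(pv), 1.0, 3.0   # device power limit (kW), daily energy need (kWh)

# Variables: x[0:T] = device power per slot, g[0:T] = grid import per slot
c = np.concatenate([np.zeros(T), np.ones(T)])           # minimize total grid import
A_ub = np.hstack([np.eye(T), -np.eye(T)])               # x_t - g_t <= pv_t - base_t
b_ub = pv - base
A_eq = np.concatenate([np.ones(T), np.zeros(T)])[None]  # sum of x_t equals e_req (1 h slots)
b_eq = [e_req]
bounds = [(0, p_max)] * T + [(0, None)] * T

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("device schedule:", res.x[:T].round(2), "grid import:", res.x[T:].round(2))
```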
Journal: Informatica
Volume 25, Issue 2 (2014), pp. 185–208
Abstract
In this study, we evaluated the effects of normalization procedures on the decision outcomes of a given MADM method. To this end, using attribute weights calculated with the FAHP method, we applied the TOPSIS method to evaluate the financial performance of 13 Turkish deposit banks. In doing so, we used the four most popular normalization procedures. Our study revealed that the vector normalization procedure, which is used in the TOPSIS method by default, generated the most consistent results. Among the linear normalization procedures, the max-min and max methods appeared to be possible alternatives to the vector normalization procedure.
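For reference, the sketch below shows common normalization procedures as they are usually defined in the MADM literature for benefit criteria; the abstract names vector, max-min and max normalization, and the inclusion of sum normalization as the fourth procedure is an assumption here, as is the sample matrix.

```python
import numpy as np

def normalize(X, method):
    """Common normalization procedures for a benefit-criterion decision
    matrix X (alternatives x criteria); cost criteria would need the usual
    complementary forms."""
    if method == "vector":    # x_ij / sqrt(sum_i x_ij^2)
        return X / np.sqrt((X ** 2).sum(axis=0))
    if method == "max-min":   # (x_ij - min_i x_ij) / (max_i x_ij - min_i x_ij)
        return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    if method == "max":       # x_ij / max_i x_ij
        return X / X.max(axis=0)
    if method == "sum":       # x_ij / sum_i x_ij
        return X / X.sum(axis=0)
    raise ValueError(method)

X = np.array([[250.0, 16, 12], [200.0, 18, 8], [300.0, 14, 10]])
for m in ("vector", "max-min", "max", "sum"):
    print(m, normalize(X, m).round(3))
```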
Journal: Informatica
Volume 21, Issue 1 (2010), pp. 31–40
Abstract
As a means of supporting quality of service guarantees, aggregate multiplexing has attracted a lot of attention in the networking community, since it requires less complexity than flow-based scheduling. However, contrary to what happens in the case of flow-based multiplexing, few results are available for aggregate-based multiplexing. In this paper, we consider a server multiplexer fed by several flows and analyze the impact caused by traffic aggregation on the flows at the output of the server. No restriction is imposed on the server multiplexer other than the fact that it must operate in a work-conserving fashion. We characterize the best arrival curves that constrain the number of bits that leave the server, in any time interval, for each individual flow. These curves can be used to obtain the delays suffered by packets in complex scenarios where multiplexers are interconnected, as well as to determine the maximum size of the buffers in the different servers. Previous results provide tight delay bounds for networks where servers are of the FIFO type. Here, we provide tight bounds for any work-conserving scheduling policy, so that our results can be applied to heterogeneous networks where the servers (routers) can use different work-conserving scheduling policies such as First-In First-Out (FIFO), Earliest Deadline First (EDF), Strict Priority (SP), Guaranteed Rate scheduling (GR), etc.
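For intuition only, the sketch below applies the standard network-calculus output bound for a single token-bucket-constrained flow crossing a rate-latency server whose rate is at least the flow's sustained rate: the output arrival curve is α*(t) = σ + ρ(t + T), i.e. the burstiness grows by ρT. The tight per-flow curves derived in the paper for arbitrary work-conserving multiplexers are more involved and are not reproduced here; all numbers are assumptions.

```python
def output_arrival_curve(sigma, rho, latency):
    """Standard network-calculus output bound: a flow constrained by the
    token-bucket curve alpha(t) = sigma + rho*t, served by a rate-latency
    server with latency T and service rate >= rho, leaves constrained by
    alpha*(t) = sigma + rho*(t + T)."""
    return lambda t: sigma + rho * (t + latency)

# A 2 Mbit burst, 10 Mbit/s sustained rate, 5 ms worst-case server latency (assumed)
alpha_out = output_arrival_curve(sigma=2e6, rho=10e6, latency=0.005)
print("bits that may leave in any 100 ms window:", alpha_out(0.100))
```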
Journal: Informatica
Volume 11, Issue 3 (2000), pp. 257–268
Abstract
Fingerprint ridge frequency is a global feature which differs most prominently between the fingerprints of men and women, and it also changes as a person matures. This paper proposes a method of fingerprint pre-classification based on replacing the ridge frequency with the density of edge points of the ridge boundary. The method is to be applied after the steps common to most fingerprint matching algorithms, namely fingerprint image filtering, binarization and marking of good/bad image areas. The experimental performance evaluation of the fingerprint pre-classification is presented. We have found that fingerprint pre-classification using the fingerprint ridge edge density is feasible, and it enables a preliminary rejection of part of the fingerprints without a heavy loss of recognition quality. The paper presents the evaluation of two sources of variability in the fingerprint ridge edge density: a) different finger pressure during fingerprint scanning, b) different distance between the geometrical center of the fingerprint and the position of the fingerprint fragment.
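A hedged sketch of the kind of measurement involved, under assumed conventions: given a binarized fingerprint fragment and a mask of "good" areas, the density of ridge-boundary edge points is taken here as the count of ridge pixels that touch a valley pixel, divided by the masked area. The actual edge-marking rules and pre-classification thresholds of the paper are not reproduced.

```python
import numpy as np

def ridge_edge_density(binary, good_mask):
    """Density of ridge-boundary edge points inside the 'good' image area.

    binary    : 2-D array, 1 for ridge pixels, 0 for valley pixels
    good_mask : 2-D boolean array marking usable image areas
    """
    ridge = binary.astype(bool)
    # A ridge pixel is an edge point if any 4-neighbour is a valley pixel
    pad = np.pad(ridge, 1, constant_values=False)
    neighbour_is_valley = (~pad[:-2, 1:-1] | ~pad[2:, 1:-1] |
                           ~pad[1:-1, :-2] | ~pad[1:-1, 2:])
    edges = ridge & neighbour_is_valley & good_mask
    return edges.sum() / max(good_mask.sum(), 1)

# Tiny illustrative fragment (assumed), alternating ridge/valley stripes
frag = np.tile(np.array([[1, 1, 0, 0]]), (8, 2))
mask = np.ones_like(frag, dtype=bool)
print(round(ridge_edge_density(frag, mask), 3))
```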