Pub. online: 1 Jan 2018 | Type: Research Article | Open Access
Journal: Informatica
Volume 29, Issue 4 (2018), pp. 633–650
Abstract
In recent years, Wireless Sensor Networks (WSNs) have received great attention because of their important applications in many areas. Consequently, improving their performance and efficiency, especially in energy awareness, is of great interest. Therefore, in this paper, we propose a lifetime-improving, fixed-clustering, energy-aware routing protocol for WSNs named Load Balancing Cluster Head (LBCH). LBCH mainly aims at reducing the energy consumption in the network and balancing the workload over all nodes within the network. A novel method for selecting initial cluster heads (CHs) is proposed. In addition, the network nodes are evenly distributed into clusters to build balanced-size clusters. Finally, a novel scheme is proposed to rotate the role of CH depending on the energy and location information of each node in each cluster. A multihop technique is used to minimize the communication distance between CHs and the base station (BS), thus saving node energy. To evaluate the performance of LBCH, a thorough simulation was conducted and the results were compared with related protocols (i.e. ACBEC-WSNs-CD, Adaptive LEACH-F, LEACH-F, and RRCH). The simulations show that LBCH outperforms the other protocols for both continuous-data and event-based data models at different network densities. LBCH achieved average improvements in the range of 2–172%, 18–145.5%, 10.18–62%, and 63–82.5% over the compared protocols in terms of number of alive nodes, first node died (FND), network throughput, and load balancing, respectively.
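The balanced-size clustering idea can be illustrated with a minimal sketch: each node joins the nearest cluster head whose cluster is not yet full, so no CH is overloaded. This is an illustrative reconstruction of the general idea, not the LBCH algorithm itself; the function name, capacity cap, and coordinates are assumptions.

```python
import math

def balanced_clusters(nodes, chs, max_size):
    """Assign each node to the nearest cluster head (CH) with spare capacity.

    Illustrative sketch only: LBCH's actual assignment and CH-rotation rules
    also use per-node energy information, which is omitted here.
    """
    clusters = {ch: [] for ch in chs}
    for node in nodes:
        # Try CHs from nearest to farthest; stop at the first non-full cluster.
        for ch in sorted(chs, key=lambda c: math.dist(node, c)):
            if len(clusters[ch]) < max_size:
                clusters[ch].append(node)
                break
    return clusters

# Six sensor nodes, two CHs, clusters capped at three nodes each.
nodes = [(1, 1), (2, 1), (8, 8), (9, 9), (1, 2), (9, 8)]
chs = [(0, 0), (10, 10)]
clusters = balanced_clusters(nodes, chs, max_size=3)
```

With the cap in place both clusters end up with three members, whereas a plain nearest-CH rule could produce arbitrarily unbalanced clusters.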
Pub. online: 1 Jan 2018 | Type: Research Article | Open Access
Journal: Informatica
Volume 29, Issue 2 (2018), pp. 265–280
Abstract
In the discrete form of multi-criteria decision-making (MCDM) problems, we are usually confronted with a decision matrix formed from the information of some alternatives on some criteria. In this study, a new method is proposed for simultaneous evaluation of criteria and alternatives (SECA) in an MCDM problem. For making this type of evaluation, a multi-objective non-linear programming model is formulated. The model is based on maximizing the overall performance of alternatives while taking into account the variation information of the decision matrix within and between criteria. The standard deviation is used to measure the within-criterion variation, and the correlation is utilized to capture the between-criterion variation. By solving the multi-objective model, we can determine the overall performance scores of alternatives and the objective weights of criteria simultaneously. To validate the proposed method, a numerical example is used and three analyses are made. Firstly, we analyse the objective weights determined by the method; secondly, we examine the stability of the performance scores and ranking results; and finally, we compare the ranking results of the proposed method with those of some existing MCDM methods. The results of the analyses show that the proposed method deals efficiently with MCDM problems.
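The two variation measures the abstract names can be sketched directly: per-criterion standard deviation for within-criterion variation, and summed dissimilarity (one minus correlation) for between-criterion variation. This is a hedged sketch of only these two ingredients, not the full multi-objective model; the decision-matrix values are made up for illustration.

```python
import numpy as np

# Illustrative 4-alternatives x 3-criteria decision matrix (values assumed).
X = np.array([[7.0, 3.0, 5.0],
              [4.0, 8.0, 6.0],
              [9.0, 2.0, 4.0],
              [5.0, 6.0, 7.0]])

# Within-criterion variation: normalized standard deviation of each column.
sigma = X.std(axis=0, ddof=1)
sigma = sigma / sigma.sum()

# Between-criterion variation: for each criterion, sum its dissimilarity
# (1 - Pearson correlation) with every criterion, then normalize.
R = np.corrcoef(X, rowvar=False)
pi = (1.0 - R).sum(axis=0)
pi = pi / pi.sum()
```

Both vectors sum to one; in SECA-style models they enter the objective so that criteria with more internal variation and less redundancy receive larger weights.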
Pub. online: 1 Jan 2018 | Type: Research Article | Open Access
Journal: Informatica
Volume 29, Issue 2 (2018), pp. 187–210
Abstract
A relevant challenge introduced by decentralized installations of photovoltaic systems is the mismatch between green-energy production and the load curve of domestic use. We propose an ICT solution that maximizes self-consumption through intelligent scheduling of appliances. The predictive approach is complemented with a reactive one to minimize the short-term effects of prediction errors and unforeseen loads. Using real measurements, we demonstrate that such errors can be compensated for by modulating the usage of continuously running devices such as fridges and heat pumps. Linear programming is used to dynamically compute, in real time, the optimal control of these devices.
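The reactive linear-programming step can be sketched as a small tracking problem: choose the power drawn by a modulable device in each time slot so that it follows the forecast PV surplus as closely as possible, within the device's limits. This is a minimal sketch under assumed data and bounds, not the paper's actual formulation; the absolute deviation is linearized with auxiliary variables in the standard way.

```python
import numpy as np
from scipy.optimize import linprog

surplus = np.array([0.2, 0.8, 1.5, 1.0, 0.3])  # assumed PV surplus per slot (kW)
p_min, p_max = 0.0, 1.2                         # assumed device modulation range (kW)
T = len(surplus)

# Variables: p_1..p_T (device power) and d_1..d_T (|p_t - surplus_t|).
# Minimize sum(d_t) subject to d_t >= p_t - s_t and d_t >= s_t - p_t.
c = np.concatenate([np.zeros(T), np.ones(T)])
A_ub = np.block([[np.eye(T), -np.eye(T)],     #  p - d <= s
                 [-np.eye(T), -np.eye(T)]])   # -p - d <= -s
b_ub = np.concatenate([surplus, -surplus])
bounds = [(p_min, p_max)] * T + [(0, None)] * T

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
p_opt = res.x[:T]
```

In this toy instance the device simply follows the surplus except in the slot where 1.5 kW exceeds its 1.2 kW limit, so the only unavoidable mismatch is 0.3 kW there.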
Journal: Informatica
Volume 21, Issue 1 (2010), pp. 31–40
Abstract
As a means of supporting quality of service guarantees, aggregate multiplexing has attracted a lot of attention in the networking community, since it requires less complexity than flow-based scheduling. However, contrary to what happens in the case of flow-based multiplexing, few results are available for aggregate-based multiplexing. In this paper, we consider a server multiplexer fed by several flows and analyze the impact caused by traffic aggregation on the flows at the output of the server. No restriction is imposed on the server multiplexer other than the fact that it must operate in a work-conserving fashion. We characterize the best arrival curves that constrain the number of bits that leave the server, in any time interval, for each individual flow. These curves can be used to obtain the delays suffered by packets in complex scenarios where multiplexers are interconnected, as well as to determine the maximum size of the buffers in the different servers. Previous results provide tight delay bounds for networks where servers are of the FIFO type. Here, we provide tight bounds for any work-conserving scheduling policy, so that our results can be applied to heterogeneous networks where the servers (routers) can use different work-conserving scheduling policies such as First-In First-Out (FIFO), Earliest Deadline First (EDF), Strict Priority (SP), Guaranteed Rate scheduling (GR), etc.
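As background for the kind of bound the paper generalizes, the classic single-server case from network calculus is easy to state: a flow constrained by a token-bucket arrival curve a(t) = b + r·t, served by a rate-latency server beta(t) = R·max(0, t − T), suffers a worst-case delay of T + b/R whenever r ≤ R. This is standard textbook material, not the paper's new aggregate-multiplexing result; the numeric parameters below are illustrative.

```python
def delay_bound(b, r, R, T):
    """Worst-case delay for a (b, r) token-bucket flow at a rate-latency server.

    b: burst size (bits), r: sustained rate (b/s),
    R: service rate (b/s), T: server latency (s). Requires r <= R.
    """
    assert r <= R, "server must be at least as fast as the flow's rate"
    return T + b / R

# Example: 2000-bit burst at 1 Mb/s, served at 2 Mb/s with 1 ms latency.
D = delay_bound(b=2000, r=1e6, R=2e6, T=1e-3)  # = 0.002 s
```

The paper's contribution is to derive tight output arrival curves and delay bounds of this kind when flows are aggregated through arbitrary work-conserving servers, where no per-flow rate-latency curve is given a priori.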
Journal: Informatica
Volume 11, Issue 3 (2000), pp. 257–268
Abstract
Fingerprint ridge frequency is a global feature that differs most prominently between the fingerprints of men and women, and it also changes as a person matures. This paper proposes a method of fingerprint pre-classification based on replacing the ridge frequency with the density of edge points of the ridge boundary. The method is applied after the steps common to most fingerprint matching algorithms, namely fingerprint image filtering, binarization, and marking of good/bad image areas. An experimental performance evaluation of the pre-classification is presented. We found that fingerprint pre-classification using the fingerprint ridge-edge density is feasible and enables the preliminary rejection of part of the fingerprints without a heavy loss of recognition quality. The paper also evaluates two sources of variability of the fingerprint ridge-edge density: a) different finger pressure during fingerprint scanning, and b) different distances between the geometrical centre of the fingerprint and the position of the fingerprint fragment.
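The underlying feature can be sketched in a few lines: in a binarized fragment, edge points are pixels where the binary value flips between horizontal or vertical neighbours, and the density is their count divided by the fragment area. This is an illustrative reconstruction of the general notion, not the paper's exact detector; the striped test image is synthetic.

```python
import numpy as np

def edge_point_density(binary):
    """Density of ridge-boundary (edge) points in a 0/1 fingerprint fragment.

    An edge point is counted wherever the binary value changes between
    horizontally or vertically adjacent pixels.
    """
    horiz = binary[:, 1:] != binary[:, :-1]   # vertical ridge boundaries
    vert = binary[1:, :] != binary[:-1, :]    # horizontal ridge boundaries
    return (horiz.sum() + vert.sum()) / binary.size

# Synthetic "ridges": vertical stripes two pixels wide in an 8x8 fragment.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 2:4] = 1
img[:, 6:8] = 1
density = edge_point_density(img)  # 3 boundaries per row -> 24 / 64 = 0.375
```

Because narrower ridges produce more boundary transitions per unit area, this density tracks the local ridge frequency while being cheap to compute on an already-binarized image.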