Pub. online: 21 Mar 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 4 (2022), pp. 693–711
Abstract
Present worth (PW) analysis is an important technique in engineering economics for investment analysis. The values of PW analysis parameters such as interest rate, first cost, salvage value and annual cash flow are generally estimated with some degree of uncertainty. In order to capture the vagueness in these parameters, fuzzy sets are often used in the literature. In this study, we introduce interval-valued intuitionistic fuzzy PW analysis and circular intuitionistic fuzzy PW analysis in order to handle the imprecision in the estimation of PW analysis parameters. Circular intuitionistic fuzzy sets are the latest extension of intuitionistic fuzzy sets, defining the uncertainty of membership and non-membership degrees through a circle of radius r. Thus, we develop new fuzzy extensions of PW analysis that include the uncertainty of membership functions. The methods are presented step by step, and their applicability is illustrated with an application to water treatment device purchasing at a local municipality. In addition, a multi-parameter sensitivity analysis is given. Finally, discussions and suggestions for future research are given in the conclusion section.
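As a point of reference for the fuzzy extensions above, the underlying crisp present worth computation can be sketched as follows (illustrative Python; the function name and parameter values are hypothetical, not taken from the paper's application):

```python
def present_worth(first_cost, annual_cash_flow, salvage_value, interest_rate, years):
    """Crisp present worth of an investment: discounted annual cash flows
    plus discounted salvage value, minus the first cost."""
    pw = -first_cost
    for t in range(1, years + 1):
        pw += annual_cash_flow / (1 + interest_rate) ** t
    pw += salvage_value / (1 + interest_rate) ** years
    return pw
```

The fuzzy variants discussed in the paper replace each crisp parameter with a fuzzy number and propagate the arithmetic through the same formula.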
Pub. online: 18 Mar 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 3 (2022), pp. 623–633
Abstract
The smallest enclosing circle is a well-known problem. In this paper, we propose modifications to speed up the existing Welzl's algorithm. We perform preprocessing to reduce as many input points as possible. The reduction step has lower computational complexity than Welzl's algorithm and thus speeds up its computation. Next, we propose some changes to Welzl's algorithm itself. Finally, we summarize results showing a speed-up of up to 100 times for 10^6 input points compared to the original Welzl's algorithm. Moreover, the proposed algorithm can process significantly larger data sets than the standard Welzl's algorithm.
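For context, a minimal recursive version of Welzl's algorithm — the baseline the paper accelerates — can be sketched in Python. This illustrative version omits the paper's point-reduction preprocessing and the usual move-to-front refinement:

```python
import random

def _circle_two(a, b):
    # smallest circle through two points: they span a diameter
    cx, cy = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0
    return (cx, cy, ((a[0] - cx) ** 2 + (a[1] - cy) ** 2) ** 0.5)

def _circle_three(a, b, c):
    # circumcircle from the perpendicular-bisector equations
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:  # (near-)collinear: fall back to the widest pair
        return max((_circle_two(p, q) for p, q in ((a, b), (a, c), (b, c))),
                   key=lambda circ: circ[2])
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy, ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5)

def _trivial(boundary):
    # smallest circle determined by at most three boundary points
    if not boundary:
        return (0.0, 0.0, 0.0)
    if len(boundary) == 1:
        return (boundary[0][0], boundary[0][1], 0.0)
    if len(boundary) == 2:
        return _circle_two(boundary[0], boundary[1])
    return _circle_three(*boundary)

def _inside(p, circ, eps=1e-9):
    return (p[0] - circ[0]) ** 2 + (p[1] - circ[1]) ** 2 <= (circ[2] + eps) ** 2

def welzl(points, boundary=()):
    # randomized recursion: points outside the current circle join the boundary
    if not points or len(boundary) == 3:
        return _trivial(boundary)
    p = points[0]
    circ = welzl(points[1:], boundary)
    if _inside(p, circ):
        return circ
    return welzl(points[1:], boundary + (p,))

def smallest_enclosing_circle(points):
    pts = list(points)
    random.shuffle(pts)  # shuffling gives the expected linear-time behaviour
    return welzl(pts)
```

Any preprocessing that discards points known to be interior (as the paper proposes) leaves the result unchanged, since only boundary points determine the circle.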
Pub. online: 4 Feb 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 1 (2022), pp. 1–33
Abstract
In the quality function deployment (QFD) approach, customers tend to express their needs in linguistic terms rather than exact numerical values, and these needs generally contain vague and imprecise information. To overcome this challenge and to use the method more effectively for complex customer-oriented design problems, this paper introduces a novel intuitionistic Z-fuzzy QFD method based on Chebyshev's inequality (CI) and applies it to a new product design. CI provides the assignment of a more objective reliability function; the reliability value is based on the maximum probability obtained from CI. Then, the expected values of the lower and upper bounds of interval-valued intuitionistic fuzzy (IVIF) numbers are determined. A competitive analysis of our firm and competitor firms and an integrative analysis of the different functions of QFD are presented. The proposed Z-fuzzy QFD method is applied to the design and development of a hand sanitizer for combating COVID-19.
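The role of Chebyshev's inequality here is to bound, for any distribution, the probability that a quantity stays within a tolerance of its mean — a bound that can serve as an objective reliability value. A minimal sketch (illustrative Python, not the paper's exact formulation):

```python
def chebyshev_reliability(sigma, tolerance):
    """Lower bound on P(|X - mu| < tolerance) for any distribution with
    standard deviation sigma, via Chebyshev's inequality:
    P(|X - mu| >= k * sigma) <= 1 / k**2, with k = tolerance / sigma."""
    if tolerance <= sigma:
        return 0.0  # the bound is vacuous when tolerance <= sigma
    k = tolerance / sigma
    return 1.0 - 1.0 / (k * k)
```

For example, a two-standard-deviation tolerance yields a reliability of at least 0.75, regardless of the distribution.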
Pub. online: 1 Feb 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 3 (2022), pp. 545–572
Abstract
During the COVID-19 pandemic, masks have become essential items for all people to protect themselves from the virus. Because multiple factors must be considered when selecting an antivirus mask, the decision-making process is complicated. This paper proposes an integrated approach that uses F-BWM-RAFSI methods for the antivirus mask selection process with respect to the COVID-19 pandemic. Finally, a sensitivity analysis was performed by evaluating the effects of changing the weight coefficients of the criteria on the ranking results, simulating changes in the Heronian operator parameters, and comparing the obtained solution to other MCDM approaches to ensure its robustness.
Pub. online: 24 Jan 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 1 (2022), pp. 151–179
Abstract
To resolve both the certificate management and key escrow problems, certificateless public-key systems (CLPKS) have been proposed. However, a CLPKS setting must provide a revocation mechanism to revoke compromised users. Thus, a revocable certificateless public-key system (RCLPKS) was presented to address the revocation issue; in such a system, the key generation centre (KGC) is responsible for running this revocation functionality. Furthermore, an RCLPKS setting with an outsourced revocation authority (ORA), named the RCLPKS-ORA setting, was proposed to employ the ORA to alleviate the KGC's computational burden. Very recently, it was noticed that adversaries may adopt side-channel attacks to threaten these existing conventional public-key systems (including CLPKS, RCLPKS and RCLPKS-ORA). Fortunately, leakage-resilient cryptography offers a solution to resist such attacks. In this article, the first leakage-resilient revocable certificateless encryption scheme with an ORA, termed the LR-RCLE-ORA scheme, is proposed. The proposed scheme is formally shown to be semantically secure against three types of adversaries in the RCLPKS and RCLPKS-ORA settings while resisting side-channel attacks. In the proposed scheme, adversaries are allowed to continually extract partial ingredients of the secret keys involved in the scheme's various computational algorithms, while the scheme retains its security.
Pub. online: 10 Jan 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 1 (2022), pp. 109–130
Abstract
In this paper, a new approach is proposed for multi-label text data class verification and adjustment. The approach supports semi-automated revision of class assignments to improve the quality of the data. Data quality significantly influences the accuracy of the created models, for example, in classification tasks. The approach can also be useful for other data analysis tasks. The proposed approach is based on a combination of a text similarity measure and two methods: latent semantic analysis and the self-organizing map. First, the text data must be pre-processed by applying various filters to clean the data of unnecessary and irrelevant information. Latent semantic analysis is used to reduce the dimensionality of the vectors obtained for each text in the analysed data. Cosine similarity is used to determine which of the multi-label text data classes should be changed or adjusted. The self-organizing map is the key method for detecting similarity between text data and making decisions on a new class assignment. The experimental investigation was performed using newly collected multi-label text data: financial news in the Lithuanian language, collected from four public websites and manually classified by experts into ten classes. Various parameters of the methods were analysed, and their influence on the final results was estimated. The final results were validated by experts. The research proved that the proposed approach can help verify and adjust multi-label text data classes: 82% of assignments are correct when the data dimensionality is reduced to 40 using latent semantic analysis and the self-organizing map size is reduced from 40 to 5 in steps of 5.
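The similarity step at the heart of the class-adjustment decision can be sketched with plain cosine similarity on the reduced vectors (illustrative Python; the helper `proposed_class` and the centroid-based decision rule are simplifications, not the paper's full SOM-based procedure):

```python
import math

def cosine_similarity(u, v):
    # cosine of the angle between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def proposed_class(doc_vec, class_centroids):
    # propose the class whose centroid is most similar to the document vector
    return max(class_centroids,
               key=lambda c: cosine_similarity(doc_vec, class_centroids[c]))
```

In the paper's pipeline, the document vectors would be the LSA-reduced representations, and a low similarity to the currently assigned class flags the label for revision.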
Pub. online: 7 Jan 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 1 (2022), pp. 131–150
Abstract
In our daily life, we can be confronted with numerous multiple attribute group decision making (MAGDM) problems. For such problems, we designed a model that employs the probabilistic linguistic MABAC (multi-attributive border approximation area comparison) method based on cumulative prospect theory (CPT-PL-MABAC) to solve the MAGDM problem. The CPT-PL-MABAC method can take experts' psychological behaviour and preferences into consideration. Furthermore, we utilize a combined weight consisting of subjective and objective weights, the objective weight being acquired by the entropy method. Additionally, concrete calculation steps of the CPT-PL-MABAC method are proposed for solving the MAGDM problem of selecting the optimal location of an express distribution centre. A numerical example for this location selection is given to demonstrate the usefulness of the designed method. Finally, we compare the designed model with three other existing models and summarize its advantages and shortcomings.
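The MABAC core of the model — distances of weighted, normalized alternatives from the border approximation area — can be sketched in crisp form (illustrative Python; the paper's probabilistic linguistic terms and cumulative prospect theory value function are omitted):

```python
def mabac_scores(matrix, weights, benefit):
    """Classic crisp MABAC. matrix: alternatives (rows) over criteria (columns);
    weights: criterion weights; benefit[j]: True for benefit, False for cost."""
    m, n = len(matrix), len(weights)
    cols = list(zip(*matrix))
    # 1) direction-aware min-max normalization
    norm = [[0.0] * n for _ in range(m)]
    for j in range(n):
        lo, hi = min(cols[j]), max(cols[j])
        span = (hi - lo) or 1.0
        for i in range(m):
            x = (matrix[i][j] - lo) / span
            norm[i][j] = x if benefit[j] else 1.0 - x
    # 2) weighted matrix: v = w * (norm + 1)
    v = [[weights[j] * (norm[i][j] + 1.0) for j in range(n)] for i in range(m)]
    # 3) border approximation area: geometric mean of each column
    g = []
    for j in range(n):
        prod = 1.0
        for i in range(m):
            prod *= v[i][j]
        g.append(prod ** (1.0 / m))
    # 4) score: summed distance to the border (higher is better)
    return [sum(v[i][j] - g[j] for j in range(n)) for i in range(m)]
```

Alternatives above the border (positive score) dominate the approximation area; the CPT layer in the paper would additionally reshape these distances with gain/loss value functions.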
Pub. online: 5 Jan 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 3 (2022), pp. 523–543
Abstract
In this paper, we propose modifications of the well-known particle swarm optimization (PSO) algorithm. These changes affect the mapping of particle motion from continuous space to binary space, a mapping widely used to solve the feature selection problem. The modified binary PSO variants were tested on the SVC2004 dataset, dedicated to the problem of user authentication based on dynamic features of a handwritten signature. Using k-nearest neighbours (kNN) as the example classifier, experiments were carried out to find the optimal subset of features. The search for the subset was treated as a multicriteria optimization problem, taking into account both the accuracy of the model and the number of features.
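The continuous-to-binary mapping mentioned above is commonly realized with a sigmoid transfer function, and the two criteria (model error and subset size) can be scalarized into one fitness value. A minimal sketch (illustrative Python; the weighting parameter `alpha` is a hypothetical choice, not from the paper):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binarize_velocity(velocity, rng=random.random):
    # standard binary-PSO transfer: bit j is 1 with probability sigmoid(v_j)
    return [1 if rng() < sigmoid(v) else 0 for v in velocity]

def fitness(error_rate, n_selected, n_total, alpha=0.9):
    # scalarized multicriteria objective (to minimize):
    # trade off classifier error against the fraction of features kept
    return alpha * error_rate + (1.0 - alpha) * n_selected / n_total
```

Each particle's binary position then selects a feature subset, which is evaluated by training the classifier (kNN in the paper's experiments) on the selected columns.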
Pub. online: 4 Jan 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 3 (2022), pp. 499–522
Abstract
This paper models and solves the scheduling problem of cable manufacturing industries, minimizing the total production cost, including processing, setup, and storing costs. Two hybrid meta-heuristics are proposed, combining simulated annealing and variable neighbourhood search with a tabu search algorithm. By applying some case-based theorems and rules, a special initial solution with optimal setup cost is obtained for the algorithms. The computational experiments, including parameter tuning and final experiments on benchmarks obtained from a real cable manufacturing factory, show the superiority of the combination of tabu search and simulated annealing compared to the other proposed hybrid and classical meta-heuristics.
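At the heart of the simulated-annealing component is the Metropolis acceptance rule; a minimal sketch (illustrative Python, not the paper's tuned hybrid with tabu search):

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis criterion for a minimization problem: always accept a move
    that does not worsen the cost (delta <= 0); accept a worsening move
    with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)
```

In a hybrid with tabu search, this rule would decide acceptance only among candidate moves not currently on the tabu list, with the temperature lowered according to a cooling schedule.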