Pub. online: 19 Oct 2023 | Type: Research Article | Open Access
Journal: Informatica
Volume 35, Issue 1 (2024), pp. 47–63
Abstract
In this paper, we introduce a novel Model Based Foggy Image Enhancement using Levenberg-Marquardt non-linear estimation (MBFIELM). It presents a solution for enhancing the quality of images that have been degraded by homogeneous fog. Given an observation set represented by a foggy image, the goal is to estimate an analytical function, dependent on adjustable variables, that best fits the data. A cost function measures how well the estimated function fits the observation set. Here, we use the Levenberg-Marquardt algorithm, a combination of gradient descent and the Gauss-Newton method, to optimize the non-linear cost function. An inverse transformation then yields the enhanced image. The experimental results section presents both visual and quantitative assessments, the latter using the defogged-image quality measure introduced by Liu et al. (2020). The efficacy of MBFIELM is substantiated by metrics comparable to those of recognized algorithms such as Artificial Multiple Exposure Fusion (AMEF), DehazeNet (a trainable end-to-end system), and Dark Channel Prior (DCP). In some instances the performance indices of AMEF exceed those of our model, yet in others MBFIELM outperforms these standard-bearers.
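Below is a minimal sketch of the core estimation step: it fits a simple exponential attenuation model to synthetic foggy intensities with SciPy's Levenberg-Marquardt solver and then inverts the fitted degradation. The model form, parameter names, and data are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch of Levenberg-Marquardt estimation, assuming an
# illustrative attenuation model I(x) = a * exp(-b * x) + c for the
# fog-degraded intensities (not the paper's exact model).
import numpy as np
from scipy.optimize import least_squares

def model(params, x):
    a, b, c = params
    return a * np.exp(-b * x) + c

def residuals(params, x, y):
    # Cost function: difference between model prediction and observations.
    return model(params, x) - y

# Synthetic "observations" standing in for intensities of a foggy image.
x = np.linspace(0.0, 1.0, 200)
y = 0.8 * np.exp(-2.0 * x) + 0.1 + 0.01 * np.random.randn(x.size)

# 'lm' is Levenberg-Marquardt: a blend of gradient descent and Gauss-Newton.
fit = least_squares(residuals, x0=[1.0, 1.0, 0.0], args=(x, y), method="lm")
a, b, c = fit.x
print("estimated parameters:", a, b, c)

# The enhanced signal follows from inverting the estimated degradation.
enhanced = (y - c) / np.exp(-b * x)
```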
Pub. online: 12 Oct 2023 | Type: Research Article | Open Access
Journal: Informatica
Volume 34, Issue 4 (2023), pp. 825–845
Abstract
In many industrial sectors, the current digitalization trend has resulted in new products and services that exploit the potential of built-in sensors, actuators, and control systems. The business models related to these products and services are usually data-driven and integrated into digital ecosystems. Quantified products (QPs) are a new product category that exploits data from individual product instances and fleets of instances. A quantified product is a product whose instances collect measurable data about themselves or, by design, leave traces of data. QP design has to consider the dependencies that exist between the actual product, the services related to the product, and the digital ecosystem of those services. By investigating three industrial case studies, the paper contributes to a better understanding of typical features of QPs and the implications of these features for the design of products and services. For this purpose, we combine an analysis of the QP features that potentially affect design with an analysis of the dependencies between features. The main contributions of the work are (1) three case studies describing QP design and development, (2) a set of recurring features of QPs derived from the cases, and (3) a feature model capturing the design dependencies of these features.
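As an illustration of what a feature model with design dependencies can look like in code, the sketch below encodes a few hypothetical QP features with "requires" relations and checks whether a configuration is consistent. The feature names are invented for illustration, not taken from the case studies.

```python
# A minimal sketch of a feature model with design dependencies,
# assuming simple "requires" relations between QP features; the
# feature names are illustrative, not from the paper's cases.
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    requires: set[str] = field(default_factory=set)

FEATURES = {
    "fleet_analytics": Feature("fleet_analytics", {"instance_data"}),
    "instance_data": Feature("instance_data", {"onboard_sensors"}),
    "onboard_sensors": Feature("onboard_sensors"),
}

def is_valid_configuration(selected: set[str]) -> bool:
    # A configuration is valid when every selected feature's
    # required features are also selected.
    return all(FEATURES[f].requires <= selected for f in selected)

print(is_valid_configuration({"fleet_analytics", "instance_data", "onboard_sensors"}))  # True
print(is_valid_configuration({"fleet_analytics"}))  # False: missing dependencies
```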
Journal: Informatica
Volume 34, Issue 4 (2023), pp. 713–742
Abstract
In this paper, we introduce the concept of the circular Pythagorean fuzzy set (value) (C-PFS(V)) as a new generalization of both circular intuitionistic fuzzy sets (C-IFSs), proposed by Atanassov, and Pythagorean fuzzy sets (PFSs), proposed by Yager. A circular Pythagorean fuzzy set is represented by a circle that models the membership and non-membership degrees and whose centre consists of non-negative real numbers μ and ν with the condition $\mu^{2}+\nu^{2}\leqslant 1$. A C-PFS models the fuzziness of uncertain information more properly thanks to a structure that represents the information with the points of a circle of a certain centre and radius. Therefore, a C-PFS lets decision makers evaluate objects in a larger and more flexible region, and thus more sensitive decisions can be made. After defining the concept of a C-PFS, we define some fundamental set operations between C-PFSs and propose some algebraic operations between C-PFVs via general triangular norms and triangular conorms. By utilizing these algebraic operations, we introduce weighted aggregation operators that transform input values represented by C-PFVs into a single output value. Then, to determine the degree of similarity between C-PFVs, we define a cosine similarity measure based on radius. Furthermore, we develop a method to transform a collection of Pythagorean fuzzy values into a C-PFS. Finally, a method is given to solve multi-criteria decision making problems in the circular Pythagorean fuzzy environment, and the proposed method is applied to a problem from the literature about selecting the best photovoltaic cell. We also present a comparative analysis and study the time complexity of the proposed method.
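The sketch below renders circular Pythagorean fuzzy values in code: it enforces the condition μ² + ν² ≤ 1 and implements one algebraic operation, using the standard Pythagorean fuzzy algebraic sum for the centres and, as an assumption, the minimum of the two radii (the paper defines such operations via general t-norms and t-conorms).

```python
# A minimal sketch of circular Pythagorean fuzzy values (C-PFVs).
# The sum below uses the standard Pythagorean fuzzy algebraic sum for
# the centres and, as an assumption, min() for the radii; the paper's
# operations are defined via general t-norms/t-conorms.
import math
from dataclasses import dataclass

@dataclass
class CPFV:
    mu: float   # membership degree
    nu: float   # non-membership degree
    r: float    # radius of the circle

    def __post_init__(self):
        # Centre must satisfy the Pythagorean fuzzy condition.
        assert 0 <= self.mu and 0 <= self.nu and self.mu**2 + self.nu**2 <= 1
        assert 0 <= self.r <= 1

def algebraic_sum(a: CPFV, b: CPFV) -> CPFV:
    mu = math.sqrt(a.mu**2 + b.mu**2 - a.mu**2 * b.mu**2)
    nu = a.nu * b.nu
    return CPFV(mu, nu, min(a.r, b.r))

x = CPFV(0.8, 0.5, 0.2)   # 0.64 + 0.25 <= 1, valid
y = CPFV(0.6, 0.7, 0.1)
print(algebraic_sum(x, y))
```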
Journal: Informatica
Volume 34, Issue 3 (2023), pp. 603–616
Abstract
The article presents a tax declaration scheme using blockchain confidential transactions based on a modified ElGamal encryption that provides an additively homomorphic property. Transactions are based on the unspent transaction output (UTxO) paradigm, which allows the digital assets of cryptocurrencies to be represented effectively in e-wallets and financial operations to be performed. The main actors around a transaction are specified, including money senders, receivers, the transaction creator, the Audit Authority (AA) and the Net of users. A general transaction model with M inputs and N outputs is created, providing transaction amount confidentiality and verifiability for all actors with different levels of available information.
The transaction model allows the Net to verify the validity of a transaction while having access only to encrypted transaction data. Each money receiver is able to decrypt and verify the actual sum transferred by the sender. The AA is provided with the actual transaction values and is able to supervise the tax payments of business actors. This information allows the honesty of the transaction data to be verified for each user role.
A security analysis of the scheme is presented with reference to ElGamal security assumptions. A coalition attack is formulated, and a way to prevent it is proposed. It is shown that transaction creation is efficient, requiring almost the same resources as multiple ElGamal encryptions: in addition to the ElGamal encryption of all income and expenses, only an extra exponentiation with small exponents, representing the transferred sums, is needed. The AA's computation resources are slightly larger, since they have to be adequate for search procedures over the small range from 1 to $2^{32}-1=4294967295$ for individual money transfers.
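The sketch below shows the additively homomorphic ("lifted") variant of ElGamal that underlies such confidential transactions, and why decryption requires a brute-force search over a small message range (the paper's AA searches 1 to 2^32 - 1). The group parameters are toy values for illustration, not the scheme's actual setup.

```python
# A minimal sketch of additively homomorphic ("lifted") ElGamal,
# with toy parameters; not the paper's modified scheme or group.
import random

p = 2**127 - 1          # toy prime modulus (not a production choice)
g = 3                   # assumed generator, for illustration only

def keygen():
    x = random.randrange(2, p - 1)        # secret key
    return x, pow(g, x, p)                # (sk, pk)

def encrypt(pk, m):
    k = random.randrange(2, p - 1)
    # The message sits in the exponent, which makes ciphertexts
    # additively homomorphic under component-wise multiplication.
    return pow(g, k, p), (pow(g, m, p) * pow(pk, k, p)) % p

def add(c1, c2):
    # Product of ciphertexts encrypts the sum of the plaintexts.
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def decrypt(sk, c, max_m=10_000):
    gm = (c[1] * pow(c[0], p - 1 - sk, p)) % p   # remove the mask
    # Recover m by searching the small message range.
    for m in range(max_m + 1):
        if pow(g, m, p) == gm:
            return m
    raise ValueError("message outside search range")

sk, pk = keygen()
c = add(encrypt(pk, 1200), encrypt(pk, 345))
print(decrypt(sk, c))   # 1545
```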
Journal: Informatica
Volume 34, Issue 3 (2023), pp. 577–602
Abstract
Healthcare has seen many advances in sensor technology, and with recent improvements in networks and the addition of the Internet of Things, it is even more promising. Current solutions for managing healthcare data with cloud computing may be unreliable at the most critical moments: high response latency, large volumes of data, and security are the main issues of this approach. A promising alternative is fog computing, which offers immediate responses resistant to disconnections, together with ways to process big data using real-time analytics and artificial intelligence (AI). However, fog computing has not yet matured, and many challenges remain. This article presents, for the computer scientist, a systematic review of the literature on fog computing in healthcare. Articles published over the past six years are analysed from the perspectives of service, software, hardware, information technologies, and mobility with autonomy. The contribution of this study includes an analysis of recent trends, focus areas, and the benefits of using AI techniques in fog computing e-health applications.
Journal: Informatica
Volume 34, Issue 4 (2023), pp. 771–794
Abstract
The quality of the input data is among the decisive factors affecting the speed and effectiveness of recurrent neural network (RNN) learning. We present here a novel methodology to select optimal training data (those with the highest learning capacity) by approaching the problem from a decision-making point of view. The key idea, which underpins the design of the mathematical structure that supports the selection, is first to define a binary relation that gives preference to inputs with higher estimator abilities. The von Neumann-Morgenstern (VNM) theorem, a cornerstone of decision theory, is then applied to determine the level of efficiency of the training dataset, based on the probability of success derived from a purpose-designed framework built on Markov networks. To the best of the author's knowledge, this is the first time this result has been applied to data selection tasks. It is thus shown that Markov networks, mainly known as generative models, can successfully participate in discriminative tasks when used in conjunction with the VNM theorem.
The simplicity of our design allows the selection to be carried out alongside the training. Since learning progresses with only the optimal inputs, the data noise gradually disappears: the result is an improvement in performance while minimising the likelihood of overfitting.
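A minimal sketch of such a selection loop is given below, with a placeholder score() standing in for the success probability that the paper derives from Markov networks and the VNM theorem; the preference relation, ranking rule, and selection threshold are all illustrative assumptions.

```python
# A minimal sketch of preference-driven training-data selection.
# score() is a placeholder for the Markov-network-derived probability
# of success; the selection threshold (top half) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=(100, 8))          # candidate training inputs

def score(x: np.ndarray) -> float:
    # Placeholder "probability of success" for input x.
    return float(1.0 / (1.0 + np.exp(-x.mean())))

def prefers(a: np.ndarray, b: np.ndarray) -> bool:
    # Binary relation: prefer inputs with higher estimator abilities.
    return score(a) > score(b)

# Rank inputs under the induced preference and train on the top fraction;
# repeating this alongside training gradually filters out noisy data.
ranked = sorted(inputs, key=score, reverse=True)
selected = np.stack(ranked[: len(ranked) // 2])
print(selected.shape)    # (50, 8)
```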
Journal: Informatica
Volume 34, Issue 3 (2023), pp. 491–527
Abstract
Embedding models turn words and documents into real-valued vectors using co-occurrence data from unrelated texts. Crafting domain-specific embeddings from general corpora with limited domain vocabulary is challenging. Existing solutions retrain models on small domain datasets, overlooking the potential of gathering rich in-domain texts. We exploit Named Entity Recognition and Doc2Vec for autonomous in-domain corpus creation. Our experiments compare models built from general and in-domain corpora, showing that domain-specific training attains the best outcome.
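A minimal sketch of such a pipeline, under stated assumptions: spaCy NER filters candidate texts by domain-relevant entity types, and gensim's Doc2Vec learns document embeddings from the retained in-domain corpus. The filtering rule, entity labels, and data are illustrative, not the authors' exact procedure.

```python
# A minimal sketch of NER-filtered in-domain corpus creation followed
# by Doc2Vec training; texts and the label set are illustrative.
import spacy
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

nlp = spacy.load("en_core_web_sm")
candidates = [
    "Acme Corp released a new turbine controller in Berlin.",
    "The weather was pleasant and the park was quiet.",
]

# Keep only texts mentioning entity types deemed relevant to the domain.
DOMAIN_LABELS = {"ORG", "PRODUCT"}
in_domain = [t for t in candidates
             if DOMAIN_LABELS & {e.label_ for e in nlp(t).ents}]

# Train document embeddings on the autonomously gathered corpus.
tagged = [TaggedDocument(words=t.lower().split(), tags=[i])
          for i, t in enumerate(in_domain)]
model = Doc2Vec(tagged, vector_size=64, window=5, min_count=1, epochs=40)
print(model.dv[0][:5])   # embedding of the first in-domain document
```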
Journal: Informatica
Volume 34, Issue 3 (2023), pp. 465–489
Abstract
The Best-Worst Method (BWM) is a recently introduced, innovative multi-criteria decision-making (MCDM) technique used to determine criterion weights for selection processes. However, another method is needed to complete the selection of the most preferred alternative. In this research, we propose a group decision-making methodology based on the multiplicative BWM to make this selection. Furthermore, we give new models that accommodate groups in which experts identify different best and worst criteria. This capability is crucial for reconciling the differences among experts from various geographical locations whose evaluation perspectives are shaped by social and cultural disparities. Our work contributes in three ways: (1) we propose a BWM-based methodology for evaluating alternatives, (2) we present new linear models that facilitate decision-making for groups with different best and worst criteria, and (3) we develop a dissimilarity ratio to quantify the differences in expert opinions. The methodology is illustrated via numerical experiments for a global car company deciding which car model to introduce in its markets.
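For orientation, the sketch below derives criterion weights with the classic (non-linear) BWM for a single expert, using illustrative best-to-others and others-to-worst comparison vectors; the paper's multiplicative and group models extend this basic formulation.

```python
# A minimal sketch of classic BWM weight derivation; the comparison
# vectors are illustrative, not from the paper's case study.
import numpy as np
from scipy.optimize import minimize

a_B = np.array([1, 2, 4, 8])   # best criterion compared to the others
a_W = np.array([8, 4, 2, 1])   # the others compared to the worst criterion
n, best, worst = 4, 0, 3

def objective(z):
    return z[-1]               # minimise the consistency level xi

def constraints():
    cons = [{"type": "eq", "fun": lambda z: z[:n].sum() - 1.0}]
    for j in range(n):
        # |w_best / w_j - a_B[j]| <= xi  and  |w_j / w_worst - a_W[j]| <= xi
        cons.append({"type": "ineq",
                     "fun": lambda z, j=j: z[-1] - abs(z[best] / z[j] - a_B[j])})
        cons.append({"type": "ineq",
                     "fun": lambda z, j=j: z[-1] - abs(z[j] / z[worst] - a_W[j])})
    return cons

z0 = np.full(n + 1, 1.0 / n)   # initial guess: equal weights, small xi
res = minimize(objective, z0, method="SLSQP", constraints=constraints(),
               bounds=[(1e-6, 1)] * n + [(0, 10)])
print("weights:", res.x[:n], "consistency xi:", res.x[-1])
```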
Journal: Informatica
Volume 34, Issue 3 (2023), pp. 665–677
Abstract
Due to the complexity and lack of transparency of recent advances in artificial intelligence, Explainable AI (XAI) has emerged as a way to enable the development of causal image-based models. This study examines shadow detection across several fields, including computer vision and visual effects. A three-fold approach was used: constructing a diverse dataset, integrating structural causal models with shadow detection, and applying interventions simultaneously for detection and inference. While confounding factors have only a minimal impact on cause identification, this study illustrates how shadow detection enhances the understanding of both causal inference and confounding variables.
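To make the intervention idea concrete, the toy structural causal model below assumes an invented graph in which a confounder influences both illumination and shadow; comparing observational and interventional samples shows the effect of severing the confounder's link via do(). The variables and mechanisms are purely illustrative, not the study's actual models.

```python
# A toy structural causal model with a do()-style intervention;
# the graph (texture -> light, texture -> shadow, light -> shadow)
# and all coefficients are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def sample(n, do_light=None):
    texture = rng.normal(size=n)                     # confounder
    # do(light = v) severs the confounder's influence on 'light'.
    light = (do_light * np.ones(n) if do_light is not None
             else rng.normal(size=n) + 0.5 * texture)
    shadow = 0.8 * light + 0.3 * texture + 0.1 * rng.normal(size=n)
    detected = (shadow > 0.5).astype(float)
    return light, detected

# Observational vs. interventional detection rates differ because the
# intervention removes the confounder's effect on illumination.
_, obs = sample(10_000)
_, interv = sample(10_000, do_light=1.0)
print(obs.mean(), interv.mean())
```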
Pub. online: 28 Aug 2023 | Type: Research Article | Open Access
Journal: Informatica
Volume 34, Issue 3 (2023), pp. 529–556
Abstract
Ineffective evaluation of open-source software learning management system (OSS-LMS) packages can negatively impact organizational effectiveness. Clients may struggle to select the best OSS-LMS package from a wide range of options, leading to a complex multi-criteria group decision-making (MCGDM) problem. This problem involves evaluating OSS-LMS packages against criteria such as usability, functionality, e-learning standards, reliability, activity tracking, course development, assessment, backup and recovery, error reporting, efficiency, operating system compatibility, computer-managed instruction, authentication, authorization, troubleshooting, maintenance, upgrading, and scalability. Handling uncertain data is a vital aspect of OSS-LMS package evaluation. To tackle such MCGDM issues, this study presents a consensus weighted sum product (c-WASPAS) method, which is applied to an educational OSS-LMS selection problem to evaluate four packages, namely ATutor, eFront, Moodle, and Sakai. The findings indicate that the priority order of the alternatives is Moodle > Sakai > eFront > ATutor; therefore, Moodle is the best OSS-LMS package for the case study. A sensitivity analysis of the criteria weights is also conducted, as well as a comparative study, to demonstrate the effectiveness of the proposed method. Proper OSS-LMS package evaluation is crucial to avoid negative impacts on organizational performance. By addressing MCGDM issues and dealing with uncertain information, the c-WASPAS method presented in this study can assist clients in selecting the most appropriate OSS-LMS package from multiple alternatives. The findings of this study can benefit educational institutions and other organizations that rely on OSS-LMS packages to run their operations.
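The sketch below reproduces the classic WASPAS aggregation that underlies the proposed c-WASPAS; the decision matrix, weights, and lambda are illustrative assumptions, and the consensus stage of c-WASPAS is omitted. With these toy numbers the resulting ranking happens to match the study's ordering.

```python
# A minimal sketch of classic WASPAS scoring; the decision matrix and
# weights are invented, and the c-WASPAS consensus stage is omitted.
import numpy as np

# Rows: ATutor, eFront, Moodle, Sakai; columns: three benefit-type
# criteria (e.g. usability, functionality, reliability), all maximised.
X = np.array([[0.60, 0.55, 0.70],
              [0.65, 0.60, 0.65],
              [0.90, 0.85, 0.80],
              [0.80, 0.75, 0.85]])
w = np.array([0.4, 0.35, 0.25])          # criterion weights, sum to 1

X_norm = X / X.max(axis=0)               # linear normalisation (benefit criteria)
wsm = (X_norm * w).sum(axis=1)           # weighted sum model score
wpm = (X_norm ** w).prod(axis=1)         # weighted product model score

lam = 0.5                                # trade-off between WSM and WPM
Q = lam * wsm + (1 - lam) * wpm
for name, q in zip(["ATutor", "eFront", "Moodle", "Sakai"], Q):
    print(f"{name}: {q:.4f}")            # Moodle ranks first here
```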