Pub. online: 25 Jun 2025 | Type: Research Article | Open Access
Journal: Informatica
Volume 36, Issue 3 (2025), pp. 491–524
Abstract
Traditional Anti-Money Laundering (AML) systems rely on rule-based approaches, which often fail to adapt to evolving money laundering tactics and produce high false-positive rates, overwhelming compliance teams. This study proposes an innovative machine learning (ML) framework that leverages Conditional Tabular Generative Adversarial Networks (CTGANs) to address severe class imbalance, a common challenge in Suspicious Activity Reporting (SAR). Implemented in Python, CTGAN generates realistic synthetic samples to enhance minority-class representation, improving recall and F1-scores. For instance, the Random Forest (RF) model achieves a recall of 0.991 and an F1-score of 0.528 on oversampled datasets with engineered variables, highlighting the effectiveness of CTGAN in mitigating imbalance. This framework also incorporates SQL-based feature engineering using Oracle Analytics, creating dynamic variables such as cumulative sums, rolling averages, and ranks. The modelling phase and exploratory data analysis are conducted in the SAS programming language, employing Logistic Regression (LR) as a baseline, Decision Trees (DT), and RF. Evaluation across undersampled and oversampled datasets, combined with varying probability thresholds, reveals key trade-offs between sensitivity and precision. Among the models, RF consistently achieves the highest ROC-AUC scores, ranging from 0.945 on undersampled datasets to 0.951 in oversampled configurations, demonstrating its robustness and accuracy in SAR detection. By integrating CTGAN and TF-IDF (textual feature transformation in Python) with SQL-engineered variables, this framework provides a comprehensive, data-driven approach to AML. It reduces false positives, strengthens the detection of suspicious activities, and ensures scalability, adaptability, and compliance with regulatory standards.
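The dynamic variables this abstract describes (cumulative sums, rolling averages, and ranks) are typically computed with SQL window functions. As a minimal sketch, the following pandas analogue illustrates the idea on a hypothetical transaction table; the column names and data are illustrative assumptions, not the paper's Oracle Analytics schema.

```python
import pandas as pd

# Hypothetical transaction table; column names are illustrative, not the paper's schema.
tx = pd.DataFrame({
    "account": ["A", "A", "A", "B", "B"],
    "amount": [100.0, 250.0, 50.0, 400.0, 300.0],
})

# Per-account analogues of the SQL window functions described above.
tx["cum_amount"] = tx.groupby("account")["amount"].cumsum()                # cumulative sum
tx["roll_avg_2"] = tx.groupby("account")["amount"].transform(
    lambda s: s.rolling(2, min_periods=1).mean())                          # rolling average
tx["amount_rank"] = tx.groupby("account")["amount"].rank(ascending=False)  # rank within account
```

In SQL the same features would use `SUM(...) OVER`, `AVG(...) OVER` with a window frame, and `RANK() OVER`, each partitioned by account.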
Journal: Informatica
Volume 36, Issue 3 (2025), pp. 525–555
Abstract
Industries have increasingly adopted supply chain management practices to sustain competitive advantage, fostering collaboration among supply chain partners for effective coordination. While prior research has explored whether inter-partner relationships influence supply chain network performance, these studies have primarily focused on perceived effects rather than empirical observations. This study investigates the impact of trust on supply chain network performance through linguistic summarization. Its originality lies in integrating linguistic summarization with heterogeneous information network modelling, a novel method for evaluating trust-driven performance effects in supply chains. We modelled supply chain networks as heterogeneous information networks, representing companies and products as distinct node types, and their interactions as varied link types. A linguistic summarization framework was developed for these networks, and its application in the automotive industry enabled the validation of literature-derived hypotheses through the truth degree of linguistic summaries. The findings demonstrate that trust significantly enhances organizational performance, particularly in terms of profitability. This study particularly benefits supply chain managers, analysts, and researchers by offering a data-driven, interpretable framework for assessing how trust affects network performance, thereby promoting cooperation, transparency, and informed decision-making.
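The truth degree of a linguistic summary can be sketched in the Yager style: a summary such as "most high-trust partners are highly profitable" is scored by passing the relative cardinality of the summarizer through a fuzzy quantifier. The quantifier shape and membership values below are illustrative assumptions; the paper's formulation over heterogeneous information networks is richer.

```python
def most(p):
    """Piecewise-linear membership of the fuzzy quantifier 'most' (illustrative shape)."""
    if p <= 0.3:
        return 0.0
    if p >= 0.8:
        return 1.0
    return (p - 0.3) / 0.5

def truth_degree(trust, profit):
    """Truth of 'most high-trust partners are highly profitable' (Yager-style)."""
    num = sum(min(t, p) for t, p in zip(trust, profit))  # high trust AND high profitability
    den = sum(trust)                                     # high trust
    return most(num / den) if den else 0.0

trust  = [0.9, 0.8, 0.7, 0.2]  # membership in 'high trust' (hypothetical)
profit = [0.8, 0.9, 0.6, 0.1]  # membership in 'high profitability' (hypothetical)
```

A hypothesis is then taken as supported when the truth degree of its corresponding summary is sufficiently high.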
Journal: Informatica
Volume 36, Issue 3 (2025), pp. 557–588
Abstract
Ordered Weighted Averaging (OWA) operators have been widely applied in Group Decision-Making (GDM) to fuse expert opinions. However, their effectiveness depends on the selection of an appropriate weighting vector, which remains a challenge due to limited research on its impact on Consensus Reaching Processes (CRPs). This paper addresses this gap by analysing the influence of different OWA weighting techniques on consensus formation, particularly in large-scale GDM (LSGDM) scenarios. To do so, we propose a Comprehensive Minimum Cost Consensus (CMCC) model that integrates OWA operators with classical consensus measures to enhance the decision-making process. Since existing OWA-based Minimum Cost Consensus (MCC) models struggle with computational complexity, we introduce linearized versions of the OWA-based CMCC model tailored for LSGDM applications. Furthermore, we conduct a detailed comparison of various OWA weight allocation methods, assessing their impact on consensus quality under different levels of expert participation and opinion polarization. Additionally, our linearized formulations significantly reduce the computational cost for OWA-based CMCC models, improving their scalability.
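The OWA fusion step discussed above can be sketched as follows: weights attach to rank positions rather than to specific experts, which is why the choice of weighting vector shapes the consensus outcome. The opinions and weighting vector here are illustrative assumptions.

```python
def owa(values, weights):
    """Ordered Weighted Averaging: weights apply to values sorted in descending order."""
    assert len(values) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)  # the reordering step that defines OWA
    return sum(w * v for w, v in zip(weights, ordered))

opinions = [0.6, 0.9, 0.4, 0.7]  # hypothetical expert evaluations of one alternative
weights = [0.4, 0.3, 0.2, 0.1]   # hypothetical weighting vector favouring high opinions
```

With these weights the fused value leans toward the larger opinions; a vector like `[0.1, 0.2, 0.3, 0.4]` would instead lean toward the smaller ones, which is the kind of effect on consensus the paper analyses.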
Journal: Informatica
Volume 36, Issue 3 (2025), pp. 589–624
Abstract
Nowadays, it is agreed that fuzzy sets are suitable for capturing and representing the concept of vagueness and uncertainty, and various fuzzy reasoning systems are being developed based on them. Researchers have proposed fuzzy set extensions to improve the performance and accuracy of these systems. This raises the research questions of how fuzzy sets have evolved and what the main trends in their evolution are. To address these questions, our research presents a chronological and bibliometric analysis of fuzzy sets based on papers extracted from the Web of Science database. The main findings and contributions have been identified, systematized, and visualized in a fuzzy set keyword map of 65 fuzzy set extensions. These extensions are primarily used for decision-making, reasoning, and prediction, particularly in the context of digital transformation, by integrating digital technologies into all areas of business, transforming operations and enhancing value delivery to customers. As organisations increasingly adopt digital technologies, the need for robust frameworks to manage uncertainty becomes critical. The main trends indicating the directions of fuzzy sets development, an overview of the variety and popularity of fuzzy sets over the years, and the impact of countries engaged in fuzzy set research are also identified and reported. The results provide valuable insights for researchers and practitioners working on fuzzy sets and their applications, and, more generally, for any field of investigation where fuzzy sets are relevant, particularly in the realm of digital transformation.
Pub. online: 26 May 2025 | Type: Research Article | Open Access
Journal: Informatica
Volume 36, Issue 3 (2025), pp. 625–655
Abstract
This study proposes a novel method called the “Integrative Reference Point Approach (IRPA)” as an alternative to existing MCDM methods. The newly proposed method is based on a satisfaction function and the reference set approach. Three different applications are performed to verify the validity of the proposed method from the perspective of optimal alternative rankings and sensitivity to changes in criteria weights. All results of the comparative and sensitivity analyses show that the novel method is moderately sensitive to changes in criteria weights and compatible with other methods.
Journal: Informatica
Volume 36, Issue 3 (2025), pp. 657–676
Abstract
Most classification algorithms involve subjective inputs or hyperparameters to be determined prior to performing the classification. When taking different input or hyperparameter values, each classification algorithm will comprise a collection of classifiers. In this work, we propose a data-driven methodology for assessing similarity in consensus agreement within such a collection of classifiers, and between two classification algorithms, conditional on the dataset of interest. The core of our approach lies in considering the variability introduced by different hyperparameter values for each algorithm when performing such comparisons. We address these problems by evaluating the similarity through consensus agreement and by proposing the application of asymmetric similarity indices based on the Jaccard coefficient. We demonstrate the proposed methodology on two publicly available datasets.
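The Jaccard coefficient and its asymmetric relatives can be sketched on sets of instances that two classifiers label positive. The index definitions below are the standard ones; the classifier label sets are illustrative assumptions, and the exact asymmetric indices used in the paper may differ.

```python
def jaccard(a, b):
    """Symmetric Jaccard coefficient: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 1.0

def containment(a, b):
    """Asymmetric index |A ∩ B| / |A|: the share of A's decisions that B agrees with."""
    return len(a & b) / len(a) if a else 1.0

clf1_pos = {1, 2, 3, 5}     # instances labelled positive by classifier 1 (hypothetical)
clf2_pos = {2, 3, 5, 8, 9}  # instances labelled positive by classifier 2 (hypothetical)
```

The asymmetry is the point: `containment(clf1_pos, clf2_pos)` and `containment(clf2_pos, clf1_pos)` generally differ, so agreement can be read in each direction separately rather than as a single symmetric score.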
Journal: Informatica
Volume 36, Issue 3 (2025), pp. 677–712
Abstract
Fair comparison with state-of-the-art evolutionary algorithms is crucial, but is obstructed by differences in problems, parameters, and stopping criteria across studies. Metaheuristic frameworks can help, but often lack clarity on algorithm versions, improvements, or deviations. Some also restrict parameter configuration. We analysed the frameworks’ source code and identified inconsistencies between implementations. Performance comparisons across frameworks, even with identical settings, revealed significant differences, sometimes even with the authors’ own code. This calls into question the validity of comparisons using such frameworks. We provide guidelines to improve open-source metaheuristics, aiming to support more credible and reliable comparative studies.
Pub. online: 6 Jan 2025 | Type: Research Article | Open Access
Journal: Informatica
Volume 36, Issue 3 (2025), pp. 713–736
Abstract
Multi-criteria group decision-making has gained considerable attention due to its ability to aggregate diverse expert opinions and establish a preference order among alternatives. While probabilistic hesitant fuzzy (PHF) sets offer increased flexibility and generality for representing criteria values compared to traditional fuzzy and hesitant fuzzy set theories, existing aggregation techniques often fail to enhance consensus among biased expert judgments. Motivated by the need for more effective consensus-based decision-making, this paper proposes a new framework that integrates PHF set theory with Aczel-Alsina weighted averaging and geometric aggregation operators. These operators, known for their flexibility and the inclusion of an adjustable parameter, are particularly well-suited for addressing real-world decision-making challenges. The framework employs a cross-entropy-based model to determine criteria weights and the multi-objective optimization by ratio analysis plus the full multiplicative form (MULTIMOORA) method to establish priority orders of alternatives. The proposed framework is demonstrated through a case study on manufacturing outsourcing vendor selection. The results show that Bertrandt is the most suitable vendor, with a score of 0.2390, and resource consumption is identified as the most critical criterion, with a weight of 0.20. To validate the robustness of the proposed framework, sensitivity and comparison analyses have also been conducted.
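The adjustable parameter mentioned above enters through the Aczel-Alsina t-norm and t-conorm on which such weighted averaging and geometric operators are built. The following sketch shows these two primitives on ordinary membership degrees; extending them to PHF elements, as the paper does, is more involved, and the membership values here are illustrative assumptions.

```python
import math

def aa_tnorm(a, b, lam):
    """Aczel-Alsina t-norm for a, b in (0, 1) and adjustable parameter lam > 0."""
    s = (-math.log(a)) ** lam + (-math.log(b)) ** lam
    return math.exp(-(s ** (1 / lam)))

def aa_tconorm(a, b, lam):
    """Dual Aczel-Alsina t-conorm; for lam = 1 it is the probabilistic sum."""
    s = (-math.log(1 - a)) ** lam + (-math.log(1 - b)) ** lam
    return 1 - math.exp(-(s ** (1 / lam)))

# For lam = 1 the t-norm reduces to the product: aa_tnorm(a, b, 1) == a * b.
```

Varying `lam` tunes how strictly the conjunction behaves, which is the flexibility the aggregation operators inherit.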
Journal: Informatica
Volume 36, Issue 3 (2025), pp. 737–764
Abstract
Anonymous multi-recipient signcryption (AMRS) is an important scheme of public-key cryptography (PKC) and is applied in many modern digital applications. In an AMRS scheme, a broadcast management centre (BMC) may sign and encrypt plaintext data (or a file) for a set of multiple recipients. Meanwhile, only the recipients in the set can decrypt the plaintext data and authenticate the BMC, while the anonymity of their identities is preserved. In the past, some AMRS schemes based on various PKCs have been proposed. Recently, side-channel attacks have shown that existing cryptographic mechanisms could be broken, so leakage-resilient PKC resisting such attacks has attracted the attention of cryptographic researchers. However, little work exists on the design of leakage-resilient AMRS (LR-AMRS) schemes, and existing ones are suitable only for multiple recipients under a single PKC. In this paper, the first leakage-resilient and seamlessly compatible AMRS (LRSC-AMRS) scheme in heterogeneous PKCs is proposed. In the proposed scheme, multiple recipients can be members of two heterogeneous PKCs, namely, the public-key infrastructure PKC (PKI-PKC) or the certificateless PKC (CL-PKC). Also, we present a seamlessly compatible upgrade procedure from the PKI-PKC to the CL-PKC. The proposed scheme achieves three security properties under side-channel attacks, namely, encryption confidentiality, recipient anonymity and sender (i.e. BMC) authentication, which are formally shown by the associated security theorems. Finally, a comparison with related schemes shows that the proposed LRSC-AMRS scheme is suitable for heterogeneous recipients and that the computational cost of each recipient’s unsigncryption algorithm is constant $O(1)$.