The purpose of this study is to evaluate the significant dimensions of customer-centric innovation in renewable energy projects. We construct a novel fuzzy decision-making model to address this objective. In the first stage, significant indicators are identified using balanced scorecard-based determinants and weighted through the multi-stepwise weight assessment ratio analysis (M-SWARA) method integrated with quantum spherical fuzzy sets. In the second stage, the energy efficiency of renewable energy alternatives is assessed for customer-centric innovation performance via the technique for order preference by similarity to ideal solution (TOPSIS). We offer priority strategies for green energy investors to enhance customer-centric innovation at more reasonable cost. Methodologically, the proposed model provides important advantages by effectively handling uncertainty through quantum spherical fuzzy structures, incorporating the golden ratio in degree calculations, and capturing interdependencies among criteria through the improved M-SWARA approach. The findings reveal that customization is the most critical indicator for improving customer-centric innovation performance, followed by efficiency, while optimization and innovation have relatively lower importance. The ranking results indicate that solar energy projects demonstrate the highest performance in managing customer-centric innovation, followed by wind and geothermal energy alternatives.
Taking into account the irrational behaviour and regret aversion of decision makers (DMs) during the decision-making process, regret theory (RT) and the TODIM method are integrated into a decision-making framework to develop an enhanced multi-attribute decision-making (MADM) method (PDHL-RT-TODIM) within a probabilistic double hierarchy linguistic (PDHL) environment. Specifically, the perceived utility function in RT is extended to determine the regret and rejoice values of the overall dominance flows of alternatives computed by the TODIM method in the PDHL environment. Then, a correlation coefficient (CC) and standard deviation (SD) integrated (CCSD) method is constructed, using a distance measure for probabilistic double hierarchy linguistic term sets (PDHLTSs) and a PDHL weighted arithmetic operator, to establish the objective weights of the attributes. Additionally, the effectiveness of the proposed method is illustrated through a numerical example on information system investment project selection, and its stability, efficiency, and benefits are further confirmed through sensitivity analysis and comparisons with existing methods.
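The regret-theoretic part of such frameworks can be sketched as follows. This is a standard crisp regret-theory formulation with the common regret-rejoice function R(d) = 1 − exp(−δd), not the paper's PDHL extension; the identity value function and the δ value are illustrative assumptions.

```python
import math

def perceived_utility(x, rivals, delta=0.3):
    """Regret theory: the utility of outcome x is its value plus a regret/rejoice
    term relative to the best rival outcome, R(d) = 1 - exp(-delta * d)."""
    v = lambda t: t                      # identity value function, for the sketch only
    best_rival = max(rivals)
    regret = 1 - math.exp(-delta * (v(x) - v(best_rival)))
    return v(x) + regret                 # negative term = regret, positive = rejoice
```

If x falls short of the best rival, the exponential term subtracts regret; if x beats every rival, the decision maker perceives a rejoice bonus on top of the plain value.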
Decision-making under strict uncertainty involves evaluating a set of alternatives without knowledge of the probabilities of the scenarios, traditionally using crisp evaluations. Our work reformulates the classical decision rules for a fuzzy environment, retaining the interpretability of the classical principles while incorporating imprecision. The proposed methodology provides a unified, flexible, and mathematically consistent framework for decision-making under imprecise payoffs. We adapt a total ordering mechanism for trapezoidal fuzzy numbers and admissible interval orders. A case-study application to portfolio selection under fuzzy strict uncertainty demonstrates how the proposed fuzzy generalization can handle financial imprecision and investor risk attitudes through ranking functions.
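A minimal sketch of ranking-function-based decision rules over trapezoidal fuzzy payoffs: here the graded mean integration ranking (a + 2b + 2c + d)/6 and the Wald maximin rule are used as stand-ins; the paper's admissible total order and the portfolio data are not reproduced, so the numbers below are purely illustrative.

```python
def rank(t):
    """Graded mean integration ranking of a trapezoidal fuzzy number (a, b, c, d)."""
    a, b, c, d = t
    return (a + 2 * b + 2 * c + d) / 6

def fuzzy_wald(payoffs):
    """Maximin (Wald) rule over fuzzy payoffs: pick the alternative whose
    worst-ranked scenario payoff is best."""
    worst = [min(row, key=rank) for row in payoffs]
    return max(range(len(payoffs)), key=lambda i: rank(worst[i]))

# Two hypothetical portfolios under two scenarios, trapezoidal returns (%).
payoffs = [
    [(1, 2, 3, 4), (5, 6, 7, 8)],   # portfolio 0: high upside, weak worst case
    [(2, 3, 4, 5), (3, 4, 5, 6)],   # portfolio 1: steadier across scenarios
]
best = fuzzy_wald(payoffs)          # the pessimistic rule prefers portfolio 1
```

Other classical rules (Hurwicz, minimax regret) extend the same way: replace crisp comparisons with comparisons of ranking values.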
To better solve multi-attribute decision-making (MADM) problems in real life, this paper proposes the probabilistic spherical hesitant fuzzy set (PSHFS) theory based on the spherical HFS (SHFS) and the probabilistic HFS (PHFS). Firstly, the PSHFS is developed, and basic operations on its PSHF elements (PSHFEs) are proposed. Secondly, generalized PSHF weighted averaging (GPSHFWA) and generalized PSHF weighted geometric (GPSHFWG) operators are constructed, and their properties and some special forms are investigated. Thirdly, for MADM problems in which the evaluation criteria have different priorities, we propose generalized PSHF prioritized weighted averaging (GPSHFPWA) and geometric (GPSHFPWG) operators and investigate their properties and some special cases. Fourthly, two new MADM techniques based on the two types of proposed operators are constructed for practical MADM problems. Finally, the effectiveness of the two techniques is tested through an application example of green enterprise credit selection (GECS). A sensitivity analysis shows how different parameter values influence the optimal alternative, demonstrating the flexibility of the proposed techniques, and a comparison with several existing MADM techniques demonstrates their advantages.
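The crisp backbone of generalized weighted averaging operators of this kind is the weighted power mean, sketched below. The PSHF operators in the paper apply this idea to membership, abstinence, and non-membership degrees with probabilities; the scalar version here is only an illustration of the role of the parameter λ.

```python
def gwa(values, weights, lam=1.0):
    """Generalized weighted averaging (weighted power mean):
    (sum_i w_i * x_i**lam) ** (1/lam), with weights summing to 1."""
    s = sum(w * v ** lam for w, v in zip(weights, values))
    return s ** (1.0 / lam)

# lam = 1 gives the arithmetic mean; larger lam emphasizes large inputs.
avg = gwa([2.0, 8.0], [0.5, 0.5], 1.0)    # plain weighted average
emph = gwa([2.0, 8.0], [0.5, 0.5], 2.0)   # quadratic mean, pulled toward 8
```

Varying λ is exactly what the abstract's sensitivity analysis explores: different parameter values shift how strongly extreme evaluations influence the aggregate, and hence which alternative comes out on top.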
Magnetic Resonance Imaging (MRI) is crucial for clinical diagnostics, offering high-resolution anatomical and functional imaging without ionizing radiation. However, prolonged acquisition times in conventional MRI lead to motion artifacts, limiting efficiency and reliability. While deep learning models such as GANs and DDPMs show promise in MRI synthesis, DDPMs suffer from stochastic variability that affects image consistency. This study proposes Synthetic Modality Diffusion (SynthModDiff), a novel multi-domain image-to-image translation framework featuring a two-stage diffusion process with a noise-aware Forward Process and Reverse Process to enhance fidelity and reduce residual noise. Experiments across multiple datasets demonstrate state-of-the-art performance in NMAE, SSIM, and PSNR metrics, while preserving fine anatomical details, making SynthModDiff highly suitable for clinical applications like radiotherapy planning.
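For readers unfamiliar with diffusion models, the standard DDPM forward (noising) process that frameworks like the one above build on can be sketched as follows; this is the generic textbook formulation, not SynthModDiff's two-stage, noise-aware variant, and the schedule and inputs are illustrative.

```python
import math

def forward_diffuse(x0, t, betas, eps):
    """Standard DDPM forward process:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_s) for s <= t."""
    alpha_bar = 1.0
    for b in betas[:t]:
        alpha_bar *= 1.0 - b
    return [math.sqrt(alpha_bar) * x + math.sqrt(1 - alpha_bar) * e
            for x, e in zip(x0, eps)]

# Two "pixels" after one noising step with beta_1 = 0.19 and zero noise,
# so the signal is simply scaled by sqrt(0.81) = 0.9.
xt = forward_diffuse([1.0, 2.0], 1, [0.19], [0.0, 0.0])
```

The reverse process then learns to invert these steps; reducing the residual noise left after that inversion is precisely the consistency issue the abstract targets.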
Pub. online: 6 Mar 2026
Type: Research Article
Open Access
Journal: Informatica
Volume 37, Issue 1 (2026), pp. 229–249
Abstract
Forward-looking coding has recently been introduced as a source modelling paradigm that exploits predictions of forthcoming symbols. In this paper, we extend this methodology to word-level alphabets, enabling improved compression performance for large and variable-length symbol sets. We present a space-efficient scheme for encoding header information, with particular emphasis on the accurate representation of symbol frequency distributions. In addition, we propose an alternative ordering strategy for word-based dictionaries that leverages the adaptive nature of forward-looking compression. We further show how these techniques can be integrated with a word-based Prediction by Partial Matching model of order one, while avoiding the zero-frequency problem. Experimental results confirm the effectiveness of the proposed approach across multiple datasets.
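A minimal order-1 word model with an escape count, one common way to sidestep the zero-frequency problem (in the style of PPM method A), can be sketched as follows; this illustrates the general mechanism only, not the paper's forward-looking scheme or its header encoding.

```python
from collections import defaultdict, Counter

class Order1WordModel:
    """Order-1 word model: predicts the next word from the previous one,
    reserving one count of probability mass for unseen words (escape)."""
    def __init__(self):
        self.ctx = defaultdict(Counter)   # previous word -> successor counts

    def update(self, prev, word):
        self.ctx[prev][word] += 1

    def prob(self, prev, word):
        c = self.ctx[prev]
        total = sum(c.values()) + 1       # +1 reserves mass for the escape symbol
        if c[word]:
            return c[word] / total
        return 1 / total                  # escape; a lower-order fallback is elided here

m = Order1WordModel()
text = "the cat sat on the mat the cat ran".split()
for a, b in zip(text, text[1:]):
    m.update(a, b)
p = m.prob("the", "cat")   # "the" was followed by cat, mat, cat -> 2 of (3 + 1 escape)
```

Because every context keeps an escape count, no word ever receives probability zero, which is the property the abstract relies on when combining the dictionary with the order-one model.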
Pub. online: 11 Feb 2026
Type: Research Article
Open Access
Journal: Informatica
Volume 37, Issue 1 (2026), pp. 87–107
Abstract
The proliferation of wearable healthcare devices has created the need to run artificial intelligence applications on these resource-constrained devices, bringing computation closer to the data sources to achieve faster, localized decision-making with improved responsiveness and privacy. This contribution presents the results of an experimental evaluation of artificial neural network compression techniques, including quantization, structured pruning, and knowledge distillation, applied to multi-label classification of electrocardiogram (ECG) signals. The experiments were carried out on the PTB-XL dataset using three deep learning models: an LSTM-based recurrent neural network, a 1D convolutional neural network, and a 1D residual neural network. The results show how the compression methods impact model quality and highlight opportunities to reduce model size and accelerate inference, thereby enabling effective deployment on resource-constrained edge devices.
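Of the three techniques named above, quantization is the simplest to sketch. Below is generic symmetric post-training int8 quantization of a weight vector, shown with plain Python lists; the weight values are illustrative and this is not the specific pipeline used in the evaluation.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: map floats to int8 codes plus one
    shared scale, so each weight shrinks from 32 bits to 8."""
    scale = max(abs(w) for w in weights) / 127 or 1.0   # guard all-zero weights
    q = [round(w / scale) for w in weights]             # codes in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [x * scale for x in q]

w = [0.5, -1.27, 0.003, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Per-weight reconstruction error is bounded by scale / 2 for in-range weights.
err = max(abs(a - b) for a, b in zip(w, w_hat))
```

The accuracy cost comes entirely from that bounded rounding error, which is why quantization often preserves classification quality while cutting model size roughly fourfold.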
Statistical model checking offers an alternative to traditional model checking for large stochastic systems, addressing state space explosion and approximating quantitative properties. This paper proposes machine learning approaches using decision trees to approximate zero-reachability states, offering both computational efficiency and interpretability. Statistical analysis is used as an alternative approach to establish simulation run length bounds to control computation errors. Experimental results across standard Markov models demonstrate that our decision structures maintain high correctness (99% in most cases), reduce runtime, and have minimal memory overhead. Even when some methods show limitations, alternative approaches within our framework yield effective results.
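The run-length question above is commonly answered with the Chernoff-Hoeffding bound, sketched below; this is the standard bound used in statistical model checking, offered here only as background, not as the paper's specific analysis.

```python
import math

def runs_needed(eps, delta):
    """Chernoff-Hoeffding bound: number of i.i.d. simulation runs so that the
    empirical probability lies within eps of the true probability with
    confidence at least 1 - delta: N >= ln(2/delta) / (2 * eps**2)."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

# About 18 445 runs suffice for an absolute error of 0.01 at 95% confidence.
n = runs_needed(0.01, 0.05)
```

The quadratic dependence on 1/eps is what makes tight error bounds expensive, and it motivates the complementary use of learned structures to prune states where the reachability probability is provably zero.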
This research presents a novel hybrid portfolio optimization framework that combines the Hierarchical Risk Parity (HRP) algorithm with two Multi-Criteria Decision-Making (MCDM) methods, MEREC and WEDBA, specifically to overcome fundamental shortcomings in the standard HRP model. The central goal is to alleviate the chaining problem and resolve HRP’s difficulty in identifying the optimal number of clusters, issues known to negatively affect portfolio diversification and risk allocation. To achieve this structural improvement, the Elbow method is integrated directly into the HRP process, ensuring a robust cluster structure is defined before any weight allocation occurs. The MEREC method is then utilized to calculate objective criterion weights, while the WEDBA approach is employed to assess the financial performance of individual assets within each cluster generated by HRP. This HRP–MCDM algorithm is tested using daily closing price data for stocks on the BIST 100 Index covering the 2018–2022 period. The performance of portfolios generated across seven distinct linkage methods (Ward, single, complete, average, weighted, centroid, and median) is rigorously benchmarked against the outcomes from the traditional HRP approach. Findings demonstrate that the HRP–MCDM framework significantly boosts both return levels and risk-adjusted metrics, especially when using the single and Ward linkage methods, thereby surpassing the standard HRP algorithm in the majority of test cases. By strategically blending machine-learning-based risk clustering with objective, multi-criteria evaluation, this study makes a vital methodological contribution to the portfolio optimization domain, equipping investors with a more stable, transparent, and performance-focused asset allocation instrument.
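The Elbow step mentioned above can be sketched with a minimal heuristic: given the within-cluster sum of squares (WCSS) for increasing k, pick the k where the curve bends most sharply. The second-difference criterion and the WCSS values below are illustrative assumptions, not the exact procedure or data from the study.

```python
def elbow(wcss):
    """Pick the elbow of a WCSS curve (for k = 1, 2, ..., len(wcss)) as the k
    with the largest second difference, i.e. the sharpest bend."""
    second = [wcss[i - 1] - 2 * wcss[i] + wcss[i + 1]
              for i in range(1, len(wcss) - 1)]
    return second.index(max(second)) + 2   # +2: k starts at 1, diffs start at k = 2

# Illustrative WCSS values for k = 1..6; the drop flattens out after k = 3.
wcss = [100.0, 55.0, 20.0, 15.0, 12.0, 10.0]
k = elbow(wcss)
```

Fixing k this way before HRP's recursive bisection is what gives the hybrid framework a stable cluster structure regardless of which linkage method built the dendrogram.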
Verification in modern e-voting protocols allows voters and the general public to independently confirm the election results. However, verification alone is insufficient to hold entities accountable for misconduct, or to protect honest participants from false accusations. This limitation is especially critical in voting protocols with multiple authorities, where the ability to identify the specific misbehaving entity is essential. We present DiReCT, the first multiparty protocol that integrates dispute resolution with individual accountability. Our protocol addresses two previously unresolved dispute types: authorities blocking access to the election, and authorities denying the casting of a ballot. In addition, DiReCT improves timeliness, allowing misconduct to be detected proactively during the election. As a result, voters can identify and recover from attacks that prevent their ballots from being recorded. Notably, DiReCT achieves these capabilities under low trust assumptions on the authorities.