Forward-looking coding has recently been introduced as a source modelling paradigm that exploits predictions of forthcoming symbols. In this paper, we extend this methodology to word-level alphabets, enabling improved compression performance for large and variable-length symbol sets. We present a space-efficient scheme for encoding header information, with particular emphasis on the accurate representation of symbol frequency distributions. In addition, we propose an alternative ordering strategy for word-based dictionaries that leverages the adaptive nature of forward-looking compression. We further show how these techniques can be integrated with a word-based Prediction by Partial Matching model of order one, while avoiding the zero-frequency problem. Experimental results confirm the effectiveness of the proposed approach across multiple datasets.
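As a rough illustration of the header-encoding concern, the sketch below serializes a word-level frequency table with variable-length integers; the paper's actual forward-looking scheme, codeword assignment, and dictionary ordering are not reproduced, and all names are illustrative.

```python
# Minimal sketch of a word-level dictionary with a compact frequency header.
# Illustrates the general idea of space-efficient header encoding only.
from collections import Counter

def varint(n: int) -> bytes:
    """Encode a non-negative integer as a LEB128-style varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def build_header(text: str) -> bytes:
    """Serialize word frequencies: symbol count, then (word, freq) pairs."""
    freqs = Counter(text.split())
    # Sorting by descending frequency keeps frequent words near the front,
    # which a decoder can exploit when assigning short codewords.
    items = sorted(freqs.items(), key=lambda kv: -kv[1])
    header = bytearray(varint(len(items)))
    for word, freq in items:
        w = word.encode("utf-8")
        header += varint(len(w)) + w + varint(freq)
    return bytes(header)

print(len(build_header("the cat sat on the mat the end")))
```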
Statistical model checking offers an alternative to traditional model checking for large stochastic systems, mitigating state-space explosion by approximating quantitative properties. This paper proposes machine-learning approaches that use decision trees to approximate the set of zero-reachability states, offering both computational efficiency and interpretability. Statistical analysis is further used to bound simulation run lengths and thereby control approximation error. Experimental results across standard Markov models demonstrate that our decision structures maintain high correctness (99% in most cases), reduce runtime, and incur minimal memory overhead. Even where individual methods show limitations, alternative approaches within our framework remain effective.
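For a flavour of the decision-tree component, the hedged sketch below trains a tree to flag states from which a target is unreachable; the state features and the labelling rule are invented stand-ins, not the paper's models or training data.

```python
# Hedged sketch: learn a decision tree that approximates zero-reachability
# states of a Markov model from simple state features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(500, 3))       # hypothetical state features
y = (X[:, 0] + X[:, 1] == 0).astype(int)     # toy "cannot reach target" label

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
# Interpretability: the learned structure can be inspected directly.
print(export_text(clf, feature_names=["counter", "queue_len", "mode"]))
```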
The proliferation of wearable healthcare devices has created the need to run artificial intelligence applications directly on these resource-constrained devices, bringing computation closer to the data sources to achieve faster, localized decision-making with improved responsiveness and privacy. This contribution presents the results of an experimental evaluation of artificial neural network compression techniques, including quantization, structured pruning, and knowledge distillation, applied to multi-label classification of electrocardiogram (ECG) signals. The experiments were carried out on the PTB-XL dataset using three deep learning models: an LSTM-based recurrent neural network, a 1D convolutional neural network, and a 1D residual neural network. The results show how the compression methods impact model quality and highlight opportunities to reduce model size and accelerate inference, thereby enabling effective deployment on resource-constrained edge devices.
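As one concrete instance of the evaluated techniques, the following sketch applies post-training dynamic quantization in PyTorch to a stand-in LSTM classifier; the architecture, input sizes, and label count are assumptions, not the models trained on PTB-XL.

```python
# Illustrative sketch: int8 dynamic quantization of a toy LSTM ECG classifier.
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    def __init__(self, n_labels: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=12, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_labels)

    def forward(self, x):
        out, _ = self.lstm(x)           # x: (batch, time, leads)
        return self.head(out[:, -1])    # multi-label logits at the last step

model = ECGClassifier()
# Quantize LSTM and Linear weights to int8; activations stay in float.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)
print(qmodel(torch.randn(1, 1000, 12)).shape)   # torch.Size([1, 5])
```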
This research presents a novel hybrid portfolio optimization framework that combines the Hierarchical Risk Parity (HRP) algorithm with two Multi-Criteria Decision-Making (MCDM) methods, MEREC and WEDBA, specifically to overcome fundamental shortcomings in the standard HRP model. The central goal is to alleviate the chaining problem and resolve HRP’s difficulty in identifying the optimal number of clusters, issues known to negatively affect portfolio diversification and risk allocation. To achieve this structural improvement, the Elbow method is integrated directly into the HRP process, ensuring a robust cluster structure is defined before any weight allocation occurs. The MEREC method is then utilized to calculate objective criterion weights, while the WEDBA approach is employed to assess the financial performance of individual assets within each cluster generated by HRP. This HRP–MCDM algorithm is tested using daily closing price data for stocks on the BIST 100 Index covering the 2018–2022 period. The performance of portfolios generated across seven distinct linkage methods (Ward, single, complete, average, weighted, centroid, and median) is rigorously benchmarked against the outcomes from the traditional HRP approach. Findings demonstrate that the HRP–MCDM framework significantly boosts both return levels and risk-adjusted metrics, especially under the single and Ward linkage methods, thereby surpassing the standard HRP algorithm in the majority of test cases. By strategically blending machine-learning-based risk clustering with objective, multi-criteria evaluation, this study makes a vital methodological contribution to the portfolio optimization domain, equipping investors with a more stable, transparent, and performance-focused asset allocation instrument.
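The Elbow step can be illustrated as follows: cluster the asset correlation structure hierarchically and track a within-cluster dispersion criterion as the cluster count grows, picking the bend. The returns, the dispersion criterion, and the linkage choice here are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of the Elbow step preceding HRP weight allocation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
returns = rng.normal(size=(250, 10))            # 250 days x 10 assets (toy)
corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(0.5 * (1.0 - corr))              # standard HRP distance metric
Z = linkage(squareform(dist, checks=False), method="ward")

def within_dispersion(k: int) -> float:
    """Sum of intra-cluster pairwise distances for a k-cluster cut."""
    labels = fcluster(Z, t=k, criterion="maxclust")
    total = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        total += dist[np.ix_(idx, idx)].sum()
    return total

# Elbow: dispersion drops quickly, then flattens; pick k at the bend.
for k in range(2, 7):
    print(k, round(within_dispersion(k), 3))
```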
Verification in modern e-voting protocols allows voters and the general public to independently confirm the election results. However, verification alone is insufficient to hold entities accountable for misconduct or to protect honest participants from false accusations. This limitation is especially critical in voting protocols with multiple authorities, where the ability to identify the specific misbehaving entity is essential. We present DiReCT, the first multiparty protocol that integrates dispute resolution with individual accountability. Our protocol addresses two previously unresolved disputes: authorities blocking access to the election, and authorities denying the casting of a ballot. In addition, DiReCT improves timeliness, allowing misconduct to be detected proactively during the election. As a result, voters can identify and recover from attacks that prevent their ballots from being recorded. Notably, DiReCT achieves these capabilities under low trust assumptions on the authorities.
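One building block behind such accountability, non-repudiable evidence that a ballot was cast, can be sketched with a signed receipt; this single-signature illustration is our own simplification and not DiReCT's multiparty protocol.

```python
# Hedged sketch of one accountability ingredient: a signed ballot receipt.
# If an authority later denies that a ballot was cast, the voter can present
# this non-repudiable evidence to any verifier.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
import hashlib

authority_key = Ed25519PrivateKey.generate()
ballot_hash = hashlib.sha256(b"encrypted-ballot-bytes").digest()
receipt = authority_key.sign(ballot_hash)       # authority commits to the cast

# Dispute resolution: check the receipt against the authority's public key.
authority_key.public_key().verify(receipt, ballot_hash)   # raises if forged
print("receipt verified: authority cannot deny the ballot was recorded")
```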
Quality Function Deployment (QFD) is a technique used to collect Customer Requirements (CRs) for a product before manufacturing begins, and to determine whether the CRs will be met by correlated or uncorrelated Design Requirements (DRs). In the QFD technique, customers tend to express their expectations of the product through linguistic expressions rather than exact numbers. The vagueness and imprecision of linguistic expressions can be captured effectively using fuzzy set theory. Pythagorean fuzzy (PF) sets, an extension of ordinary fuzzy sets, offer the decision maker a larger membership and non-membership assignment region than ordinary intuitionistic fuzzy sets. In this paper, customer requirements in QFD analysis are prioritized by the Best-Worst Method (BWM), an optimization-based weighting method that has become very popular in recent years. In the proposed BWM-QFD methodology, interval-valued Pythagorean fuzzy (IVPF) sets are used for the first time to handle the uncertainties in linguistic judgments. The two-phase IVPF methodology is applied to a real-life e-scooter design problem addressing 12 customer and 12 design requirements. The proposed methodology determines the weights of the customer requirements, identifies which design requirements are stronger, and performs a competitive analysis revealing the position of our company in the market under a fuzzy environment. In addition, sensitivity and comparative analyses demonstrate the dominance of our company over the other competitors.
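For reference, the crisp (non-fuzzy) linear Best-Worst Method can be sketched as a small linear program; the interval-valued Pythagorean fuzzy extension proposed in the paper is not reproduced, and the comparison vectors below are illustrative.

```python
# Hedged sketch of the linear BWM: minimize xi subject to
# |w_best - a_Bj*w_j| <= xi and |w_j - a_jW*w_worst| <= xi, sum(w) = 1, w >= 0.
import numpy as np
from scipy.optimize import linprog

def bwm_weights(a_B, a_W, best, worst):
    n = len(a_B)                          # variables: w_1..w_n, then xi
    A_ub, b_ub = [], []
    for j in range(n):
        for sign in (+1, -1):
            row = np.zeros(n + 1)         # +/-(w_best - a_Bj*w_j) - xi <= 0
            row[best] += sign
            row[j] -= sign * a_B[j]
            row[-1] = -1.0
            A_ub.append(row)
            b_ub.append(0.0)
            row = np.zeros(n + 1)         # +/-(w_j - a_jW*w_worst) - xi <= 0
            row[j] += sign
            row[worst] -= sign * a_W[j]
            row[-1] = -1.0
            A_ub.append(row)
            b_ub.append(0.0)
    c = np.zeros(n + 1)
    c[-1] = 1.0                           # objective: minimize xi
    A_eq = [np.append(np.ones(n), 0.0)]   # weights sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    return res.x[:n], res.x[-1]

# Example: 4 criteria, criterion 0 is best, criterion 3 is worst.
w, xi = bwm_weights(a_B=[1, 2, 4, 8], a_W=[8, 4, 2, 1], best=0, worst=3)
print(np.round(w, 3), round(xi, 4))
```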
In the legal domain, ontologies organize legal concepts and their relationships, while knowledge graphs connect these concepts to specific entities in legal documents. This study proposes a solution for integrating an ontology and a knowledge graph, called the Legal-Onto model, to construct the knowledge base of an intelligent retrieval system in the legal domain. The Legal-Onto model combines an ontology as the conceptual layer with a knowledge graph as the implementation layer for representing the content of legal documents. The relational model is integrated with the knowledge-graph structure to identify relations between concepts and entities extracted from the ontology in the target domain. Moreover, this research addresses inherent challenges in semantic, knowledge-driven search. The specific objective is to accurately extract relevant information from legal documents in response to user queries. The experimental results show that this method is more effective than state-of-the-art natural language processing methods and large language models that lack specific legal-domain knowledge.
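The two-layer idea can be sketched with rdflib: ontology classes form the conceptual layer, extracted entities populate the knowledge graph, and a semantic query searches both. All names, namespaces, and the query are invented for illustration.

```python
# Illustrative sketch of an ontology layer plus a knowledge-graph layer.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

LEX = Namespace("http://example.org/legal-onto#")
g = Graph()
# Conceptual layer: ontology classes and relations.
g.add((LEX.LegalDocument, RDF.type, RDFS.Class))
g.add((LEX.regulates, RDFS.domain, LEX.LegalDocument))
# Implementation layer: entities extracted from a specific document.
g.add((LEX.Decree_01_2021, RDF.type, LEX.LegalDocument))
g.add((LEX.Decree_01_2021, LEX.regulates, Literal("land use rights")))

# Semantic retrieval: find documents regulating a queried topic.
q = """SELECT ?doc WHERE { ?doc a lex:LegalDocument ; lex:regulates ?t .
                           FILTER(CONTAINS(?t, "land use")) }"""
for row in g.query(q, initNs={"lex": LEX}):
    print(row.doc)
```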
The open data movement has led to the widespread sharing of data across all sectors, offering great potential for innovation and informed decision-making. Nevertheless, open data quality remains a key challenge. This study provides a systematic overview of 16 recent methodologies for data quality assessment, emphasizing their alignment with the ISO/IEC 25012 and ISO 8000 standards, the FAIR principles, the 5-star Linked Open Data scheme, and the DCAT vocabulary. We also highlight foundational work and identify adaptable methods suitable for the Slovenian open data portal. By recommending practical approaches, this work provides a strategic basis for improving data quality in regional and national platforms, supporting improved data utilization and transparency for end users.
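As a minimal example of the kind of measure such methodologies operationalize, the sketch below computes a completeness score for a tabular dataset in the spirit of ISO/IEC 25012; the data and the metric's exact definition are illustrative assumptions.

```python
# Illustrative sketch: completeness as the share of non-missing cells.
import pandas as pd

df = pd.DataFrame({"municipality": ["Ljubljana", "Maribor", None],
                   "population":   [295_000, None, 42_000]})
completeness = df.notna().to_numpy().mean()
print(f"completeness: {completeness:.0%}")    # 67% of cells are filled
```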
Traditional loss functions such as mean squared error (MSE) are widely employed, but they often struggle to capture the dynamic characteristics of high-dimensional nonlinear systems. To address this issue, we propose an improved loss function that integrates linear multistep methods, system-consistency constraints, and prediction-phase error control, simultaneously improving training accuracy and long-term stability. Furthermore, the introduction of recursive loss and interpolation strategies brings the model closer to practical prediction scenarios, broadening its applicability. Numerical simulations demonstrate that the proposed loss significantly outperforms both standard MSE and existing custom loss functions.
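A hedged sketch of the general idea follows: augment a finite-difference MSE term with the residual of a two-step Adams-Bashforth scheme evaluated on the learned vector field. The AB2 choice, the weighting, and the omission of the recursive and interpolation strategies are our simplifications, not the paper's exact construction.

```python
# Hedged sketch of a multistep-consistency loss for learning x' = f(x).
import torch

def multistep_loss(f, x, h, lam=1.0):
    """x: (T, d) trajectory sampled at step h; f: learned vector field."""
    dx = (x[1:] - x[:-1]) / h                 # finite-difference targets
    fx = f(x)
    data_term = torch.mean((fx[:-1] - dx) ** 2)
    # AB2 residual: x[n+2] - x[n+1] - h*(1.5*f(x[n+1]) - 0.5*f(x[n]))
    res = x[2:] - x[1:-1] - h * (1.5 * fx[1:-1] - 0.5 * fx[:-2])
    return data_term + lam * torch.mean(res ** 2)

# Toy check on x' = -x, whose exact vector field is f(x) = -x.
t = torch.arange(0, 5, 0.01)
x = torch.exp(-t).unsqueeze(1)
print(multistep_loss(lambda z: -z, x, h=0.01).item())   # near zero
```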
New-generation battery technology investments play a key role in the transition from fossil fuels to renewable energy. The main difficulty is that decision makers face uncertainty about which of the numerous criteria affecting investment performance should be prioritized, and the lack of comprehensive models in the literature for systematically prioritizing these criteria creates a significant gap. The aim of this study is to determine priority strategies for increasing the performance of new-generation battery technology investments. To this end, an innovative decision-making model is developed by integrating multi-facet fuzzy sets, logarithmic least-squares weighting, and the WASPAS technique. The study contributes to the literature both by prioritizing the performance indicators of new-generation battery technology investments through this model and by developing multi-facet fuzzy sets. Moreover, redefining membership degrees with different parameter sets for each scenario enables dynamic decision-making, allowing clearer scenario-based and dynamic evaluations in complex decision processes. The main findings indicate that circularity and compatibility with existing manufacturing infrastructure are the priorities for improving the performance of these projects.
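For orientation, the crisp WASPAS aggregation at the core of such a model can be sketched as below; the multi-facet fuzzy sets and logarithmic least-squares weighting stages are not reproduced, and the decision matrix and weights are illustrative.

```python
# Minimal sketch of the crisp WASPAS aggregation step: combine the weighted
# sum and weighted product models with a balance parameter lambda.
import numpy as np

def waspas(X, w, lam=0.5):
    """X: alternatives x criteria (benefit criteria); w: weights, sum(w)=1."""
    Xn = X / X.max(axis=0)                    # linear normalization
    wsm = (Xn * w).sum(axis=1)                # weighted sum model
    wpm = np.prod(Xn ** w, axis=1)            # weighted product model
    return lam * wsm + (1 - lam) * wpm        # joint WASPAS score

X = np.array([[0.8, 0.6, 0.9],                # e.g. battery alternatives
              [0.7, 0.9, 0.5],
              [0.9, 0.7, 0.8]])
w = np.array([0.5, 0.3, 0.2])                 # e.g. from an LLS weighting step
print(np.round(waspas(X, w), 3))              # higher score = better
```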