Structural break detection is an important time series analysis task. It can be treated as a multi-objective optimization problem in which we seek a segmentation of the time series such that the theoretical models fitted on each segment describe it well and the segments are long enough to carry meaningful information. Metaheuristic optimization can help us solve this problem. This paper introduces a suite of new cost functions for the structural break detection task. We demonstrate that the new cost functions achieve quantitatively better precision than the cost functions employed in the literature of this domain, and we show the particular advantages of each new cost function. Furthermore, the paper promotes the use of Particle Swarm Optimization (PSO) in the domain of structural break detection, which has so far relied on the Genetic Algorithm (GA). Our experiments show that PSO outperforms GA on many of the analysed time series. Last but not least, we introduce a non-trivial generalization of the top-performing state-of-the-art approach to structural break detection, based on the Minimum Description Length (MDL) principle with an autoregressive (AR) model, to an MDL ARIMA (autoregressive integrated moving average) model.
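The two objectives named in this abstract — well-fitted per-segment models balanced against segments long enough to be informative — can be sketched as a single cost over a candidate set of breakpoints. The piecewise-constant mean model and the `min_len` threshold below are illustrative stand-ins for the paper's AR/ARIMA models and MDL-based penalties, not its actual cost functions:

```python
def segmentation_cost(series, breaks, min_len=5):
    """Toy segmentation cost: sum of per-segment fit errors (piecewise-constant
    mean model as a stand-in for AR/ARIMA), rejecting segmentations that
    contain segments too short to carry meaningful information."""
    bounds = [0] + sorted(breaks) + [len(series)]
    cost = 0.0
    for a, b in zip(bounds, bounds[1:]):
        seg = series[a:b]
        if len(seg) < min_len:
            return float('inf')  # too-short segment: infeasible segmentation
        mu = sum(seg) / len(seg)
        cost += sum((v - mu) ** 2 for v in seg)  # squared fit error
    return cost

# A series with one level shift at t = 20: the true break yields a
# lower cost than a misplaced one, which a metaheuristic can exploit.
series = [0.0] * 20 + [5.0] * 20
```

A metaheuristic such as PSO or GA would then search over breakpoint sets to minimize this cost.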
Pub. online: 5 Jan 2022 | Type: Research Article | Open Access
Journal: Informatica
Volume 33, Issue 3 (2022), pp. 523–543
Abstract
In this paper we propose modifications of the well-known particle swarm optimization (PSO) algorithm. The changes affect how particle motion is mapped from continuous space to the binary search space, a formulation widely used to solve the feature selection problem. The modified binary PSO variants were tested on the SVC2004 dataset, dedicated to the problem of user authentication based on dynamic features of a handwritten signature. Using k-nearest neighbours (kNN) as the classifier, experiments were carried out to find the optimal subset of features. The search for the subset was treated as a multicriteria optimization problem that takes into account both the accuracy of the model and the number of features.
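The baseline that such modifications start from is the standard sigmoid-transfer binary PSO, where the continuous velocity of each bit is mapped to the probability of that bit being 1. The sketch below, a minimal illustration rather than any of the paper's variants, uses a toy multicriteria fitness in which agreement with a hidden "useful-feature" mask stands in for kNN accuracy, minus a per-feature penalty:

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, n_iter=50,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Sigmoid-transfer binary PSO (maximizes fitness over bit masks)."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_bits):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                # sigmoid maps velocity to the probability of the bit being 1
                X[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-V[i][d])) else 0
            f = fitness(X[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Toy multicriteria fitness: agreement with a hidden "useful" mask
# (a stand-in for kNN accuracy) minus a penalty per selected feature.
target = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
def fitness(mask):
    acc = sum(m == t for m, t in zip(mask, target)) / len(target)
    return acc - 0.01 * sum(mask)

best, best_f = binary_pso(fitness, n_bits=10)
```

In a real feature selection setting, `fitness` would train and cross-validate a kNN model on the columns selected by `mask`.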
Pub. online: 9 Dec 2021 | Type: Research Article | Open Access
Journal: Informatica
Volume 32, Issue 4 (2021), pp. 817–847
Abstract
A method for counterfactual explanation of machine learning survival models is proposed. One of the difficulties of the counterfactual explanation problem is that the classes of examples are defined only implicitly, through the outcomes of a machine learning survival model in the form of survival functions. A condition establishing the difference between the survival functions of the original example and the counterfactual is introduced; it is based on the distance between mean times to event. It is shown that when the explained black-box model is the Cox model, the counterfactual explanation problem reduces to a standard convex optimization problem with linear constraints. For other black-box models, it is proposed to apply the well-known Particle Swarm Optimization algorithm. Numerical experiments with real and synthetic data demonstrate the effectiveness of the proposed method.
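For black boxes other than the Cox model, the PSO route can be sketched as a penalized search: minimize the distance to the original example while requiring the mean-time-to-event gap to exceed a threshold r. Everything below — the penalty weight, the bound handling, the linear toy predictor — is an illustrative assumption, not the method's actual formulation:

```python
import random

def pso_counterfactual(predict, x0, r, bounds, n_particles=30, n_iter=80, seed=1):
    """Continuous PSO: find x close to x0 with |predict(x) - predict(x0)| >= r.
    The constraint is folded into the cost via a penalty term (one common choice)."""
    rng = random.Random(seed)
    dim, t0 = len(x0), predict(x0)
    def cost(x):
        dist = sum((a - b) ** 2 for a, b in zip(x, x0)) ** 0.5
        gap = abs(predict(x) - t0)
        return dist + 100.0 * max(0.0, r - gap)  # penalized if gap < r
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_c = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_c[i])
    gbest, gbest_c = pbest[g][:], pbest_c[g]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to bounds
            c = cost(X[i])
            if c < pbest_c[i]:
                pbest[i], pbest_c[i] = X[i][:], c
                if c < gbest_c:
                    gbest, gbest_c = X[i][:], c
    return gbest

# Toy black-box "mean time to event": decreases with the feature sum.
predict = lambda x: 10.0 - sum(x)
x0 = [1.0, 1.0, 1.0]
cf = pso_counterfactual(predict, x0, r=2.0, bounds=(0.0, 5.0))
```

In the survival setting, `predict` would return the mean time to event computed from the model's survival function rather than this linear toy.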
Pub. online: 1 Jan 2017 | Type: Research Article | Open Access
Journal: Informatica
Volume 28, Issue 3 (2017), pp. 415–438
Abstract
The Improved Artificial Bee Colony (IABC) algorithm is a variant of the well-known Artificial Bee Colony (ABC) algorithm. IABC adds a new initialization approach and a new search mechanism to ABC in order to avoid local optima and improve convergence speed. The new search mechanism introduces additional parameters, and the values chosen for them have a direct impact on the performance of the IABC algorithm. For better performance, the parameter values should change from problem to problem and should also be updated during the run of the algorithm. In this paper, two novel parameter control methods and related algorithms are developed to increase the performance of the IABC algorithm on large-scale optimization problems. The first is adaptive parameter control, which updates parameter values according to feedback coming from the search process during the run of the algorithm. In the second method, called self-adaptive parameter control, the management of the parameter values is left to the algorithm itself. The adaptive IABC algorithms were examined and compared with other ABC variants and state-of-the-art algorithms on a benchmark function suite. Analysis of the experimental results shows that the adaptive IABC algorithms outperform almost all ABC variants and give competitive results with state-of-the-art algorithms from the literature.
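Feedback-driven (adaptive) parameter control in the abstract's sense can be illustrated with the classic 1/5 success rule, in which a step-size parameter is enlarged when trial moves succeed often and shrunk otherwise. This is a generic sketch of the idea of adapting a search parameter from run-time feedback, not the IABC update rules themselves:

```python
import random

def adaptive_search(f, x0, sigma=1.0, n_iter=200, seed=0):
    """Hill climbing with adaptive step-size control: sigma is tuned from
    the observed success rate of recent moves (the 1/5 success rule),
    mirroring how adaptive schemes update parameters during the run."""
    rng = random.Random(seed)
    x, fx, successes = x0, f(x0), 0
    for t in range(1, n_iter + 1):
        y = x + rng.gauss(0.0, sigma)   # trial move with current step size
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                 # adapt sigma every 20 evaluations
            rate = successes / 20
            sigma *= 1.22 if rate > 0.2 else 0.82
            successes = 0
    return x, fx

# Minimize a 1-D quadratic starting far from its optimum at x = 3.
best, best_f = adaptive_search(lambda v: (v - 3.0) ** 2, x0=-10.0)
```

Self-adaptive control differs in that the parameter (here sigma) would be encoded in each candidate solution and evolved by the search itself rather than updated by an external rule.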