Journal: Informatica
Volume 36, Issue 3 (2025), pp. 677–712
Abstract
Fair comparison with state-of-the-art evolutionary algorithms is crucial, but it is hindered by differences in benchmark problems, parameter settings, and stopping criteria across studies. Metaheuristic frameworks can help, yet they often lack clarity about which algorithm versions they implement and what improvements or deviations they introduce; some also restrict parameter configuration. We analysed the frameworks’ source code and identified inconsistencies between their implementations. Performance comparisons across frameworks, even under identical settings, revealed significant differences, sometimes even against the original authors’ own code. These findings call into question the validity of comparative studies that rely on such frameworks. We provide guidelines for improving open-source metaheuristic implementations, aiming to support more credible and reliable comparative studies.