Pub. online: 15 Jun 2023. Research Article. Open Access.
Volume 34, Issue 2 (2023), pp. 285–315
Over the past decades, many methods have been proposed to resolve the linear or nonlinear mixing of spectra in hyperspectral data. Due to the relatively low spatial resolution of hyperspectral imaging, each image pixel may contain spectra from multiple materials. Hyperspectral unmixing is the task of identifying these materials and their abundances. A few main approaches to hyperspectral unmixing have emerged, such as nonnegative matrix factorization (NMF), linear mixture modelling (LMM), and, most recently, autoencoder networks. These methods take different approaches to extracting endmember and abundance information from hyperspectral images. However, because of the huge variation in the hyperspectral data being used, it is difficult to determine which methods perform adequately on which datasets and whether they can generalize to arbitrary input data. To mitigate this problem, we propose a hyperspectral unmixing algorithm testing methodology and create a standard benchmark for testing both existing and newly created algorithms. Several experiments were designed, and a variety of hyperspectral datasets were used in this benchmark to compare openly available algorithms and to determine the best-performing ones.
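As an illustration of the linear mixture model underlying these approaches, the following is a minimal sketch (not the benchmark described above) that unmixes a small synthetic hyperspectral cube with scikit-learn's NMF; the synthetic endmembers, abundances, and all parameter choices are illustrative assumptions.

```python
# Minimal sketch: NMF-based unmixing of a synthetic hyperspectral cube.
# Under the linear mixture model, each pixel spectrum is approximately a
# nonnegative combination of endmember spectra: X ~= A @ E, where rows of
# E are endmember spectra and rows of A are per-pixel abundances.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_pixels, n_bands, n_endmembers = 500, 100, 3

# Synthetic endmember spectra (nonnegative) and abundances summing to one.
E_true = rng.random((n_endmembers, n_bands))
A_true = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)
X = A_true @ E_true + 0.01 * rng.random((n_pixels, n_bands))  # noisy mixed pixels

# Factorize X into nonnegative abundance and endmember matrices.
model = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500, random_state=0)
A_est = model.fit_transform(X)   # estimated abundances (up to scale and permutation)
E_est = model.components_        # estimated endmember spectra

print("relative reconstruction error:",
      np.linalg.norm(X - A_est @ E_est) / np.linalg.norm(X))
```

A real benchmark would additionally align the estimated endmembers with ground-truth spectra (e.g. by spectral angle) before scoring, since NMF recovers the factors only up to permutation and scaling.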
Volume 26, Issue 4 (2015), pp. 649–662
A multitude of heuristic stochastic optimization algorithms have been described in the literature for obtaining good solutions to the box-constrained global optimization problem, often with a limit on the number of function evaluations used. Within the larger question of which algorithms behave well on which types of instances, our focus here is on benchmarking the behavior of algorithms through experiments on test instances. We argue that pure random search provides a natural minimum performance benchmark; i.e., algorithms should do better. We introduce the cumulative distribution function of the record value as a performance measure against the pure random search benchmark, together with the notion of algorithms being dominated by others. The concepts are illustrated using frequently used algorithms.
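As an illustration of this benchmark, the following is a minimal sketch (not the paper's code) of pure random search on a box-constrained test function: each run tracks the record, i.e. best-so-far, value within a fixed evaluation budget, and the empirical cumulative distribution function of the record value is estimated over repeated runs. The test function, dimension, budget, and number of runs are illustrative assumptions.

```python
# Minimal sketch: pure random search as a minimum performance benchmark,
# with the empirical CDF of the record (best-so-far) value over many runs.
import numpy as np

def sphere(x):
    """Simple box-constrained test instance: minimum value 0 at the origin."""
    return float(np.sum(x ** 2))

def pure_random_search(f, lower, upper, budget, rng):
    """Sample points uniformly in the box and return the record value."""
    record = np.inf
    for _ in range(budget):
        x = rng.uniform(lower, upper)
        record = min(record, f(x))
    return record

rng = np.random.default_rng(42)
lower, upper = np.full(5, -5.0), np.full(5, 5.0)   # 5-dimensional box [-5, 5]^5
budget, n_runs = 1000, 200

# Record values from independent runs; their distribution is the benchmark
# that any competing algorithm should beat with the same evaluation budget.
records = np.array([pure_random_search(sphere, lower, upper, budget, rng)
                    for _ in range(n_runs)])

# Empirical CDF of the record value: estimates P(record <= y).
ys = np.sort(records)
cdf = np.arange(1, n_runs + 1) / n_runs
print("median record value of pure random search:", np.median(records))
```

Comparing an algorithm's empirical record-value CDF against this curve (for the same budget) shows whether it dominates pure random search, in the sense of achieving at least as good record values with at least the same probability.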