Journal: Informatica
Volume 14, Issue 2 (2003), pp. 223–236
Abstract
A fast gradient training algorithm for a specific neural network structure, the extra reduced size lattice-ladder multilayer perceptron, is introduced. The presented derivation of the algorithm utilizes the simplest way, recently found by the author, of exactly computing the gradients for the rotation parameters of a lattice-ladder filter. The developed neural network training algorithm is optimal in terms of the minimal number of constants, multiplications, and additions, while the regularity of the structure is also preserved.
Journal: Informatica
Volume 3, Issue 2 (1992), pp. 275–279
Abstract
In some recent papers, a discussion of global minimization algorithms for a broad class of functions was started. An idea is presented here of why this case differs from that of Lipschitzian functions with respect to convergence, and why, for a broad class of functions, an algorithm converges to the global minimum of an objective function if it generates an everywhere dense sequence of trial points.
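The density criterion can be illustrated with a minimal sketch, assuming a one-dimensional continuous objective and pure random search, which generates an almost surely everywhere dense sequence of trial points on the interval; the objective `f` and all names below are hypothetical illustrations, not taken from the paper:

```python
import math
import random


def random_search(f, lo, hi, n_trials, seed=0):
    """Pure random search on [lo, hi].

    Uniform samples are almost surely everywhere dense in [lo, hi],
    so by the density criterion the best observed value converges to
    the global minimum for any continuous objective f.
    """
    rng = random.Random(seed)
    best_x, best_f = None, math.inf
    for _ in range(n_trials):
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f


# Hypothetical multimodal objective with its global minimum near x = 0.9;
# the sine term creates many local minima that a purely local method
# could get stuck in.
def f(x):
    return (x - 0.9) ** 2 + 0.05 * math.sin(40.0 * x)


if __name__ == "__main__":
    # With a fixed seed the trial sequence is nested, so the best
    # observed value can only improve as n_trials grows.
    for n in (10, 100, 10_000):
        x_best, f_best = random_search(f, 0.0, 1.0, n)
        print(n, x_best, f_best)
```

Because the same seed produces the same sequence prefix, the record value is monotonically non-increasing in the number of trials, which is the practical face of the convergence claim in the abstract.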