Pub. online: 1 Jan 2007 · Type: Research Article · Open Access
Volume 18, Issue 2 (2007), pp. 279–288
In this paper an exact and complete analysis of the Lloyd–Max algorithm and its initialization is carried out, and an effective method for initializing the Lloyd–Max algorithm of optimal scalar quantization for a Laplacian source is proposed. The proposed method is a very simple way of making an intelligent guess at the starting points for the iterative Lloyd–Max algorithm: the initial values of the iteration are determined from the parameters of the corresponding compandor. It is demonstrated that, following this logic, the proposed method provides rapid convergence of the Lloyd–Max algorithm.
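The idea can be sketched in code: run the Lloyd–Max iteration for a zero-mean Laplacian source, with the starting levels obtained by inverting the optimal compressor characteristic (whose slope is proportional to the cube root of the density). This is an illustrative reconstruction under assumed names and parameters (scale `b`, level count `n`, tolerance), not the paper's exact procedure.

```python
import math

def cdf(t, b):
    """CDF of the zero-mean Laplacian with scale b."""
    if t == -math.inf: return 0.0
    if t == math.inf: return 1.0
    return 0.5 * math.exp(t / b) if t < 0 else 1.0 - 0.5 * math.exp(-t / b)

def partial_mean(t, b):
    """M(t) = integral of x*f(x) from -inf to t for the Laplacian density."""
    if t == -math.inf or t == math.inf: return 0.0
    if t < 0: return 0.5 * (t - b) * math.exp(t / b)
    return -0.5 * (t + b) * math.exp(-t / b)

def centroid(a, c, b):
    """Conditional mean of the Laplacian on the cell (a, c)."""
    return (partial_mean(c, b) - partial_mean(a, b)) / (cdf(c, b) - cdf(a, b))

def compandor_init(n, b=1.0):
    """Starting levels from the compandor: the optimal compressor for a
    Laplacian has slope proportional to f(x)**(1/3), i.e.
    g(x) = sign(x) * (1 - exp(-|x|/(3b))).  Inverting g at uniformly
    spaced points in (-1, 1) gives the intelligent initial guess."""
    init = []
    for i in range(n):
        u = (2 * i + 1) / n - 1.0
        x = -3.0 * b * math.log(1.0 - abs(u))
        init.append(math.copysign(x, u))
    return init

def lloyd_max(levels, b=1.0, tol=1e-10, max_iter=10000):
    """Lloyd-Max iteration: thresholds at cell midpoints, levels at
    cell centroids; returns (levels, iterations until convergence)."""
    levels = list(levels)
    for it in range(1, max_iter + 1):
        t = ([-math.inf]
             + [(levels[i] + levels[i + 1]) / 2 for i in range(len(levels) - 1)]
             + [math.inf])
        new = [centroid(t[i], t[i + 1], b) for i in range(len(levels))]
        delta = max(abs(u - v) for u, v in zip(levels, new))
        levels = new
        if delta < tol:
            return levels, it
    return levels, max_iter
```

For example, `lloyd_max(compandor_init(8))` returns the eight optimal levels for a unit-scale Laplacian together with the iteration count, which can be compared against a naive uniform initialization.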
Pub. online: 1 Jan 1993 · Type: Research Article · Open Access
Volume 4, Issues 3-4 (1993), pp. 360–383
An analytical expression for the generalization error of the minimum empirical error classifier is derived for the case where the true classes are spherically Gaussian. It is compared with the generalization error of a mean squared error classifier, the standard Fisher linear discriminant function. For spherically distributed classes the generalization error depends on the distance between the classes and the number of training samples. It depends on the intrinsic dimensionality of the data only through the initialization of the weight vector; if the initialization is successful, the dimensionality does not affect the generalization error. It is concluded that artificial neural nets are advantageous for classifying patterns in a changing environment, when the intrinsic dimensionality of the data is low, or when the number of training sample vectors is very large.
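The dependence of the generalization error on the class distance and the number of training samples can be illustrated with a small Monte-Carlo sketch of the Fisher-type (nearest sample-mean) classifier for two spherical Gaussians. The function name, parameters, and the plain-Python estimator below are assumptions for illustration; this simulates the classifier rather than evaluating the paper's analytical formula.

```python
import math
import random

def gen_error(n_train, dim, delta, n_test=20000, seed=0):
    """Monte-Carlo estimate of the generalization error of the nearest
    sample-mean (Fisher, Sigma = I) classifier for two spherical Gaussians
    whose means are `delta` apart along the first axis."""
    rng = random.Random(seed)
    mu = [delta / 2] + [0.0] * (dim - 1)          # class means at +/- mu

    def sample(sign):
        return [sign * m + rng.gauss(0, 1) for m in mu]

    def mean(vectors):
        return [sum(col) / len(vectors) for col in zip(*vectors)]

    # estimate class means from n_train vectors per class
    m_pos = mean([sample(+1) for _ in range(n_train)])
    m_neg = mean([sample(-1) for _ in range(n_train)])
    w = [a - b for a, b in zip(m_pos, m_neg)]     # Fisher direction
    mid = [(a + b) / 2 for a, b in zip(m_pos, m_neg)]

    errors = 0
    for _ in range(n_test):
        sign = 1 if rng.random() < 0.5 else -1
        x = sample(sign)
        score = sum(wi * (xi - mi) for wi, xi, mi in zip(w, x, mid))
        if (score > 0) != (sign > 0):
            errors += 1
    return errors / n_test
```

With many training samples the estimate approaches the Bayes error Φ(−δ/2) of the true discriminant, while with few samples it stays above it, consistent with the error's dependence on the class distance and sample size.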