Journal: Informatica
Volume 27, Issue 2 (2016), pp. 257–281
Abstract
The estimation of the intrinsic dimensionality of high-dimensional data remains a challenging issue, and various approaches to interpreting and estimating it have been developed. Referring to two common classifications of intrinsic dimensionality estimators (local versus global estimators, and projection techniques versus geometric approaches), we focus on the fractal-based methods, which belong to the global estimators and the geometric approaches. The computational aspects of estimating the intrinsic dimensionality of high-dimensional data are the core issue of this paper. The advantages and disadvantages of the fractal-based methods are discussed, and their applications are presented briefly.
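The abstract does not single out one particular fractal-based estimator, so the following is only a minimal sketch of a well-known representative, the correlation dimension (a global, geometric, fractal-based estimator). The function names, the choice of radii and the test data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not from the paper): the correlation dimension,
# a classic fractal-based, global estimator of intrinsic dimensionality.
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(X, radii):
    """Slope of log C(r) versus log r, where C(r) is the fraction of
    point pairs in X whose distance is below r."""
    d = pdist(X)                                   # all pairwise distances
    C = np.array([np.count_nonzero(d < r) / d.size for r in radii])
    mask = C > 0                                   # avoid log(0)
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 2-D Gaussian data embedded linearly in 10-D space: intrinsic dimension is 2
    X = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 10))
    # Evaluate the correlation integral in the small-radius regime
    radii = np.quantile(pdist(X), np.linspace(0.005, 0.1, 20))
    print(correlation_dimension(X, radii))         # prints an estimate close to 2
```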
Journal: Informatica
Volume 24, Issue 2 (2013), pp. 315–337
Abstract
We consider a generalization of heterogeneous meta-programs by (1) introducing an extra level of abstraction within the meta-program structure and (2) introducing meta-program transformations. We define the basic terms, formalize the transformation tasks, and consider the properties of meta-program transformations and the rules for managing complexity through the following transformation processes: (1) a reverse transformation, in which a correct one-stage meta-program M1 is transformed into an equivalent two-stage meta-meta-program M2; (2) a two-stage forward transformation, in which M2 is transformed into a set of meta-programs and each meta-program is transformed into a set of target programs. The results are as follows: (a) a formalization of the transformation processes within the heterogeneous meta-programming paradigm; (b) the introduction and validation of equivalent transformations of meta-programs into meta-meta-programs and vice versa; (c) the introduction of metrics to evaluate the complexity of meta-specifications. The results are validated by examples, theoretical reasoning and experiments.
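The paper's formalization is language-agnostic; purely as a hypothetical illustration of the staging idea (not the authors' notation or transformation rules), the sketch below contrasts a one-stage meta-program M1, which resolves all parameters at once, with an equivalent two-stage meta-meta-program M2, whose first stage emits a meta-program that emits the same target programs. All names (m1, m2, meta_program) are invented for this sketch.

```python
# Hypothetical illustration of one-stage vs. two-stage meta-programs.
def m1(lang: str, greeting: str) -> str:
    """One-stage meta-program: both parameters are resolved at once."""
    if lang == "c":
        return f'#include <stdio.h>\nint main(void){{puts("{greeting}");return 0;}}'
    return f'print("{greeting}")'              # default target: Python

def m2(lang: str):
    """Two-stage meta-meta-program: the first stage fixes the target
    language and returns a meta-program parameterized by the greeting."""
    def meta_program(greeting: str) -> str:
        return m1(lang, greeting)
    return meta_program

if __name__ == "__main__":
    # Forward transformation in two stages: M2 -> meta-program -> target program
    py_meta = m2("python")
    # Equivalence of the one-stage and two-stage specifications on this input
    assert py_meta("hello") == m1("python", "hello")
    print(py_meta("hello"))
```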
Journal: Informatica
Volume 18, Issue 2 (2007), pp. 279–288
Abstract
In this paper, an exact and complete analysis of the Lloyd–Max algorithm and its initialization is carried out. An effective method for initializing the Lloyd–Max algorithm for the optimal scalar quantization of a Laplacian source is proposed. The proposed method is a very simple way of making an intelligent guess of the starting points for the iterative Lloyd–Max algorithm: the initial values can be determined from the values of the compandor's parameters. It is demonstrated that, following this logic, the proposed method provides rapid convergence of the Lloyd–Max algorithm.
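The abstract does not give the companding-based initialization formula itself, so the sketch below only illustrates the general scheme: a standard Lloyd–Max iteration for a unit-variance Laplacian source, warm-started from Laplacian quantiles as a companding-style stand-in for the compandor-derived initial values proposed by the authors.

```python
# Hedged sketch: generic Lloyd-Max iteration for a unit-variance Laplacian
# source. The quantile-based warm start below is NOT the authors' exact
# compandor-parameter formula, only a simple stand-in.
import numpy as np
from scipy.stats import laplace
from scipy.integrate import quad

def lloyd_max_laplacian(n_levels, n_iter=50):
    dist = laplace(scale=1 / np.sqrt(2))            # unit-variance Laplacian
    # Companding-style initialization: levels at evenly spaced quantiles
    y = dist.ppf((np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        # Nearest-neighbour condition: thresholds are midpoints of levels
        t = np.concatenate(([-np.inf], (y[:-1] + y[1:]) / 2, [np.inf]))
        # Centroid condition: each level is the conditional mean of its cell
        for i in range(n_levels):
            p, _ = quad(dist.pdf, t[i], t[i + 1])
            m, _ = quad(lambda x: x * dist.pdf(x), t[i], t[i + 1])
            if p > 0:
                y[i] = m / p
    return y, t

if __name__ == "__main__":
    levels, thresholds = lloyd_max_laplacian(8)
    print(np.round(levels, 4))
```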
Journal: Informatica
Volume 10, Issue 2 (1999), pp. 245–269
Abstract
Structurization of the sample covariance matrix reduces the number of parameters to be estimated and, when the structurization assumptions are correct, improves the small-sample properties of a statistical linear classifier. Structured estimates of the sample covariance matrix are used to decorrelate and scale the data, and a single-layer perceptron classifier is trained afterwards. In most of the ten real-world pattern classification problems tested, the structurization methodology, applied together with the data transformations and the subsequent use of an optimally stopped single-layer perceptron, resulted in a significant gain over the best statistical linear classifier, the regularized discriminant analysis.
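As a rough, hedged sketch of the pipeline the abstract describes, the code below uses a plain diagonal covariance model as a stand-in for the paper's structurization assumptions (and omits the optimal-stopping rule): the structured estimate scales the data, and a single-layer perceptron is then trained. All function names and data are illustrative.

```python
# Hedged sketch of the pipeline: structured (here simply diagonal) covariance
# estimate -> decorrelate/scale the data -> train a single-layer perceptron.
import numpy as np

def whiten_diagonal(X):
    """Scale each feature by its standard deviation (diagonal covariance model)."""
    mu = X.mean(axis=0)
    std = X.std(axis=0) + 1e-12          # guard against zero variance
    return (X - mu) / std, mu, std

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Plain single-layer perceptron with labels y in {-1, +1}."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias input
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:                  # misclassified sample
                w += lr * yi * xi
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1, 2, (50, 5)), rng.normal(1, 2, (50, 5))])
    y = np.array([-1] * 50 + [1] * 50)
    Xw, mu, std = whiten_diagonal(X)
    w = train_perceptron(Xw, y)
    acc = np.mean(np.sign(np.hstack([Xw, np.ones((100, 1))]) @ w) == y)
    print(f"training accuracy: {acc:.2f}")
```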
Journal: Informatica
Volume 3, Issue 3 (1992), pp. 301–337
Abstract
Small training sample effects common in statistical classification and artificial neural network classifier design are discussed. A review of known small-sample results is presented, and the peaking phenomena related to increasing the number of features and the number of neurons are discussed.
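As a purely illustrative aside (not material from the review), the simulation below reproduces the flavour of the peaking phenomenon: with a fixed, small training sample and features of decreasing informativeness, the test error of a simple nearest-mean linear classifier first decreases and then increases as features are added. The classifier, the data model and all parameter values are assumptions made for this sketch.

```python
# Illustrative simulation of the peaking phenomenon (not from the review).
import numpy as np

def test_error(n_train, n_test, dims, delta=1.0, trials=50, seed=0):
    rng = np.random.default_rng(seed)
    errors = []
    for p in dims:
        mean = delta / np.sqrt(np.arange(1, p + 1))    # decreasing informativeness
        errs = []
        for _ in range(trials):
            X0 = rng.normal(-mean, 1.0, (n_train, p))  # class 0 training sample
            X1 = rng.normal(mean, 1.0, (n_train, p))   # class 1 training sample
            w = X1.mean(axis=0) - X0.mean(axis=0)      # estimated discriminant direction
            b = -(X1.mean(axis=0) + X0.mean(axis=0)) @ w / 2
            T0 = rng.normal(-mean, 1.0, (n_test, p))   # independent test samples
            T1 = rng.normal(mean, 1.0, (n_test, p))
            errs.append(np.mean(T0 @ w + b > 0) / 2 + np.mean(T1 @ w + b <= 0) / 2)
        errors.append(np.mean(errs))
    return errors

if __name__ == "__main__":
    dims = [1, 2, 5, 10, 20, 50, 100, 200, 500]
    for p, e in zip(dims, test_error(n_train=10, n_test=500, dims=dims)):
        print(f"features={p:4d}  mean test error={e:.3f}")
```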