Journal: Informatica
Volume 24, Issue 2 (2013), pp. 315–337
Abstract
We consider a generalization of heterogeneous meta-programs by (1) introducing an extra level of abstraction within the meta-program structure, and (2) defining meta-program transformations. We define the basic terms, formalize the transformation tasks, and consider properties of meta-program transformations and rules to manage complexity through the following transformation processes: (1) reverse transformation, in which a correct one-stage meta-program M1 is transformed into the equivalent two-stage meta-meta-program M2; (2) two-stage forward transformation, in which M2 is transformed into a set of meta-programs, and each meta-program is in turn transformed into a set of target programs. The results are as follows: (a) formalization of the transformation processes within the heterogeneous meta-programming paradigm; (b) introduction and validation of equivalent transformations of meta-programs into meta-meta-programs and vice versa; (c) introduction of metrics to evaluate the complexity of meta-specifications. The results are validated by examples, theoretical reasoning and experiments.
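A minimal sketch of the two-stage scheme described above, assuming Python as the meta-language and C as the target language; the names meta_meta_program and meta_program are illustrative, not taken from the paper. Resolving the first-stage parameters of the meta-meta-program M2 yields a one-stage meta-program, and resolving the second-stage parameter of each meta-program yields a concrete target program.

def meta_meta_program(op_name: str, op_symbol: str):
    """Stage 1: the meta-meta-program M2. Fixing the first-stage
    parameters (operation name and symbol) yields a meta-program M1."""
    def meta_program(type_name: str) -> str:
        """Stage 2: the meta-program M1. Fixing the second-stage
        parameter (element type) yields a concrete C target program."""
        return (
            f"{type_name} {op_name}_{type_name}"
            f"({type_name} a, {type_name} b) {{\n"
            f"    return a {op_symbol} b;\n"
            f"}}\n"
        )
    return meta_program

# Forward transformation: M2 -> set of meta-programs -> set of target programs.
for name, sym in [("add", "+"), ("mul", "*")]:
    m1 = meta_meta_program(name, sym)          # one-stage meta-program
    for c_type in ["int", "float", "double"]:
        print(m1(c_type))                      # concrete C function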
Journal: Informatica
Volume 7, Issue 2 (1996), pp. 137–154
Abstract
There exist two principally different approaches to designing a classification rule. In the classical (parametric) approach, one parametrizes the conditional density functions of the pattern classes. In the second (nonparametric) approach, one parametrizes the type of the discriminant function and minimizes an empirical classification error to find the unknown coefficients of the discriminant function. A number of asymptotic expansions exist for the expected probability of misclassification of parametric classifiers; for nonparametric classifiers, only error bounds have been available so far. In this paper an exact analytical expression for the expected error EP_N of the nonparametric linear zero empirical error classifier is derived for the case when the distributions of the pattern classes are spherically Gaussian. The asymptotic expansion of EP_N is obtained for the case when both the number of learning patterns N and their dimensionality p increase without bound. Tables of exact and approximate expected errors as functions of N, the dimensionality p and the distance δ between the pattern classes are presented and compared with the expected error of Fisher's linear classifier; they indicate that the minimum empirical error classifier can be used even in cases where the dimensionality exceeds the number of learning examples.
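The following Monte Carlo sketch illustrates the comparison the abstract describes, under stated assumptions: spherically Gaussian classes at distance δ, a zero empirical error classifier obtained by running a perceptron to zero training error, and Fisher's classifier computed with a pseudo-inverse when p ≥ N. It estimates the expected error EP_N by averaging the exact conditional error over random training sets; it is an illustrative simulation, not the paper's analytical derivation.

import math
import numpy as np

rng = np.random.default_rng(0)

def true_error(w, b, mu1, mu2):
    """Exact misclassification probability of sign(w.x + b) for classes
    N(mu1, I) (label +1) and N(mu2, I) (label -1), equal priors."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    nw = np.linalg.norm(w)
    return 0.5 * phi(-(w @ mu1 + b) / nw) + 0.5 * phi((w @ mu2 + b) / nw)

def zero_error_classifier(X, y, max_epochs=1000):
    """Perceptron run until the empirical error is zero: one simple way to
    obtain a zero empirical error linear classifier when the training set
    is separable (typical for p > N)."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:
                w += yi * xi; b += yi; mistakes += 1
        if mistakes == 0:
            break
    return w, b

def fisher_classifier(X, y):
    """Sample Fisher linear discriminant; the pseudo-inverse handles the
    singular pooled covariance that arises when p >= N."""
    m1, m2 = X[y == 1].mean(0), X[y == -1].mean(0)
    Xc = np.vstack([X[y == 1] - m1, X[y == -1] - m2])
    S = Xc.T @ Xc / max(len(X) - 2, 1)
    w = np.linalg.pinv(S) @ (m1 - m2)
    return w, -w @ (m1 + m2) / 2.0

# Expected error EP_N: dimensionality p exceeds the number of training
# vectors N; delta is the distance between the class means.
p, N, delta, trials = 50, 20, 3.0, 200
mu1 = np.zeros(p); mu1[0] = delta / 2.0
mu2 = -mu1
errs_zee, errs_fisher = [], []
for _ in range(trials):
    X = np.vstack([rng.normal(mu1, 1.0, (N // 2, p)),
                   rng.normal(mu2, 1.0, (N // 2, p))])
    y = np.array([1] * (N // 2) + [-1] * (N // 2))
    errs_zee.append(true_error(*zero_error_classifier(X, y), mu1, mu2))
    errs_fisher.append(true_error(*fisher_classifier(X, y), mu1, mu2))
print(f"EP_N (zero empirical error): {np.mean(errs_zee):.3f}")
print(f"EP_N (Fisher, pinv):         {np.mean(errs_fisher):.3f}")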
Journal: Informatica
Volume 4, Issues 3–4 (1993), pp. 360–383
Abstract
An analytical equation for the generalization error of the minimum empirical error classifier is derived for the case when the true classes are spherically Gaussian. It is compared with the generalization error of a mean squared error classifier, i.e., the standard Fisher linear discriminant function. In the case of spherically distributed classes the generalization error depends on the distance between the classes and the number of training samples. It depends on the intrinsic dimensionality of the data only via the initialization of the weight vector. If the initialization is successful, the dimensionality does not affect the generalization error. It is concluded that advantageous conditions for using artificial neural nets arise when classifying patterns in a changing environment, when the intrinsic dimensionality of the data is low, or when the number of training sample vectors is very large.
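A small numerical sketch of the initialization effect, assuming symmetric spherically Gaussian classes and a perceptron-style search for a minimum empirical error weight vector; the two initializations compared (the sample mean difference as a "successful" start versus a random start) are illustrative choices, not the paper's exact experimental protocol.

import math
import numpy as np

rng = np.random.default_rng(1)
phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gen_error(w, mu):
    # Exact error of sign(w.x) for classes N(+mu, I) and N(-mu, I).
    return phi(-(w @ mu) / np.linalg.norm(w))

def train_perceptron(X, y, w0, epochs=50):
    w = w0.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
        if np.all(y * (X @ w) > 0):   # stop at zero empirical error
            break
    return w

N, delta = 20, 3.0
for p in (10, 50, 200):
    mu = np.zeros(p); mu[0] = delta / 2.0
    X = np.vstack([rng.normal(mu, 1.0, (N // 2, p)),
                   rng.normal(-mu, 1.0, (N // 2, p))])
    y = np.array([1] * (N // 2) + [-1] * (N // 2))
    m_diff = X[y == 1].mean(0) - X[y == -1].mean(0)   # "successful" start
    w_good = train_perceptron(X, y, m_diff)
    w_rand = train_perceptron(X, y, rng.normal(size=p))
    print(f"p={p:4d}  error(good init)={gen_error(w_good, mu):.3f}"
          f"  error(random init)={gen_error(w_rand, mu):.3f}")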
Journal: Informatica
Volume 3, Issue 3 (1992), pp. 301–337
Abstract
Small training sample effects common in statistical classification and artificial neural network classifier design are discussed. A review of known small sample results is presented, and the peaking phenomena related to increases in the number of features and the number of neurons are discussed.
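A minimal simulation of the peaking phenomenon the review discusses, under the assumption of two spherical Gaussian classes in which each added feature carries progressively less discriminatory information, so that with the training set size held fixed, adding features eventually hurts a sample-based Fisher classifier.

import math
import numpy as np

rng = np.random.default_rng(2)
phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

N, p_max, trials = 20, 40, 300
# Class means +/- mu: component k carries signal proportional to 1/(k+1),
# so extra features remain informative but progressively less so.
mu_full = 1.0 / (1.0 + np.arange(p_max))

for p in (2, 5, 10, 20, 40):
    mu = mu_full[:p]
    errors = []
    for _ in range(trials):
        X = np.vstack([rng.normal(mu, 1.0, (N // 2, p)),
                       rng.normal(-mu, 1.0, (N // 2, p))])
        y = np.array([1] * (N // 2) + [-1] * (N // 2))
        m1, m2 = X[y == 1].mean(0), X[y == -1].mean(0)
        Xc = np.vstack([X[y == 1] - m1, X[y == -1] - m2])
        S = Xc.T @ Xc / (N - 2)
        w = np.linalg.pinv(S) @ (m1 - m2)        # sample Fisher direction
        errors.append(phi(-(w @ mu) / np.linalg.norm(w)))
    print(f"p={p:3d}  mean generalization error={np.mean(errors):.3f}")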