Pub. online: 1 Jan 2013 · Type: Research Article · Open Access
Volume 24, Issue 2 (2013), pp. 315–337
We consider a generalization of heterogeneous meta-programs by (1) introducing an extra level of abstraction within the meta-program structure and (2) defining meta-program transformations. We define basic terms, formalize transformation tasks, and consider properties of meta-program transformations and rules for managing complexity through the following transformation processes: (1) reverse transformation, in which a correct one-stage meta-program M1 is transformed into an equivalent two-stage meta-meta-program M2; (2) two-stage forward transformation, in which M2 is transformed into a set of meta-programs, and each meta-program is transformed into a set of target programs. The results are as follows: (a) a formalization of the transformation processes within the heterogeneous meta-programming paradigm; (b) the introduction and validation of equivalent transformations of meta-programs into meta-meta-programs and vice versa; (c) the introduction of metrics for evaluating the complexity of meta-specifications. The results are validated by examples, theoretical reasoning and experiments.
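The two-stage forward transformation described above can be illustrated with a minimal sketch. All names here are hypothetical, not the paper's notation: a meta-meta-program M2 is modeled as a template over two parameters, a first-stage substitution yields a meta-program (still a template), and a second-stage substitution yields target-program source text.

```python
# Hypothetical sketch of two-stage heterogeneous meta-programming:
# stage one binds the higher-level parameter of M2, producing a
# meta-program; stage two binds the remaining parameter, producing
# a concrete target program.
from string import Template

# M2: a template over both the operation ($op) and the operand ($value).
M2 = Template("def apply(x):\n    return x $op $value\n")

def stage_one(op):
    """Forward transformation, stage 1: M2 -> meta-program (a template)."""
    return Template(M2.safe_substitute(op=op))

def stage_two(meta_program, value):
    """Forward transformation, stage 2: meta-program -> target program source."""
    return meta_program.substitute(value=value)

adder_meta = stage_one("+")            # meta-program: the family of adders
target_src = stage_two(adder_meta, 3)  # target program: adds 3

namespace = {}
exec(target_src, namespace)
print(namespace["apply"](10))  # -> 13
```

Varying the stage-one argument generates a set of meta-programs; varying the stage-two argument generates a set of target programs from each, mirroring the one-to-many structure of the forward transformations.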
Pub. online: 1 Jan 1999 · Type: Research Article · Open Access
Volume 10, Issue 2 (1999), pp. 245–269
Structurization of the sample covariance matrix reduces the number of parameters to be estimated and, when the structurization assumptions are correct, improves the small-sample properties of a statistical linear classifier. Structured estimates of the sample covariance matrix are used to decorrelate and scale the data, and a single-layer perceptron classifier is then trained on the transformed data. In most of the ten real-world pattern classification problems tested, the structurization methodology, applied together with the data transformations and the subsequent use of an optimally stopped single-layer perceptron, yielded a significant gain over the best statistical linear classifier, regularized discriminant analysis.
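The pipeline of structured covariance estimation, data transformation, and perceptron training can be sketched in miniature. The sketch below assumes the simplest structurization, a diagonal covariance model (per-feature scaling); the paper's actual structures and stopping rule are not reproduced, and all names are illustrative.

```python
# Minimal sketch: estimate a diagonal (structured) covariance, use it to
# scale the data, then train a single-layer perceptron on the result.
import math
import random

def diagonal_scale(X):
    """Whiten each feature using a diagonal structured covariance estimate."""
    d = len(X[0])
    means = [sum(x[j] for x in X) / len(X) for j in range(d)]
    stds = [math.sqrt(sum((x[j] - means[j]) ** 2 for x in X) / len(X)) or 1.0
            for j in range(d)]
    return [[(x[j] - means[j]) / stds[j] for j in range(d)] for x in X]

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Plain single-layer perceptron; labels y are in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * s <= 0:  # misclassified: perceptron update
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

# Toy two-class data: feature 0 is informative, feature 1 is noisy.
random.seed(0)
X = [[random.gauss(-2, 1), random.gauss(0, 5)] for _ in range(50)] + \
    [[random.gauss(2, 1), random.gauss(0, 5)] for _ in range(50)]
y = [-1] * 50 + [1] * 50

Xs = diagonal_scale(X)
w, b = train_perceptron(Xs, y)
acc = sum((sum(wj * xj for wj, xj in zip(w, xi)) + b > 0) == (yi > 0)
          for xi, yi in zip(Xs, y)) / len(y)
print(f"training accuracy: {acc:.2f}")
```

Scaling by a structured covariance estimate keeps the number of estimated parameters linear in the dimension, which is the source of the small-sample advantage the abstract refers to.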
Pub. online: 1 Jan 1993 · Type: Research Article · Open Access
Volume 4, Issues 3-4 (1993), pp. 360–383
An analytical equation for the generalization error of the minimum empirical error classifier is derived for the case where the true classes are spherically Gaussian. It is compared with the generalization error of a mean-squared-error classifier, the standard Fisher linear discriminant function. For spherically distributed classes, the generalization error depends on the distance between the classes and on the number of training samples. It depends on the intrinsic dimensionality of the data only via the initialization of the weight vector; if initialization is successful, the dimensionality does not affect the generalization error. It is concluded that artificial neural nets are advantageous for classifying patterns in a changing environment, when the intrinsic dimensionality of the data is low, or when the number of training sample vectors is very large.
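The dependence of the generalization error on class distance and training-set size can be checked empirically. The sketch below is a Monte Carlo estimate, not the paper's analytical derivation: for spherically Gaussian classes the Fisher direction reduces to the difference of sample means, and the error is estimated on a held-out test set. All parameter names are illustrative.

```python
# Monte Carlo sketch: generalization error of a Fisher-type linear
# discriminant for two spherically Gaussian classes at distance delta,
# trained on N samples per class.
import random

def gen_error(delta, N, dim=10, n_test=2000, seed=1):
    rng = random.Random(seed)
    mu = [delta / 2] + [0.0] * (dim - 1)  # class means at +mu and -mu

    def sample(sign, n):
        return [[sign * mu[j] + rng.gauss(0, 1) for j in range(dim)]
                for _ in range(n)]

    tr_pos, tr_neg = sample(+1, N), sample(-1, N)
    # Under spherical covariance the Fisher direction is the
    # difference of the sample means.
    w = [sum(x[j] for x in tr_pos) / N - sum(x[j] for x in tr_neg) / N
         for j in range(dim)]
    errs = 0
    for sign in (+1, -1):
        for x in sample(sign, n_test):
            s = sum(wj * xj for wj, xj in zip(w, x))
            errs += (s > 0) != (sign > 0)
    return errs / (2 * n_test)

# Error shrinks as the class distance delta grows and as N grows.
print(gen_error(delta=2.0, N=10))
print(gen_error(delta=2.0, N=200))
print(gen_error(delta=4.0, N=200))
```

This reproduces the qualitative claim of the abstract: with spherical classes, the estimated error is driven by the inter-class distance and the sample size rather than by the nominal dimension.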