Journal:Informatica
Volume 20, Issue 2 (2009), pp. 273–292
Abstract
The paper studies stochastic optimization problems in Reproducing Kernel Hilbert Spaces (RKHS). The objective function of such problems is a mathematical expectation functional depending on decision rules (or strategies), i.e., on functions of observed random parameters. Feasible rules are restricted to belong to an RKHS. Problems of this kind arise in on-line decision making and in statistical learning theory. We solve the problem by sample average approximation combined with Tikhonov regularization and establish sufficient conditions for uniform convergence of approximate solutions with probability one, together with a rule for downward adjustment of the regularization factor as the sample size increases.
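A minimal sketch of the approach described in the abstract, sample average approximation with Tikhonov regularization over an RKHS: the Gaussian kernel, the squared loss, and the lam = n**-0.5 decay rule below are illustrative assumptions, not the paper's exact conditions. Under these choices the representer theorem reduces the regularized empirical problem to a kernel ridge linear system.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Gram matrix of the RBF kernel k(a, b) = exp(-gamma * (a - b)^2)
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-gamma * d2)

def saa_tikhonov_rule(x, y, lam, gamma=1.0):
    # Minimize (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2 over the RKHS.
    # By the representer theorem, f(.) = sum_i alpha_i k(x_i, .) with
    #   alpha = (K + n * lam * I)^{-1} y.
    n = len(x)
    K = gaussian_kernel(x, x, gamma)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return lambda t: gaussian_kernel(np.atleast_1d(t), x, gamma) @ alpha

rng = np.random.default_rng(0)
true_rule = np.sin  # unknown optimal decision rule (toy stand-in)
for n in (50, 200, 800):
    x = rng.uniform(-3, 3, size=n)
    y = true_rule(x) + 0.1 * rng.standard_normal(n)
    lam = n ** -0.5  # downward adjustment of the regularization factor
    f = saa_tikhonov_rule(x, y, lam)
    grid = np.linspace(-3, 3, 200)
    print(n, round(np.max(np.abs(f(grid) - true_rule(grid))), 3))
```

The loop mimics the uniform-convergence statement empirically: as the sample size grows and the regularization factor is adjusted downward, the sup-norm error of the approximate rule should shrink.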
Journal:Informatica
Volume 18, Issue 4 (2007), pp. 603–614
Abstract
The paper considers the application of stochastic optimization to a system for automatic recognition of the ischemic stroke area on computed tomography (CT) images. The recognition algorithm depends on five inputs that influence the results of automatic detection. The quality of recognition is measured by the size of the intersection of the etalon (reference) image and the image computed by the automatic detection program. A Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm with the Metropolis rule was applied to optimize the quality of image recognition. A Monte Carlo simulation experiment was performed to evaluate the properties of the developed algorithm.
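A minimal sketch of SPSA combined with a Metropolis acceptance rule. The true objective here would require CT images and the detection program, so a noisy quadratic in five inputs stands in for the recognition-quality criterion; the gain sequences, the cooling schedule, and the stand-in objective are all assumptions for illustration.

```python
import numpy as np

def spsa_metropolis(loss, theta0, n_iter=500, a=0.1, c=0.1, temp=1.0, seed=0):
    # SPSA gradient estimate plus a Metropolis acceptance step:
    # a candidate that worsens the (noisy) loss is still accepted
    # with probability exp(-increase / temperature).
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    f_cur = loss(theta)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602              # standard SPSA gain sequences
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher perturbation
        g = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck) / delta
        cand = theta - ak * g
        f_cand = loss(cand)
        t = temp / k                     # cooling schedule (assumption)
        if f_cand <= f_cur or rng.random() < np.exp(-(f_cand - f_cur) / t):
            theta, f_cur = cand, f_cand
    return theta

obj_rng = np.random.default_rng(1)
def noisy_quality(p):
    # hypothetical stand-in: optimum at p = 1 in all five coordinates,
    # observed through simulation noise (as in a Monte Carlo experiment)
    return np.sum((p - 1.0) ** 2) + 0.01 * obj_rng.standard_normal()

theta = spsa_metropolis(noisy_quality, np.zeros(5))
print(np.round(theta, 2))
```

SPSA needs only two noisy objective evaluations per iteration regardless of dimension, which is why it suits a five-input detector whose quality can only be measured by running the whole recognition pipeline.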
Journal:Informatica
Volume 3, Issue 1 (1992), pp. 98–118
Abstract
We consider a class of identification algorithms for distributed parameter systems. Utilizing stochastic optimization techniques, sequences of estimators are constructed by minimizing appropriate functionals. The main effort is to develop weak and strong invariance principles for the underlying algorithms. By means of weak convergence methods, a functional central limit theorem is established. Using the Skorokhod embedding, a strong invariance principle is obtained. These invariance principles provide precise convergence rates for the parameter estimates, yielding important information for experimental design.