Journal: Informatica
Volume 13, Issue 4 (2002), pp. 465–484
Abstract
This article presents research applying artificial neural network (ANN) methods to compound (technical and fundamental) analysis and forecasting of Lithuania's National Stock Exchange (LNSE) indices LITIN, LITIN-A and LITIN-VVP. Initial pre-processing (entropy and correlation analysis) was employed to select model input variables (LNSE indices, macroeconomic indicators, and stock exchange indices of other countries: the USA – Dow Jones and S&P, the EU – Eurex, Russia – RTS). The best approximation and forecasting capabilities were sought using different backpropagation ANN learning algorithms, configurations, iteration numbers, data form-factors, etc. The wide spectrum of results showed a high sensitivity to the ANN parameters. The approximation and forecasting performance of ANN autoregressive, autoregressive-causative and causative trend models was compared by linear discriminant analysis.
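The autoregressive model class compared above can be illustrated with a minimal sketch, a one-hidden-layer network trained by plain backpropagation to predict an index value from its own lagged values. The synthetic series, network size, and training schedule below are our assumptions, not the paper's actual data or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in series (the paper used LNSE index data, not available here).
t = np.arange(300)
series = np.sin(t / 20.0) + 0.05 * rng.standard_normal(t.size)

lags = 5  # predict the next value from the previous five (autoregressive inputs)
X = np.array([series[i:i + lags] for i in range(series.size - lags)])
y = series[lags:]

# One hidden tanh layer, linear output, trained with plain backpropagation.
W1 = rng.standard_normal((lags, 8)) * 0.3
b1 = np.zeros(8)
W2 = rng.standard_normal(8) * 0.3
b2 = 0.0
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
mse_before = np.mean((pred0 - y) ** 2)

for epoch in range(2000):
    h, pred = forward(X)
    err = pred - y                        # gradient of 0.5*MSE w.r.t. output
    dh = np.outer(err, W2) * (1 - h ** 2) # backpropagate through tanh layer
    W2 -= lr * h.T @ err / len(y)
    b2 -= lr * err.mean()
    W1 -= lr * X.T @ dh / len(y)
    b1 -= lr * dh.mean(axis=0)

_, pred = forward(X)
mse_after = np.mean((pred - y) ** 2)
```

The causative variants mentioned in the abstract would simply add exogenous inputs (macroeconomic indicators, foreign indices) alongside the lagged values in `X`.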
Journal: Informatica
Volume 13, Issue 3 (2002), pp. 275–286
Abstract
In this paper, we analyze software implementing self-organizing maps: SOM-PAK, SOM-TOOLBOX, Viscovery SOMine, Nenet, and two academic systems. Most of the packages can be found on the Internet as freeware, shareware, or demo versions. Self-organizing maps assist in clustering data and analyzing data similarities. The packages differ from one another in their implementation and visualization capabilities. Data on coastal dunes and their vegetation in Finland are used for an experimental comparison of the graphical presentation of results by each package. The similarities and differences of the systems, as well as their advantages and shortcomings, are discussed.
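As a reminder of what these packages implement, here is a minimal SOM training loop on toy data. This is a sketch, not code from any of the packages above; the grid size, decay schedules, and cluster data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: three 2-D clusters (a stand-in for the Finnish dune data set).
data = np.vstack([rng.normal(c, 0.1, size=(50, 2))
                  for c in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]])

rows, cols = 6, 6
weights = rng.random((rows * cols, 2))                    # codebook vectors
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

n_steps = 3000
for step in range(n_steps):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))     # best-matching unit
    frac = step / n_steps
    lr = 0.5 * (1.0 - frac)                               # decaying learning rate
    sigma = 3.0 * (1.0 - frac) + 0.3                      # shrinking neighborhood
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)            # grid distances to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))                    # Gaussian neighborhood
    weights += lr * h[:, None] * (x - weights)

# Quantization error: mean distance from each point to its nearest codebook vector.
qe = np.mean([np.min(np.linalg.norm(weights - x, axis=1)) for x in data])
```

The visualization features that distinguish the compared packages (U-matrix views, component planes, labeling) are all built on top of a trained codebook like the one produced here.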
Journal: Informatica
Volume 11, Issue 2 (2000), pp. 219–232
Abstract
Color constancy is the perceived stability of the color of objects under different illuminants. A four-layer neural network for color constancy has been developed. It has separate input channels for the test chip and for the background. The network input consisted of RGB receptors, the second layer consisted of color-opponent cells, and the output layer had three neurons signaling the x, y, Y coordinates (1931 CIE). The network was trained with the back-propagation algorithm. Nine wide-spectrum illuminants were used for training and testing. The neural network was able to achieve color constancy. Supplying the background coordinates as input and the nonlinearity of the network had a crucial influence on training.
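The output coordinates mentioned are those of the CIE 1931 system, where the chromaticity coordinates (x, y) and luminance Y derive from the tristimulus values X, Y, Z. A small helper (the function name is ours) makes the relation explicit:

```python
def xyy_from_xyz(X, Y, Z):
    """CIE 1931: chromaticity coordinates x, y plus luminance Y.

    x = X / (X + Y + Z),  y = Y / (X + Y + Z); Y passes through as luminance.
    Assumes a non-black stimulus (X + Y + Z > 0).
    """
    s = X + Y + Z
    return X / s, Y / s, Y
```

For an equal-energy stimulus (X = Y = Z), this yields x = y = 1/3, the chromaticity of the equal-energy illuminant E; an illuminant-invariant network output in (x, y) is precisely what "color constancy" means here.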
Journal: Informatica
Volume 9, Issue 4 (1998), pp. 415–424
Abstract
A comparative study of the recognition of nonsemantic geometrical figures by human subjects and by an ART neural network was carried out. The results of computer simulation experiments with the ART neural network corresponded well with the psychophysical data on the recognition of visual patterns of varying complexity: in both cases, the patterns of medium complexity were recognized with the highest accuracy. By contrast, recognition of the patterns from their informative fragments revealed different recognition strategies in natural and artificial neural systems. For biological systems, the presence of not only distinctive but also redundant features in visual patterns is necessary for successful recognition. The ART neural network ignores redundant features and recognizes visual patterns with equal accuracy whether the whole pattern or only an informative fragment of any completeness is presented.
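The clustering behavior described, creating a new category whenever no stored prototype matches the input closely enough, can be sketched as a minimal ART-1-style procedure for binary patterns. This is a simplified illustration under assumed vigilance and fast-learning rules, not the simulation used in the study.

```python
import numpy as np

def art1_cluster(patterns, vigilance=0.7):
    """Minimal ART-1-style sketch: binary patterns, fast learning."""
    prototypes = []   # learned binary prototype vectors (category templates)
    labels = []       # category index assigned to each input pattern
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        # Rank stored categories by choice value (overlap relative to prototype size).
        order = sorted(range(len(prototypes)),
                       key=lambda j: -((prototypes[j] & p).sum()
                                       / (prototypes[j].sum() + 0.5)))
        for j in order:
            match = (prototypes[j] & p).sum() / p.sum()
            if match >= vigilance:                  # vigilance test: resonance
                prototypes[j] = prototypes[j] & p   # fast learning: intersect
                labels.append(j)
                break
        else:
            prototypes.append(p)                    # no resonance: new category
            labels.append(len(prototypes) - 1)
    return labels, prototypes
```

Because learning intersects the prototype with each accepted input, features that vary across category members (the "redundant" ones) are pruned away, which matches the abstract's observation that the ART network ignores redundant features.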
Journal: Informatica
Volume 7, Issue 4 (1996), pp. 525–541
Abstract
In the present paper, the method of structure analysis for multivariate functions is applied to rational approximation in classification problems. The pattern recognition and generalisation ability of the approach is then investigated experimentally on numeral recognition, and a comparison with the Hopfield network is carried out. The overall results of the new approach may be regarded as a success.
Journal: Informatica
Volume 5, Issues 1–2 (1994), pp. 241–255
Abstract
Neural networks are often characterized as highly nonlinear systems with a fairly large number of parameters (on the order of 10³–10⁴). This makes the optimization of the parameters a nontrivial problem. The astonishing fact, however, is that local optimization techniques are widely used and yield reliable convergence in many cases. Since the optimization of neural networks is a high-dimensional, multi-extremal problem, global optimization methods would ordinarily be required. On the basis of a perceptron-like unit (the building block of most neural network architectures), we analyze why local optimization techniques are so successful in the field of neural networks. The result is that a linear approximation of the neural network can be sufficient to evaluate the start point for the local optimization procedure in the nonlinear regime. This result can help in developing faster and more robust algorithms for the optimization of neural network parameters.
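The idea can be illustrated on a single perceptron-like unit (a toy sketch; the data, unit, and training schedule are our assumptions, not the paper's analysis): a least-squares fit of the linearized unit (tanh u ≈ u) supplies the start point, which local gradient descent then refines in the nonlinear regime.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy task: fit a perceptron-like unit y = tanh(w·x + b) to noisy samples.
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true, b_true = np.array([1.0, -2.0, 0.5]), 0.3
y = np.tanh(X @ w_true + b_true) + 0.05 * rng.standard_normal(n)

# Step 1: linearize tanh(u) ~ u and solve the resulting least-squares problem
# in closed form; its solution serves as the start point.
A = np.hstack([X, np.ones((n, 1))])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = theta[:d], theta[d]

def mse(w, b):
    return np.mean((np.tanh(X @ w + b) - y) ** 2)

mse_start = mse(w, b)

# Step 2: local optimization (plain gradient descent) from the linear start.
lr = 0.2
for _ in range(1000):
    u = X @ w + b
    r = np.tanh(u) - y
    g = r * (1.0 - np.tanh(u) ** 2)   # backpropagate through the tanh nonlinearity
    w -= lr * (X.T @ g) / n
    b -= lr * g.mean()

mse_final = mse(w, b)
```

The linear fit cannot itself capture the saturation of the unit, but it lands in a basin from which the purely local procedure converges, which is the mechanism the abstract proposes for the success of local optimization.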