Pub. online: 5 Aug 2022 · Type: Research Article · Open Access
Journal: Informatica
Volume 16, Issue 2 (2005), pp. 159–174
Abstract
We consider more realistic nonlinear relations for the neural soma and synapses, together with an alternative mean field theory (MFT) approach relevant to strongly interconnected systems such as cortical matter. The usual procedure of averaging over quenched random states in fully connected networks for MFT is based on Boltzmann Machine learning, but this approach requires an unrealistically large number of samples to achieve reliable performance. We propose an alternative MFT with deterministic features in place of a stochastic search for the solution of a large set of equations. Of course, this alternative theory is not strictly valid for an infinite number of elements. A further generalization is the inclusion of an additional term in the effective Hamiltonian, which improves the stochastic hill-climbing search for a solution by helping it avoid local minima of the energy function. In particular, we pay attention to increasing the retrieval capability of neural networks by transforming the replica-symmetric model through the inclusion of different nonlinear elements. Some results of numerical modeling, as well as a broad discussion of the storage capacity of neural systems, are presented.
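The deterministic flavor of mean-field theory mentioned above can be illustrated with a standard Hopfield-style sketch (this is a generic textbook model, not the paper's specific construction; the Hebbian couplings, the inverse temperature `beta`, and all sizes are arbitrary choices for the example). Instead of stochastically sampling spin states, the mean activations are iterated to a fixed point of the deterministic equations m_i = tanh(beta * sum_j J_ij m_j):

```python
import numpy as np

def hebb_couplings(patterns):
    """Hebbian coupling matrix J = (1/N) sum_mu xi^mu (xi^mu)^T, zero diagonal."""
    n = patterns.shape[1]
    J = patterns.T @ patterns / n
    np.fill_diagonal(J, 0.0)
    return J

def mean_field_retrieve(J, m0, beta=2.0, iters=100):
    """Deterministic mean-field iteration: m <- tanh(beta * J m), no sampling."""
    m = m0.astype(float).copy()
    for _ in range(iters):
        m = np.tanh(beta * J @ m)
    return m

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))   # 3 stored patterns, N = 64
J = hebb_couplings(patterns)

# Start from a noisy copy of pattern 0 and let the dynamics clean it up.
cue = patterns[0].copy()
flip = rng.choice(64, size=8, replace=False)
cue[flip] *= -1
m = mean_field_retrieve(J, cue)
overlap = np.mean(np.sign(m) * patterns[0])        # retrieval quality in [-1, 1]
```

At a low memory load (3 patterns in 64 units), the fixed-point iteration recovers the stored pattern from a corrupted cue, which is the behavior whose storage-capacity limits the abstract discusses.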
Journal: Informatica
Volume 25, Issue 3 (2014), pp. 401–414
Abstract
We propose an adaptive inverse control scheme that employs a neural network for the system identification phase and updates its weights online. The theoretical basis of the method is given, and its performance is illustrated by applying it to different control problems, showing that our proposal is able to overcome the difficulties caused by the dynamic nature of the process or by physical changes of the system that substantially modify the process. A comparative experimental study is presented to show the more stable behavior of the proposed method over several working ranges.
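The key ingredient of such a scheme, online identification that tracks a changing plant, can be sketched as follows (everything here is hypothetical: the toy plant `y = gain * sin(u)`, the abrupt gain change, the tiny network, and the learning rate are illustrative choices, and the inverse-control step itself is omitted). One gradient step is taken per incoming sample, so the model re-adapts after the plant changes mid-run:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.5, (8, 1)); b1 = np.zeros((8, 1))   # one hidden layer, 8 units
W2 = rng.normal(0, 0.5, (1, 8)); b2 = np.zeros((1, 1))
lr = 0.05

def model(u):
    h = np.tanh(W1 * u + b1)            # hidden activations for scalar input u
    return (W2 @ h + b2).item(), h

errors = []
for t in range(3000):
    gain = 1.0 + 0.5 * (t > 1500)       # plant changes abruptly mid-run
    u = rng.uniform(-1, 1)
    y = gain * np.sin(u)                # hypothetical nonlinear plant
    yhat, h = model(u)
    e = yhat - y
    # Online backpropagation: one gradient step per incoming sample.
    gW2 = e * h.T; gb2 = np.array([[e]])
    gh = (W2.T * e) * (1 - h**2)
    gW1 = gh * u;  gb1 = gh
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    errors.append(e**2)

early = float(np.mean(errors[1300:1500]))   # identification error before the change
late  = float(np.mean(errors[2800:]))       # error after re-adaptation
```

Both error averages end up small: the model first converges to the original plant and then, because the weights keep updating sample by sample, follows the modified plant as well.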
Journal: Informatica
Volume 22, Issue 4 (2011), pp. 507–520
Abstract
Most classical visualization methods, including multidimensional scaling and its particular case, Sammon's mapping, encounter difficulties when analyzing large data sets. One possible way to solve this problem is the application of artificial neural networks. This paper presents the visualization of large data sets using the feed-forward neural network SAMANN. Its backpropagation-like learning rule was developed to allow a feed-forward artificial neural network to learn Sammon's mapping in an unsupervised way. In its initial form, SAMANN training is computationally expensive. In this paper, we identify conditions that reduce the computational cost of visualizing even large data sets. We show that the original dimensionality of the data can be reduced to a lower one using a small number of iterations. Visualization results on real-world data sets are presented.
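The quantity SAMANN is trained to minimize is Sammon's stress, which compares the pairwise distances d*_ij of the original high-dimensional points with the distances d_ij of their projections, weighting small original distances more heavily: E = (1 / Σ_{i<j} d*_ij) · Σ_{i<j} (d*_ij − d_ij)² / d*_ij. A minimal sketch of the stress itself (not of the SAMANN training procedure; data sizes are arbitrary):

```python
import numpy as np

def pairwise(X):
    """Condensed vector of pairwise Euclidean distances (i < j)."""
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)
    return D[iu]

def sammon_stress(X_high, X_low):
    """Sammon's stress between original points and their low-dim projection."""
    d_star, d = pairwise(X_high), pairwise(X_low)
    return float(np.sum((d_star - d) ** 2 / d_star) / d_star.sum())

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 5))
perfect = sammon_stress(X, X.copy())            # identical distances -> stress 0
crushed = sammon_stress(X, np.zeros((30, 2)))   # all projections coincide -> stress 1
```

The O(n²) pairwise-distance computation inside the stress is exactly why naive training is expensive for large data sets, which motivates the cost-reducing conditions the abstract refers to.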
Journal: Informatica
Volume 15, Issue 4 (2004), pp. 551–564
Abstract
Text categorization – the assignment of natural language documents to one or more predefined categories based on their semantic content – is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and the architecture. This paper discusses the use of a decision tree classifier to initialize a multilayer neural network and thereby improve text categorization accuracy. The path from the root node to a final leaf of the decision tree is used to initialize each individual unit. Growing decision trees with increasingly larger amounts of training data results in larger trees; as a consequence, the neural networks constructed from these trees are often larger and more complex than necessary. An appropriate choice of certainty factor produces trees that are essentially constant in size in the face of increasingly larger training sets. Experimental results support the conclusion that error-based pruning can be used to produce appropriately sized trees, which are directly mapped to a near-optimal neural network architecture with good accuracy. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with a multilayer neural network initialized by the traditional random method and with decision tree classifiers.
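The core of tree-based initialization can be sketched for a single node (a generic illustration of the idea, not the paper's exact mapping; the feature index, threshold, and `scale` are hypothetical). A node testing `x[f] >= t` becomes a sigmoid unit with a large weight on feature f and bias −scale·t, so the unit is close to 1 on one side of the split and close to 0 on the other; every tree node seeds one unit this way, and training then fine-tunes the weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unit_from_split(n_features, feature, threshold, scale=10.0):
    """Initial weights/bias of a hidden unit approximating the test x[feature] >= threshold."""
    w = np.zeros(n_features)
    w[feature] = scale          # steepness of the soft threshold
    b = -scale * threshold
    return w, b

w, b = unit_from_split(n_features=4, feature=2, threshold=0.5)
on  = sigmoid(w @ np.array([0.0, 0.0, 0.9, 0.0]) + b)   # x[2] well above threshold
off = sigmoid(w @ np.array([0.0, 0.0, 0.1, 0.0]) + b)   # x[2] well below threshold
```

Because a pruned tree has few nodes, the resulting network is small, which is why controlling tree size via error-based pruning directly controls network complexity.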
Journal: Informatica
Volume 5, Issues 1-2 (1994), pp. 241–255
Abstract
Neural networks are often characterized as highly nonlinear systems with a fairly large number of parameters (on the order of 10^3–10^4). This makes the optimization of the parameters a nontrivial problem. The surprising fact, however, is that local optimization techniques are widely used and yield reliable convergence in many cases. Since the optimization of neural networks is a high-dimensional, multi-extremal problem, global optimization methods would ordinarily be expected in this setting. On the basis of a perceptron-like unit (the building block of most neural network architectures) we analyze why local optimization techniques are so successful in the field of neural networks. The result is that a linear approximation of the neural network can be sufficient to evaluate the starting point for the local optimization procedure in the nonlinear regime. This result can help in developing faster and more robust algorithms for the optimization of neural network parameters.
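The "linear approximation as a starting point" idea can be sketched for a single tanh unit y = tanh(w·x) (a generic illustration under synthetic, noiseless data; the true weights and step sizes are arbitrary). Mapping the targets through the inverse activation turns the fit into an ordinary least-squares problem, whose solution then seeds a few steps of nonlinear gradient descent:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
w_true = np.array([0.8, -0.6, 0.3])
y = np.tanh(X @ w_true)                       # synthetic, noiseless targets

# Linear stage: arctanh(y) = X w is an ordinary linear system,
# solvable in closed form (clip guards arctanh near saturation).
w0, *_ = np.linalg.lstsq(X, np.arctanh(np.clip(y, -0.999, 0.999)), rcond=None)

# Nonlinear stage: local gradient descent started from the linear solution.
w = w0.copy()
for _ in range(50):
    t = np.tanh(X @ w)
    grad = X.T @ ((t - y) * (1 - t ** 2)) / len(X)
    w -= 0.5 * grad

err = float(np.linalg.norm(w - w_true))       # distance to the true weights
```

With noiseless data the linear stage already lands essentially on the true weights, so the local search only has to refine, which mirrors the abstract's explanation of why purely local optimization works so well in practice.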