Journal:Informatica
Volume 18, Issue 2 (2007), pp. 203–216
Abstract
In this paper, information theory, interpreted in terms of the neural network systems of the brain, is considered for conveying and storing information. Using probability theory and specific properties of neural systems, some foundations are presented. The proposed neural network model and the computational experiments allow us to conclude that such an approach can be applied to the storage, coding, and transmission of information.
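The abstract does not name a specific network model; as a hedged illustration of how information can be stored in and recovered from neural weights, the minimal sketch below implements a classical Hopfield-style associative memory. The model choice, pattern sizes, and update rule are assumptions for illustration, not the authors' construction.

    import numpy as np

    def store(patterns):
        # Hebbian storage: superimpose outer products of +/-1 patterns.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0.0)   # no self-connections
        return W / n

    def recall(W, probe, steps=10):
        # Iterate the sign rule until the state settles on a stored pattern.
        s = probe.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                         [ 1,  1,  1,  1, -1, -1, -1, -1]])
    W = store(patterns)
    noisy = patterns[0].copy()
    noisy[0] *= -1                 # corrupt one bit of the stored pattern
    print(recall(W, noisy))        # recovers patterns[0]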
Journal:Informatica
Volume 15, Issue 4 (2004), pp. 551–564
Abstract
Text categorization – the assignment of natural language documents to one or more predefined categories based on their semantic content – is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and the architecture. This paper discusses the use of a decision tree classifier to initialize a multilayer neural network and thereby improve text categorization accuracy. The decision tree, from the root node down to a final leaf, is used to initialize each individual unit. Growing decision trees with increasingly larger amounts of training data results in larger trees. As a consequence, the neural networks constructed from these decision trees are often larger and more complex than necessary. An appropriate choice of the certainty factor produces trees that remain essentially constant in size in the face of increasingly larger training sets. Experimental results support the conclusion that error-based pruning can be used to produce appropriately sized trees, which are mapped directly to an optimal neural network architecture with good accuracy. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with a multilayer neural network initialized by the traditional random method and with decision tree classifiers.
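The abstract does not spell out the tree-to-network mapping; the sketch below illustrates one common scheme in which each internal split of a pruned decision tree becomes one hidden unit whose weight vector encodes the split feature and threshold. The ccp_alpha cost-complexity pruning knob stands in, by analogy only, for the certainty factor discussed in the paper; the data and all parameter values are assumptions.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def tree_to_hidden_layer(tree, n_features):
        # Map each internal split node to one hidden unit that fires on
        # x[feature] - threshold > 0: weight = e_feature, bias = -threshold.
        t = tree.tree_
        internal = [i for i in range(t.node_count) if t.children_left[i] != -1]
        W = np.zeros((len(internal), n_features))
        b = np.zeros(len(internal))
        for row, node in enumerate(internal):
            W[row, t.feature[node]] = 1.0
            b[row] = -t.threshold[node]
        return W, b

    rng = np.random.default_rng(0)
    X = rng.random((200, 5))
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)
    tree = DecisionTreeClassifier(ccp_alpha=0.01).fit(X, y)   # pruned tree
    W, b = tree_to_hidden_layer(tree, X.shape[1])
    print(W.shape)   # hidden-layer size tracks the pruned tree size

The point of such a mapping is that pruning the tree directly controls the size and complexity of the resulting network, rather than leaving the architecture to be guessed.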
Journal:Informatica
Volume 14, Issue 1 (2003), pp. 95–110
Abstract
Predicting the mechanical properties of concrete is a complex non-linear problem. As a new approach, artificial neural networks can extract rules from data, but the traditional training algorithms suffer from convergence difficulties. The authors defined a new convex function of the grand total error and deduced a global optimization back-propagation algorithm (GOBPA), which solves the local minimum problem. For the adjustment of the weights and the computation of the errors of the neurons in the various layers, a set of formulae is obtained by optimizing the grand total error function over a simple output space instead of a complicated weight space. The concrete strength simulated by the neural networks accords with the experimental data on concrete, which demonstrates that this method is applicable to the prediction of concrete properties at the required precision. Computational results show that GOBPA performs better than a linear regression analysis.
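GOBPA's formulae are not reproduced in the abstract; the sketch below only illustrates the underlying idea that the grand total error is convex (quadratic) in the network outputs, so part of the problem can be solved exactly in output space while the remaining weights are adjusted by gradient descent. The synthetic data, network size, and learning rate are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((100, 4))                  # synthetic mix-design inputs
    t = X @ np.array([0.5, 1.2, -0.3, 0.8]) + 0.1 * rng.standard_normal(100)

    n_hidden = 8
    W1 = 0.5 * rng.standard_normal((4, n_hidden))   # hidden-layer weights
    b1 = np.zeros(n_hidden)

    for _ in range(500):
        H = np.tanh(X @ W1 + b1)              # hidden activations
        A = np.c_[H, np.ones(len(H))]
        # The total error is convex in the outputs, so the output layer is
        # obtained exactly by linear least squares (no local minima here).
        w2, *_ = np.linalg.lstsq(A, t, rcond=None)
        y = A @ w2
        # Only the hidden layer is adjusted by gradient descent.
        delta = ((y - t)[:, None] * w2[:n_hidden]) * (1.0 - H ** 2)
        W1 -= 0.01 * X.T @ delta / len(X)
        b1 -= 0.01 * delta.mean(axis=0)

    print(float(np.mean((y - t) ** 2)))       # grand total (mean squared) error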
Journal:Informatica
Volume 7, Issue 4 (1996), pp. 525–541
Abstract
In the present paper, the method of structure analysis for multivariate functions was applied to rational approximation in classification problems. The pattern recognition and generalisation ability was then investigated experimentally in numeral recognition. A comparison with the Hopfield net was carried out. The overall results of using the new approach may be regarded as a success.
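The structure-analysis method itself is not detailed in the abstract; as a hedged sketch of rational approximation used for classification, the code below fits a ratio of low-degree polynomials by the standard linearized least-squares trick (solving P(x) - y·Q~(x) = y, with the denominator written as Q(x) = 1 + Q~(x) so its constant term is fixed). The feature construction, degree, and data are illustrative assumptions.

    import numpy as np

    def poly_features(X, degree=2):
        # Monomial features up to the given degree: 1, x_j, x_j^2, ...
        cols = [np.ones(len(X))]
        for d in range(1, degree + 1):
            cols.extend(X[:, j] ** d for j in range(X.shape[1]))
        return np.column_stack(cols)

    def fit_rational(X, y, degree=2):
        # Linearized fit of f(x) = P(x) / (1 + Q~(x)) by least squares.
        Phi = poly_features(X, degree)
        A = np.hstack([Phi, -y[:, None] * Phi[:, 1:]])   # unknowns: p, q~
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef[:Phi.shape[1]], coef[Phi.shape[1]:]

    def predict(X, p, q, degree=2):
        Phi = poly_features(X, degree)
        return (Phi @ p) / (1.0 + Phi[:, 1:] @ q)

    rng = np.random.default_rng(1)
    X = rng.standard_normal((300, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(float)  # class labels 0/1
    p, q = fit_rational(X, y)
    print(np.mean((predict(X, p, q) > 0.5) == y))        # training accuracy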
Journal:Informatica
Volume 5, Issues 1–2 (1994), pp. 241–255
Abstract
Neural networks are often characterized as highly nonlinear systems with a fairly large number of parameters (on the order of 10³–10⁴). This fact makes the optimization of the parameters a nontrivial problem. The astonishing point, however, is that local optimization techniques are widely used and yield reliable convergence in many cases. Since the optimization of neural networks is a high-dimensional, multi-extremal problem, global optimization methods would ordinarily be applied in this case. On the basis of a Perceptron-like unit (the building block of most neural network architectures), we analyze why local optimization techniques are so successful in the field of neural networks. The result is that a linear approximation of the neural network can be sufficient to evaluate a starting point for the local optimization procedure in the nonlinear regime. This result can help in developing faster and more robust algorithms for the optimization of neural network parameters.
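As a minimal sketch of this conclusion for a single sigmoid (Perceptron-like) unit: near zero activation, sigma(z) ~ 1/2 + z/4, so a linear least-squares fit yields a closed-form starting point, which a local gradient method then refines in the nonlinear regime. The data, step size, and iteration count below are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((200, 3))
    w_true = np.array([1.5, -2.0, 0.7])
    t = 1.0 / (1.0 + np.exp(-X @ w_true))     # targets from a sigmoid unit

    # Step 1: linear approximation. Near sigma(z) ~ 0.5 + z/4 the unit is
    # linear in w, so a least-squares fit gives a start point in closed form.
    w0, *_ = np.linalg.lstsq(X, 4.0 * (t - 0.5), rcond=None)

    # Step 2: local optimization (plain gradient descent) in the nonlinear
    # regime, started from the linear estimate rather than random weights.
    w = w0.copy()
    for _ in range(1000):
        y = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ ((y - t) * y * (1 - y)) / len(X)
        w -= 2.0 * grad
    print(w0, w)    # linear start point, then the refined solution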