Pub. online: 28 Oct 2025 | Type: Research Article | Open Access
Journal: Informatica
Volume 37, Issue 1 (2026), pp. 1–24
Abstract
Traditional loss functions such as mean squared error (MSE) are widely employed, but they often struggle to capture the dynamic characteristics of high-dimensional nonlinear systems. To address this issue, we propose an improved loss function that integrates linear multistep methods, system-consistency constraints, and prediction-phase error control. This construction simultaneously improves training accuracy and long-term stability. Furthermore, the introduction of recursive loss and interpolation strategies brings the model closer to practical prediction scenarios, broadening its applicability. Numerical simulations demonstrate that this construction significantly outperforms both MSE and existing custom loss functions.
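The abstract does not give the exact formulation, so the following is only a minimal sketch of the general idea: an MSE term augmented with a consistency residual from one particular linear multistep method (two-step Adams-Bashforth). The function name, the weighting `lam`, and the assumption that a vector field `f` is available are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def multistep_consistency_loss(y_pred, y_true, f, dt, lam=0.1):
    """Sketch: MSE plus a two-step Adams-Bashforth consistency residual.

    y_pred, y_true : arrays of shape (T, d), trajectories sampled at step dt
    f              : callable approximating the vector field dy/dt
    lam            : weight of the consistency penalty (hypothetical)
    """
    mse = np.mean((y_pred - y_true) ** 2)

    # Two-step Adams-Bashforth prediction:
    #   y[n+2] ~= y[n+1] + dt * (1.5 * f(y[n+1]) - 0.5 * f(y[n]))
    ab2 = y_pred[1:-1] + dt * (1.5 * f(y_pred[1:-1]) - 0.5 * f(y_pred[:-2]))
    consistency = np.mean((y_pred[2:] - ab2) ** 2)

    return mse + lam * consistency
```

A trajectory that exactly satisfies the multistep recurrence incurs zero consistency penalty, so the extra term only penalizes predictions that are dynamically inconsistent with the system.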
Journal: Informatica
Volume 15, Issue 4 (2004), pp. 551–564
Abstract
Text categorization – the assignment of natural language documents to one or more predefined categories based on their semantic content – is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of a decision tree classifier to initialize a multilayer neural network for improving text categorization accuracy. Each path in the decision tree, from the root node to a final leaf, is used to initialize a single unit. Growing decision trees with increasingly larger amounts of training data results in larger trees; as a consequence, the neural networks constructed from these decision trees are often larger and more complex than necessary. An appropriate choice of certainty factor produces trees that are essentially constant in size in the face of increasingly larger training sets. Experimental results support the conclusion that error-based pruning can be used to produce appropriately sized trees, which are directly mapped to an optimal neural network architecture with good accuracy. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with a multilayer neural network initialized by the traditional random method and with decision tree classifiers.
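As a rough illustration of the node-to-unit mapping idea (an assumption about the construction, not the paper's exact algorithm), each internal split of the form "feature value above threshold" can seed one hidden sigmoid unit whose weight and bias reproduce that test as a steep sigmoid; the toy tree below is hypothetical.

```python
import numpy as np

# Hypothetical tree: two internal nodes over 3 features,
# each written as (feature_index, threshold).
tree_nodes = [(0, 0.5), (2, 1.2)]
n_features = 3

def tree_to_hidden_layer(nodes, n_features, sharpness=4.0):
    """Initialize one hidden unit per decision-tree split node so that
    the unit's sigmoid approximates the test x[feature] > threshold."""
    W = np.zeros((len(nodes), n_features))
    b = np.zeros(len(nodes))
    for i, (feat, thr) in enumerate(nodes):
        W[i, feat] = sharpness        # large weight -> near-step sigmoid
        b[i] = -sharpness * thr       # shifts the step to the threshold
    return W, b

W, b = tree_to_hidden_layer(tree_nodes, n_features)
# Hidden activations for one sample: x[0] = 1.0 passes the first test,
# x[2] = 0.0 fails the second.
h = 1.0 / (1.0 + np.exp(-(W @ np.array([1.0, 0.0, 0.0]) + b)))
```

Because each unit starts as a soft version of an already-useful tree split, subsequent gradient training refines rather than rediscovers the decision boundaries, which is the intuition behind initializing from a pruned, appropriately sized tree.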
Journal: Informatica
Volume 14, Issue 1 (2003), pp. 95–110
Abstract
Predicting the mechanical properties of concrete is a complex nonlinear problem. As a new approach, artificial neural networks can extract rules from data, but they have convergence difficulties under traditional training algorithms. The authors define a new convex function of the grand total error and derive a global optimization back-propagation algorithm (GOBPA), which solves the local minimum problem. For adjusting the weights and computing the errors of the neurons in the various layers, a set of formulae is obtained by optimizing the grand total error function over a simple output space instead of a complicated weight space. Concrete strength simulated by the neural networks agrees with experimental data on concrete, which demonstrates that this method is applicable to predicting concrete properties within the required precision. Computational results show that GOBPA performs better than a linear regression analysis.
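The abstract's central point, that the error becomes convex when optimized over an output space rather than the weight space, can be illustrated by analogy with a single linear output layer: given fixed hidden activations and desired outputs, the weight fit is a convex least-squares problem with a unique global minimum. This is an illustrative analogy only; GOBPA's actual formulae are derived in the paper.

```python
import numpy as np

# Fixed hidden-layer outputs H for 50 samples and desired outputs Y.
rng = np.random.default_rng(0)
H = rng.normal(size=(50, 8))
W_true = rng.normal(size=(8, 1))
Y = H @ W_true

# ||H W - Y||^2 is convex in W, so this least-squares solve reaches
# the unique global minimum -- no local-minimum problem at this stage.
W, *_ = np.linalg.lstsq(H, Y, rcond=None)
```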
Journal: Informatica
Volume 9, Issue 2 (1998), pp. 141–160
Abstract
Nonlinearities play a crucial role in brain processes. They take place in the elements of the neuronal system: synapses, dendrite membranes, the soma of neurons, and axons. It is established that the soma nonlinearity, which is of sigmoidal shape, is not as strong as the electric current-voltage relation of a dendrite membrane. That relation is N-shaped, with two stable points and one unstable point. In dynamics, this leads to the appearance of a switch wave or the formation of some logic functions. We present artificial logic circuits based on an electrical analogy of dendritic membrane characteristics in the static and dynamic cases. Nonlinear cable theory and numerical simulation were used. Based on the proposed logic circuit construction, we suppose that dendritic membrane processes are able not only to gather and transfer information but also to transform and classify knowledge.
The theoretical substantiation and numerical experiments are only a first step toward proving neuronal dendritic logic constructions. Of course, extensive neurophysiological tests are necessary to discover the final mechanism of neuronal computing in the human brain.
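The bistability behind the switch behaviour described above can be sketched with a generic N-shaped current-voltage curve, here a cubic with zeros at 0, 0.5, and 1 (the paper's actual membrane characteristic may differ). Under the relaxation dynamics dV/dt = -I(V), the two outer equilibria are stable and the middle one is unstable, so the voltage settles to one of two states, a binary-logic-like outcome.

```python
def I_N(v):
    """Hypothetical N-shaped current-voltage relation: a cubic with
    equilibria at v = 0, 0.5, 1 (the middle one unstable)."""
    return v * (v - 0.5) * (v - 1.0)

def relax(v0, dt=0.01, steps=4000):
    """Forward-Euler relaxation of dV/dt = -I_N(V) from v0."""
    v = v0
    for _ in range(steps):
        v -= dt * I_N(v)
    return v
```

Starting just below the unstable point (e.g. 0.4) the voltage relaxes to 0; starting just above it (e.g. 0.6) it relaxes to 1, illustrating the two-state switch.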
Journal: Informatica
Volume 5, Issues 1-2 (1994), pp. 241–255
Abstract
Neural networks are often characterized as highly nonlinear systems with a fairly large number of parameters (on the order of 10³–10⁴). This makes the optimization of the parameters a nontrivial problem. The surprising fact is that local optimization techniques are widely used and yield reliable convergence in many cases. Since the optimization of neural networks is a high-dimensional, multi-extremal problem, one would normally expect global optimization methods to be required. On the basis of a perceptron-like unit (the building block of most neural network architectures), we analyze why local optimization is so successful in the field of neural networks. The result is that a linear approximation of the neural network can be sufficient to evaluate the starting point for the local optimization procedure in the nonlinear regime. This result can help in developing faster and more robust algorithms for the optimization of neural network parameters.