Journal: Informatica
Volume 24, Issue 3 (2013), pp. 395–411
Abstract
In this paper, the nonlinear neural-network FitzHugh–Nagumo model with an expansion by the excited neuronal kernel function is investigated. The mean field approximation of neuronal potentials and recovery currents inside neuron ensembles was used. The biologically more realistic nonlinear sodium ionic current–voltage characteristic and kernel functions were applied. The possibility of transforming the nonlinear integro-differential equations with kernel functions, via the Fourier transform, into partial differential equations allows us to overcome analytical and numerical modeling difficulties. The equivalence of the two kinds of solutions was confirmed by an error analysis. The approach of equivalent partial differential equations was successfully employed to solve the system with heterogeneous synaptic functions, as well as the FitzHugh–Nagumo nonlinear time-delayed differential equations in the case of Hopf bifurcation and stability of stationary states. The analytical studies are corroborated by extensive numerical modeling experiments.
The digital simulation at transient and steady-state conditions was carried out using the finite difference technique. A comparison of the simulation results revealed that some of the calculated parameters, i.e. the response and sensitivity, are the same, while others, i.e. the half-time of the steady state, differ significantly between the models.
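As a minimal illustration of the dynamics this abstract refers to, the sketch below integrates the classic two-variable FitzHugh–Nagumo system with explicit Euler time stepping. The parameter values are standard textbook choices, not those of the paper, and the kernel-function expansion and mean field machinery are omitted.

```python
import numpy as np

def fitzhugh_nagumo(v0, w0, I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=5000):
    """Integrate the classic FitzHugh-Nagumo system
        dv/dt = v - v^3/3 - w + I
        dw/dt = eps * (v + a - b*w)
    with explicit Euler. Parameters are illustrative textbook values."""
    v, w = v0, w0
    vs = np.empty(steps)
    for k in range(steps):
        dv = v - v**3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        vs[k] = v
    return vs

# With this stimulus current the membrane variable settles into a limit cycle.
trace = fitzhugh_nagumo(-1.0, 1.0)
```

A finite difference scheme like this Euler loop is the simplest instance of the digital simulation technique mentioned above; production studies would use a smaller step or an implicit scheme for stiffness.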
Journal: Informatica
Volume 22, Issue 4 (2011), pp. 507–520
Abstract
The most classical visualization methods, including multidimensional scaling and its particular case, Sammon's mapping, encounter difficulties when analyzing large data sets. One possible way to solve the problem is the application of artificial neural networks. This paper presents the visualization of large data sets using the feed-forward neural network SAMANN. This backpropagation-like learning rule has been developed to allow a feed-forward artificial neural network to learn Sammon's mapping in an unsupervised way. In its initial form, SAMANN training is computationally expensive. In this paper, we identify conditions that reduce the computational cost of visualizing even large data sets. It is shown that the original dimensionality of the data can be reduced to a lower one within a small number of iterations. Visualization results for real-world data sets are presented.
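The quantity SAMANN learns to minimize is Sammon's stress, the weighted mismatch between pairwise distances in the original and projected spaces. A minimal sketch of the stress computation (not the SAMANN training rule itself) might look as follows:

```python
import numpy as np

def sammon_stress(X, Y, eps=1e-12):
    """Sammon's stress between original points X and projected points Y:
    E = (1/sum d_ij) * sum (d_ij - dy_ij)^2 / d_ij over pairs i < j.
    Lower is better; a distance-preserving projection gives 0."""
    n = len(X)
    num, denom = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(X[i] - X[j])
            dy = np.linalg.norm(Y[i] - Y[j])
            num += (d - dy) ** 2 / (d + eps)
            denom += d
    return num / denom

X = np.random.default_rng(0).normal(size=(20, 5))
print(sammon_stress(X, X[:, :2]))  # stress of a naive 2-D projection
```

SAMANN's contribution is to produce the 2-D coordinates Y with a feed-forward network trained by a backpropagation-like rule, so new points can be mapped without re-running the whole optimization.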
Journal: Informatica
Volume 21, Issue 3 (2010), pp. 339–348
Abstract
In the presented paper, some issues of fundamental classical mechanics theory, in the sense of Ising physics, are introduced into the applied neural network area. The expansion of neural network theory is based primarily on introducing the Hebb postulate into mean field theory as an instrument for the analysis of complex systems. Appropriate propositions and a theorem with proofs are presented. In addition, some computational background is presented and discussed.
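The mean field theory invoked here reduces, in its simplest Ising form, to a self-consistency condition for the magnetization, m = tanh(beta*J*m), which can be solved by fixed-point iteration. The sketch below is a generic illustration of that equation, not the paper's extended formulation:

```python
import math

def mean_field_magnetization(beta_J, m0=0.5, iters=200):
    """Fixed-point iteration of the Ising mean-field equation m = tanh(beta*J*m).
    For beta*J > 1 a nonzero (ordered) solution exists; otherwise m -> 0."""
    m = m0
    for _ in range(iters):
        m = math.tanh(beta_J * m)
    return m

print(mean_field_magnetization(2.0))   # ordered phase: m near 0.96
print(mean_field_magnetization(0.5))   # disordered phase: m decays to 0
```

The transition at beta*J = 1 is the mean-field analogue of the storage and retrieval transitions studied in neural-network versions of the model.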
Journal: Informatica
Volume 20, Issue 4 (2009), pp. 477–486
Abstract
In the present paper, the neural network theory based on the presumptions of the Ising model is considered. Indirect couplings, Dirac distributions and the corrected Hebb rule are introduced and analyzed. The embedded patterns memorized in a neural network and the indirect couplings are treated as random. Apart from the complex theory based on Dirac distributions, the simplified stationary mean field equations and their solutions, taking into account the ergodicity of the average overlap and the indirect order parameter, are presented. The modeling results are demonstrated to corroborate the theoretical statements and applied aspects.
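For context, the standard (uncorrected) Hebb rule and the overlap order parameter the abstract builds on can be sketched in a few lines; the indirect couplings and Dirac-distribution machinery of the paper are not modeled here.

```python
import numpy as np

def hebb_couplings(patterns):
    """Standard Hebb rule: J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, zero diagonal."""
    p, N = patterns.shape
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)
    return J

def overlap(state, pattern):
    """Order parameter m = (1/N) * sum_i s_i xi_i, in [-1, 1]."""
    return float(state @ pattern) / len(state)

rng = np.random.default_rng(1)
xi = rng.choice([-1, 1], size=(3, 200))   # 3 random patterns, 200 spins
J = hebb_couplings(xi)
s = xi[0].copy()
for _ in range(5):                        # zero-temperature parallel dynamics
    s = np.sign(J @ s)
print(overlap(s, xi[0]))                  # retrieval: overlap stays near 1
```

At this low storage load the memorized pattern is a stable attractor, which is what the stationary mean field equations describe in the large-N limit.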
Journal: Informatica
Volume 12, Issue 1 (2001), pp. 101–108
Abstract
This paper considers some aspects of using a cascade-correlation network in the investment task, in which one must determine the most suitable project in which to invest money. This is one of the most common economic tasks. Various bibliographical sources on economics describe different methods of choosing investment projects. However, they all use either one or a few criteria, i.e., the most valuable criteria are chosen out of the full set. As a result, much of the information contained in the remaining criteria is lost. A neural network enables one to avoid such information losses: it accumulates information and yields better results when choosing an investment project than classical methods. The cascade-correlation network architecture used in this paper was developed by Scott E. Fahlman and Christian Lebiere at Carnegie Mellon University.
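The distinctive step in Fahlman and Lebiere's cascade-correlation algorithm is scoring candidate hidden units by the magnitude of the covariance between a candidate's output and the network's residual errors; the best-scoring candidate is then frozen into the network. The sketch below shows only that scoring step, under the assumption of pre-computed candidate outputs and residuals:

```python
import numpy as np

def candidate_correlation(candidate_out, residual_errors):
    """Cascade-correlation candidate score S: the covariance magnitude between a
    candidate unit's output (shape: n_samples) and the residual errors
    (shape: n_samples x n_outputs), summed over output units."""
    v = candidate_out - candidate_out.mean()
    e = residual_errors - residual_errors.mean(axis=0)
    return float(np.abs(v @ e).sum())

rng = np.random.default_rng(0)
err = rng.normal(size=(50, 2))              # residual errors for 2 outputs
cand = err[:, 0] + 0.1 * rng.normal(size=50)  # candidate tracking the first error
print(candidate_correlation(cand, err))
```

A candidate whose output tracks the residual error scores highly and, once installed with a fixed weight, lets the output layer cancel that error component; this is how the architecture grows one hidden unit at a time.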