Pub. online: 1 Jan 2017 | Type: Research Article | Open Access
Journal:Informatica
Volume 28, Issue 2 (2017), pp. 359–374
Abstract
In recent years, the growth of marine traffic in ports and their surroundings has raised traffic and security control problems and increased the workload of traffic control operators. The automated identification system of vessel movement generates huge amounts of data that need to be analysed to make proper decisions. Thus, rapid self-learning algorithms for the decision support system have to be developed to detect abnormal vessel movement in intense marine traffic areas. The paper presents a new self-learning adaptive classification algorithm, based on the combination of a self-organizing map (SOM) and a virtual pheromone, for abnormal vessel movement detection in maritime traffic. To improve the quality of classification results, the Mexican hat function has been used as the SOM neighbourhood function. To estimate the classification results of the proposed algorithm, an experimental investigation has been performed using a real data set provided by the Klaipėda seaport and obtained from the automated identification system. The results of the research show that the proposed algorithm provides rapid self-learning and classification.
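For illustration, a minimal sketch of how a Mexican hat (Ricker wavelet) neighbourhood function might enter a SOM weight update is given below; the function names, parameters and update form are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mexican_hat(dist, sigma):
    """Mexican hat (Ricker wavelet) neighbourhood: positive near the winner,
    slightly negative for more distant neurons, fading to zero."""
    r2 = (dist / sigma) ** 2
    return (1.0 - r2) * np.exp(-r2 / 2.0)

def som_update(weights, grid_coords, x, winner_idx, lr, sigma):
    """One SOM update step with a Mexican hat neighbourhood.

    weights     : (n_neurons, n_features) codebook vectors
    grid_coords : (n_neurons, 2) positions of neurons on the map grid
    x           : (n_features,) input vector
    winner_idx  : index of the best-matching unit for x
    """
    dist = np.linalg.norm(grid_coords - grid_coords[winner_idx], axis=1)
    h = mexican_hat(dist, sigma)              # may be negative: lateral inhibition
    weights += lr * h[:, None] * (x - weights)
    return weights
```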
Journal:Informatica
Volume 24, Issue 3 (2013), pp. 395–411
Abstract
In this paper, the nonlinear neural network FitzHugh–Nagumo model with an expansion by the excited neuronal kernel function has been investigated. The mean field approximation of neuronal potentials and recovery currents inside neuron ensembles was used. The biologically more realistic nonlinear sodium ionic current–voltage characteristic and kernel functions were applied. The possibility of representing the nonlinear integro-differential equations with kernel functions, via the Fourier transformation, as partial differential equations allows us to overcome the analytical and numerical modeling difficulties. The equivalence of the two kinds of solutions was confirmed based on an error analysis. The approach of the equivalent partial differential equations was successfully employed to solve the system with heterogeneous synaptic functions, as well as the FitzHugh–Nagumo nonlinear time-delayed differential equations in the case of the Hopf bifurcation and the stability of stationary states. The analytical studies are corroborated by many numerical modeling experiments.
The digital simulation at transient and steady-state conditions was carried out using the finite difference technique. The comparison of the simulation results revealed that some of the calculated parameters, i.e. the response and sensitivity, are the same, while others, i.e. the half-time of the steady state, differ significantly for distinct models.
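For reference, the classical FitzHugh–Nagumo equations, without the kernel-function expansion and mean field terms discussed in the abstract, take the standard textbook form (symbols follow the common convention, not necessarily the paper's notation):

```latex
\begin{aligned}
\frac{dv}{dt} &= v - \frac{v^{3}}{3} - w + I_{\mathrm{ext}},\\[2pt]
\frac{dw}{dt} &= \varepsilon\,(v + a - b\,w),
\end{aligned}
```

where v is the membrane potential, w the recovery current, I_ext the external stimulus, and a, b, ε constants controlling the recovery dynamics.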
Journal:Informatica
Volume 22, Issue 4 (2011), pp. 507–520
Abstract
Most classical visualization methods, including multidimensional scaling and its particular case, Sammon's mapping, encounter difficulties when analyzing large data sets. One possible way to solve the problem is the application of artificial neural networks. This paper presents the visualization of large data sets using the feed-forward neural network SAMANN. Its backpropagation-like learning rule allows a feed-forward artificial neural network to learn Sammon's mapping in an unsupervised way. In its initial form, SAMANN training is computationally expensive. In this paper, we identify conditions that reduce the computational expenditure of visualizing even large data sets. It is shown that the original dimensionality of the data can be reduced to a lower one using a small number of iterations. The visualization results of real-world data sets are presented.
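SAMANN minimizes Sammon's stress between pairwise distances in the original and projected spaces. A minimal sketch of that stress, assuming NumPy/SciPy and illustrative function names, is:

```python
import numpy as np
from scipy.spatial.distance import pdist

def sammon_stress(X_high, X_low, eps=1e-12):
    """Sammon's stress: weighted mismatch between pairwise distances in the
    original space (X_high) and in the low-dimensional projection (X_low)."""
    d_high = np.maximum(pdist(X_high), eps)   # distances in the original space
    d_low = pdist(X_low)                      # distances in the projection
    return np.sum((d_high - d_low) ** 2 / d_high) / np.sum(d_high)
```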
Journal:Informatica
Volume 21, Issue 3 (2010), pp. 339–348
Abstract
In this paper, some issues of fundamental classical mechanics theory, in the sense of Ising physics, are introduced into the applied neural network area. The expansion of neural network theory is based primarily on introducing the Hebb postulate into mean field theory as an instrument for the analysis of complex systems. Appropriate propositions and a theorem with proofs are proposed. In addition, some computational background is presented and discussed.
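The Hebb postulate referred to here is commonly written, for a network of N spins storing p binary patterns ξ^μ, as J_ij = (1/N) Σ_μ ξ_i^μ ξ_j^μ. A small sketch of this standard construction (illustrative only; it does not reproduce the paper's mean field derivation):

```python
import numpy as np

def hebb_couplings(patterns):
    """Hebbian coupling matrix for binary (+/-1) patterns.

    patterns : (p, N) array of p stored patterns over N spins.
    Returns J with J_ij = (1/N) * sum_mu xi_i^mu * xi_j^mu and zero diagonal.
    """
    p, N = patterns.shape
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)
    return J
```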
Journal:Informatica
Volume 20, Issue 4 (2009), pp. 477–486
Abstract
In the present paper, the neural network theory based on the presumptions of the Ising model is considered. Indirect couplings, Dirac distributions and the corrected Hebb rule are introduced and analyzed. The embedded patterns memorized in a neural network and the indirect couplings are considered random. Apart from the complex theory based on Dirac distributions, the simplified stationary mean field equations and their solutions, taking into account the ergodicity of the average overlap and the indirect order parameter, are presented. The modeling results are demonstrated to corroborate the theoretical statements and applied aspects.
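The average overlap mentioned in the abstract is usually defined as m^μ = (1/N) Σ_i ξ_i^μ s_i between the network state s and each stored pattern ξ^μ; a brief sketch under that standard definition (the corrected Hebb rule and indirect couplings of the paper are not reproduced):

```python
import numpy as np

def overlaps(patterns, state):
    """Overlap order parameters m^mu = (1/N) * sum_i xi_i^mu * s_i
    for a spin state `state` (+/-1 values) and stored patterns of shape (p, N)."""
    return patterns @ state / patterns.shape[1]
```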
Journal:Informatica
Volume 18, Issue 2 (2007), pp. 163–186
Abstract
In this article we present the general architecture of a hybrid neuro-symbolic system for the selection and stepwise elimination of predictor variables and non-relevant individuals in the construction of a model. Our purpose is to design tools for extracting the relevant variables and the relevant individuals for automatic training from data. The objective is to reduce the complexity of storage, and therefore the complexity of calculation, and to gradually improve the ordering performance, that is to say, to arrive at good-quality training.
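The abstract does not specify the elimination criterion; purely as an illustration of a stepwise elimination loop (with an ordinary linear model standing in for the neuro-symbolic system, and all names hypothetical), backward elimination of predictor variables driven by cross-validated performance could look like:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def backward_eliminate(X, y, min_features=1):
    """Greedy backward elimination: repeatedly drop the variable whose
    removal improves (or least degrades) cross-validated R^2."""
    keep = list(range(X.shape[1]))
    best = cross_val_score(LinearRegression(), X[:, keep], y, cv=5).mean()
    while len(keep) > min_features:
        candidates = []
        for j in keep:
            cols = [c for c in keep if c != j]
            score = cross_val_score(LinearRegression(), X[:, cols], y, cv=5).mean()
            candidates.append((score, j))
        score, j_drop = max(candidates)
        if score < best:                 # every removal hurts: stop
            break
        best, keep = score, [c for c in keep if c != j_drop]
    return keep, best
```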
Journal:Informatica
Volume 12, Issue 2 (2001), pp. 239–262
Abstract
The paper deals with the analysis of Research and Technology Development (RTD) in the Central European countries and the relation of RTD to the economic and social parameters of countries in this region. A methodology has been developed for the quantitative and qualitative ranking of, and the estimation of relationships among, multidimensional objects on the basis of such analysis. The knowledge has been discovered in four databases: two databases of the European Commission (EC) containing data on RTD activities, and the databases of the US CIA and the World Bank containing economic and social data. Data mining has been performed by means of visual cluster analysis (using the non-linear Sammon's mapping and Kohonen's artificial neural network, the self-organising map), regression analysis and non-linear ranking (using graphs of domination). Results on the clustering of the Central European countries and on the relations of RTD parameters with economic and social parameters are obtained. In addition, the data served for testing various features of the realisation of the self-organising map. The integration of non-classical methods (the self-organising map and graphs of domination) with classical ones (regression analysis and Sammon's mapping) increases the capacity of visual analysis and allows drawing more complete conclusions.
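As one way to read "graphs of domination": object i dominates object j when i is at least as good on every criterion and strictly better on at least one; a ranking can then follow from how often each object is dominated. The sketch below is an illustrative assumption, not the paper's exact construction:

```python
import numpy as np

def domination_edges(X):
    """Edges (i, j) of the domination graph: object i dominates object j.
    X : (n_objects, n_criteria); larger values are assumed better."""
    n = X.shape[0]
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and np.all(X[i] >= X[j]) and np.any(X[i] > X[j])]

def rank_by_domination(X):
    """Objects dominated by fewer others are ranked higher."""
    dominated = np.zeros(X.shape[0], dtype=int)
    for _, j in domination_edges(X):
        dominated[j] += 1
    return np.argsort(dominated)
```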
Journal:Informatica
Volume 12, Issue 1 (2001), pp. 101–108
Abstract
This paper considers some aspects of using a cascade-correlation network in the investment task, in which it is required to determine the most suitable project to invest money in. This is one of the most common economic tasks. Various bibliographical sources on economics describe different methods of choosing investment projects. However, they all use either one or a few criteria, i.e., only the most valuable criteria are chosen from the whole set. With this, a lot of the information contained in the other choice criteria is omitted. A neural network enables one to avoid these information losses. It accumulates information and helps to obtain better results when choosing an investment project in comparison with classical methods. The cascade-correlation network architecture used in this paper was developed by Scott E. Fahlman and Christian Lebiere at Carnegie Mellon University.
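Cascade-correlation grows the network by training candidate hidden units to correlate with the current residual error, freezing each new unit, and refitting the output weights. A compact sketch of that idea (a simplified regression variant with illustrative names, not Fahlman and Lebiere's full algorithm):

```python
import numpy as np

def _with_bias(F):
    return np.hstack([F, np.ones((F.shape[0], 1))])

def train_output(F, y):
    """Least-squares output weights from the current feature set F to target y."""
    w, *_ = np.linalg.lstsq(_with_bias(F), y, rcond=None)
    return w

def add_cascade_unit(F, resid, steps=500, lr=0.05, seed=0):
    """Train one candidate tanh unit to maximise the correlation between
    its output and the residual error, then return its (frozen) weights."""
    rng = np.random.default_rng(seed)
    A = _with_bias(F)
    v = rng.normal(scale=0.1, size=A.shape[1])
    r = resid - resid.mean()
    for _ in range(steps):
        h = np.tanh(A @ v)
        s = np.sign((h - h.mean()) @ r)          # sign of the current correlation
        v += lr * A.T @ (s * r * (1.0 - h ** 2)) / len(r)
    return v

def cascade_correlation(X, y, n_hidden=3):
    """Grow hidden units one at a time; each new unit sees the inputs and all
    previously added units, and the output weights are refit after each addition."""
    F, units = X.copy(), []
    w = train_output(F, y)
    for _ in range(n_hidden):
        resid = y - _with_bias(F) @ w
        v = add_cascade_unit(F, resid)
        units.append(v)
        F = np.hstack([F, np.tanh(_with_bias(F) @ v)[:, None]])
        w = train_output(F, y)
    return units, w
```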
Journal:Informatica
Volume 5, Issues 1-2 (1994), pp. 123–166
Abstract
We consider here the average deviation as the most important objective when designing numerical techniques and algorithms. We call that a Bayesian approach.
We start by describing the Bayesian approach to the continuous global optimization. Then we show how to apply the results to the adaptation of parameters of randomized techniques of optimization. We assume that there exists a simple function which roughly predicts the consequences of decisions. We call it heuristics. We define the probability of a decision by a randomized decision function depending on heuristics. We fix this decision function, except for some parameters that we call the decision parameters.
We repeat the randomized decision procedure several times for given decision parameters and regard the best outcome as the result. We optimize the decision parameters to make the search more efficient. Thus we replace the original optimization problem by an auxiliary problem of continuous stochastic optimization. We solve the auxiliary problem by the Bayesian methods of global optimization. Therefore we call the approach the Bayesian one.
We discuss the advantages and disadvantages of the Bayesian approach. We describe applications to some discrete programming problems, such as the optimization of mixed Boolean bilinear functions, including the scheduling of batch operations and the optimization of neural networks.
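A toy illustration of this scheme, with a knapsack problem and the value-to-weight ratio as the heuristics; all names are illustrative, and the grid search over the decision parameter is only a stand-in for the Bayesian global optimization used in the paper:

```python
import numpy as np

def randomized_greedy_knapsack(values, weights, capacity, x, rng):
    """One randomized construction: items are picked with probability
    proportional to heuristics**x, where heuristics = value/weight.
    x is the decision parameter: x = 0 is uniform random, large x is near-greedy."""
    heur = values / weights
    remaining = list(range(len(values)))
    load = total = 0.0
    while remaining:
        feasible = [i for i in remaining if load + weights[i] <= capacity]
        if not feasible:
            break
        p = heur[feasible] ** x
        p /= p.sum()
        i = rng.choice(feasible, p=p)
        load += weights[i]
        total += values[i]
        remaining.remove(i)
    return total

def best_of_repeats(x, values, weights, capacity, repeats=50, seed=0):
    """Repeat the randomized procedure and keep the best outcome."""
    rng = np.random.default_rng(seed)
    return max(randomized_greedy_knapsack(values, weights, capacity, x, rng)
               for _ in range(repeats))

# Tune the decision parameter x; a coarse grid search stands in here for the
# Bayesian global optimization of the auxiliary continuous problem.
values = np.array([10.0, 7.0, 9.0, 4.0, 8.0])
weights = np.array([5.0, 4.0, 6.0, 2.0, 5.0])
best_x = max(np.linspace(0.0, 4.0, 9),
             key=lambda x: best_of_repeats(x, values, weights, capacity=12.0))
```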
Journal:Informatica
Volume 2, Issue 2 (1991), pp. 221–232
Abstract
The principles of a neural network environmental model are proposed. The principles are universal and can use different neural network architectures. Such a model is self-organizing and can operate in both regimes, with and without a teacher. It encodes information about objects, their features and the actions operating in an environment, and analyzes concrete situations. There are functions for making an action plan and for action control. The goal of the model is given externally. The model has more than sixteen active regimes. The neural network environmental model is implemented in software and hardware tools.