Journal: Informatica
Volume 24, Issue 4 (2013), pp. 603–618
Abstract
Image synthesis techniques are used in a wide range of applications, as they provide the information required to create realistic visualizations. For fast hardware rendering, these techniques usually employ a triangle-based representation of the scene geometry. In this paper, we introduce a new and simple framework for performing on-the-fly refinement and simplification of meshes entirely on the GPU. Since we aim to ease the integration of level-of-detail management into artists' creation workflows, the presented method is simple to implement: it requires only a coarse mesh, its displacement map, and a geometry shader. At rendering time, we employ a geometry shader to parallelize the tessellation and displacement steps. The tessellation step performs uniform refinement or simplification operations by applying a fixed subdivision criterion. Our method also exploits coherence by reusing the last computed mesh. The result is a method that integrates flexibly with standard 3D tools, is easy to implement, exploits coherence, and runs entirely on the GPU.
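As a rough illustration of the two per-primitive steps the abstract describes, the following Python sketch performs uniform 1-to-4 refinement and normal-based displacement on the CPU; the function names, the nearest-neighbour map lookup, and the scale parameter are illustrative assumptions, not the authors' shader code.

```python
import numpy as np

def refine(triangles, levels):
    """Uniform 1-to-4 refinement: split each triangle at its edge midpoints.
    Simplification applies the same fixed criterion with fewer levels."""
    for _ in range(levels):
        out = []
        for a, b, c in triangles:
            ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
            out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        triangles = out
    return triangles

def displace(vertex, normal, uv, disp_map, scale=1.0):
    """Offset a vertex along its normal by a nearest-neighbour sample of the
    displacement map (the GPU version samples a texture instead)."""
    h, w = disp_map.shape
    i = min(int(uv[1] * h), h - 1)
    j = min(int(uv[0] * w), w - 1)
    return vertex + scale * disp_map[i, j] * normal

# Example: one coarse triangle refined twice yields 4**2 = 16 triangles.
tri = [(np.zeros(3), np.array([1.0, 0, 0]), np.array([0, 1.0, 0]))]
print(len(refine(tri, 2)))  # -> 16
```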
Journal: Informatica
Volume 22, Issue 4 (2011), pp. 507–520
Abstract
Most classical visualization methods, including multidimensional scaling (MDS) and its particular case, Sammon's mapping, encounter difficulties when analyzing large data sets. One possible way to solve this problem is to apply artificial neural networks. This paper presents the visualization of large data sets using the feed-forward neural network SAMANN. Its backpropagation-like learning rule allows a feed-forward artificial neural network to learn Sammon's mapping in an unsupervised way. In its initial form, SAMANN training is computationally expensive. In this paper, we identify conditions that reduce the computational cost of visualizing even large data sets. We show that the original dimensionality of the data can be reduced to a lower one using a small number of iterations. Visualization results for real-world data sets are presented.
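For context, SAMANN adjusts network weights to minimize Sammon's error between pairwise distances in the original and projected spaces. A minimal Python sketch of that error (illustrative only, not the SAMANN training procedure itself):

```python
import numpy as np

def sammon_stress(X, Y):
    """Sammon's error between pairwise distances in the original space X
    and the projected space Y (e.g., the network's 2-D outputs)."""
    n = len(X)
    stress, norm = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_star = np.linalg.norm(X[i] - X[j])  # original distance
            d = np.linalg.norm(Y[i] - Y[j])       # projected distance
            if d_star > 0:
                stress += (d_star - d) ** 2 / d_star
                norm += d_star
    return stress / norm
```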
Journal: Informatica
Volume 22, Issue 1 (2011), pp. 1–10
Abstract
Estimation and modelling problems, as they arise in many data analysis areas, often turn out to be unstable and/or intractable by standard numerical methods. Such problems frequently occur when fitting large data sets to a model and in predictive learning. Heuristics are general recommendations based on practical statistical evidence, in contrast to a fixed set of rules that cannot vary but is guaranteed to give the correct answer. Although these methods have become more standard in several fields of science, their use for estimation and modelling in statistics still appears limited. This paper surveys a set of problem-solving strategies, guided by heuristic information, that are expected to be used more frequently. The use of recent advances in different fields of large-scale data analysis is promoted, focusing on applications in medicine, biology, and technology.
Journal: Informatica
Volume 20, Issue 2 (2009), pp. 165–172
Abstract
Recent changes at the intersection of the fields of intelligent systems optimization and statistical learning are surveyed. These changes bring new theoretical and computational challenges to existing research areas ranging from web page mining to computer vision, pattern recognition, financial mathematics, bioinformatics, and many others.
Journal: Informatica
Volume 18, Issue 2 (2007), pp. 187–202
Abstract
In this paper, the relative multidimensional scaling method is investigated. This method is designed to visualize large multidimensional data sets. It applies multidimensional scaling (MDS) to a so-called basis vector set and then maps the remaining vectors of the analyzed data set. In the original relative MDS algorithm, the visualization process is divided into three steps: the set of basis vectors is constructed using the k-means clustering method; this set is projected onto the plane using the MDS algorithm; the remaining data are visualized using the relative mapping algorithm. We propose a modification that differs from the original algorithm in its strategy for selecting the basis vectors. The experimental investigation has shown that the modification outperforms the original algorithm in both visualization quality and computational cost. The conditions under which relative MDS is more efficient than standard MDS are estimated.
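A compact Python sketch of the three steps, using scikit-learn's KMeans and MDS as stand-ins and a simple least-squares placement for the relative mapping step (all illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans
from sklearn.manifold import MDS

def relative_mds(X, n_basis=50, seed=0):
    # Step 1: choose the basis vectors (k-means centres, as in the original algorithm).
    basis = KMeans(n_clusters=n_basis, n_init=10,
                   random_state=seed).fit(X).cluster_centers_
    # Step 2: project the basis set onto the plane with standard MDS.
    basis_2d = MDS(n_components=2, random_state=seed).fit_transform(basis)
    # Step 3: relative mapping -- place each remaining vector so its planar
    # distances to the fixed basis projections match the original distances.
    def place(x):
        d_star = np.linalg.norm(basis - x, axis=1)
        err = lambda y: np.sum((np.linalg.norm(basis_2d - y, axis=1) - d_star) ** 2)
        return minimize(err, basis_2d.mean(axis=0)).x
    return np.array([place(x) for x in X])
```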
Journal: Informatica
Volume 13, Issue 3 (2002), pp. 275–286
Abstract
In this paper, we analyze software that implements self-organizing maps (SOMs): SOM-PAK, SOM-TOOLBOX, Viscovery SOMine, Nenet, and two academic systems. Most of these packages can be found on the Internet as freeware, shareware, or demo versions. Self-organizing maps assist in clustering data and analyzing data similarities. The packages differ from one another in their implementation and visualization capabilities. Data on coastal dunes and their vegetation in Finland are used for an experimental comparison of how the packages present graphical results. The similarities and differences of the systems, as well as their advantages and shortcomings, are discussed.
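For reference, all the compared packages realize the same core SOM learning rule. A minimal Python sketch of one on-line update step (the parameter names and linear decay schedules are assumptions for illustration):

```python
import numpy as np

def som_step(weights, x, t, n_steps, sigma0=3.0, lr0=0.5):
    """One on-line SOM update: find the best-matching unit, then pull it and
    its grid neighbours toward the input vector x."""
    rows, cols, dim = weights.shape
    # Best-matching unit: the grid node whose codebook vector is closest to x.
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)),
                           (rows, cols))
    lr = lr0 * (1 - t / n_steps)               # decaying learning rate
    sigma = sigma0 * (1 - t / n_steps) + 1e-3  # shrinking neighbourhood radius
    for i in range(rows):
        for j in range(cols):
            g = np.exp(-((i - bmu[0]) ** 2 + (j - bmu[1]) ** 2)
                       / (2 * sigma ** 2))
            weights[i, j] += lr * g * (x - weights[i, j])
    return weights
```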