Pub. online: 1 Jan 2019
Type: Research Article
Open Access
Journal: Informatica
Volume 30, Issue 4 (2019), pp. 749–780
Abstract
Despite the wealth of empirical data in neuroscience and the abundance of interdisciplinary approaches in cognitive science, there are relatively few applicable theories of how the brain functions as a coherent system in terms of energy and entropy processes. Recently, the free energy principle has been portrayed as a possible route towards a unified brain theory. However, its capacity to unify different perspectives on brain function dynamics using free energy and entropy is yet to be established. This multidisciplinary study attempts to make sense of free energy and entropy not only from the perspective of basic Helmholtz thermodynamic principles but also within the information theory framework. Based on the proposed conceptual framework, we (i) constructed four basic brain states (deep sleep, resting, active wakefulness and thinking) as dynamic entropy and free energy processes and (ii) stylized a self-organizing mechanism of transitions between these basic brain states over a daily period. Adaptive transitions between brain states represent homeostatic rhythms, which produce complex daily brain-state dynamics. As a result, the proposed simulation model produces different self-organized circadian dynamics of brain states for different chronotypes, which corresponds with empirical observations.
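The abstract does not reproduce the simulation model itself; the sketch below is only a minimal toy illustration, in Python, of the general idea of four discrete brain states driven by circadian free energy and entropy variables. The state names follow the abstract, but the update rule, the thresholds, and the phase_shift chronotype parameter are assumptions made purely for illustration, not the authors' model.

import math

# Toy sketch (not the authors' model): four brain states driven by
# illustrative "free energy" and "entropy" variables over a 24-hour cycle.
STATES = ["deep_sleep", "resting", "active_wakeful", "thinking"]

def circadian_drive(hour, phase_shift=0.0):
    """Assumed sinusoidal homeostatic drive; phase_shift mimics chronotype."""
    return 0.5 * (1 + math.sin(2 * math.pi * (hour - 6 - phase_shift) / 24))

def pick_state(free_energy, entropy):
    """Hypothetical thresholds mapping (free energy, entropy) to a state."""
    if free_energy < 0.25:
        return "deep_sleep"
    if free_energy < 0.5:
        return "resting"
    return "thinking" if entropy > 0.6 else "active_wakeful"

def simulate_day(phase_shift=0.0, steps_per_hour=4):
    """Iterate a simple relaxation rule and record the state at each step."""
    free_energy, entropy, trajectory = 0.2, 0.3, []
    for step in range(24 * steps_per_hour):
        hour = step / steps_per_hour
        drive = circadian_drive(hour, phase_shift)
        # Free energy relaxes toward the circadian drive; entropy follows activity.
        free_energy += 0.1 * (drive - free_energy)
        entropy += 0.1 * (free_energy - entropy)
        trajectory.append((hour, pick_state(free_energy, entropy)))
    return trajectory

if __name__ == "__main__":
    for hour, state in simulate_day(phase_shift=2.0)[::8]:  # a "late" chronotype
        print(f"{hour:5.1f}h  {state}")

In such a toy setting, shifting phase_shift moves the whole daily pattern earlier or later, which is one simple way different chronotypes could be represented.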
Pub. online: 1 Jan 2018
Type: Research Article
Open Access
Journal: Informatica
Volume 29, Issue 4 (2018), pp. 693–710
Abstract
In this paper, we propose a framework for extracting translation memory from a corpus of fiction and non-fiction books. In recent years, there have been several proposals for aligning bilingual corpora and extracting translation memory from legal and technical documents. Yet, when it comes to aligning a corpus of translated fiction and non-fiction books, the existing alignment algorithms give low-precision results. To solve this low-precision problem, we propose a new method that combines existing alignment algorithms with a proactive learning approach. We define several feature functions that are used to build two classifiers, one for text filtering and one for alignment. We report results for the English–Lithuanian language pair on a bilingual corpus of 200 books and demonstrate a significant improvement in alignment accuracy over currently available alignment systems.
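The paper's actual feature functions and classifiers are not listed in the abstract; the Python sketch below only illustrates the general proactive-learning idea: score candidate sentence pairs with simple feature functions, accept confident pairs automatically, and route uncertain pairs to a human annotator. The features length_ratio and shared_digits, the weights, the thresholds, and the ask_human hook are all hypothetical stand-ins.

# Hypothetical sketch of a proactive-learning filter for candidate
# sentence pairs; the paper's real features and classifiers differ.

def length_ratio(src, tgt):
    """Feature: character-length ratio of the two sentences."""
    return min(len(src), len(tgt)) / max(len(src), len(tgt), 1)

def shared_digits(src, tgt):
    """Feature: do both sentences contain the same digits (dates, numbers)?"""
    return 1.0 if set(filter(str.isdigit, src)) == set(filter(str.isdigit, tgt)) else 0.0

def score(src, tgt, weights=(0.7, 0.3)):
    """Linear combination of feature functions (weights are assumed)."""
    return weights[0] * length_ratio(src, tgt) + weights[1] * shared_digits(src, tgt)

def filter_pairs(candidates, ask_human, low=0.4, high=0.8):
    """Keep confident pairs, drop clear mismatches, query the rest proactively."""
    kept = []
    for src, tgt in candidates:
        s = score(src, tgt)
        if s >= high:
            kept.append((src, tgt))             # confident: accept automatically
        elif s >= low and ask_human(src, tgt):  # uncertain: proactive query
            kept.append((src, tgt))
    return kept

if __name__ == "__main__":
    pairs = [("Chapter 1. The sea.", "1 skyrius. Jūra."),
             ("He left in 1984.", "Jis išvyko tą vasarą.")]
    print(filter_pairs(pairs, ask_human=lambda s, t: True))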
Journal: Informatica
Volume 22, Issue 2 (2011), pp. 203–224
Abstract
In this paper, we describe a model for aligning books and documents from a bilingual corpus with the goal of creating a “perfectly” aligned bilingual corpus at the word-to-word level. The presented algorithms differ from existing ones in that they account for the presence of a human translator, whose involvement we try to minimize. We treat the human translator as an oracle who knows the exact alignments, and the goal of the system is to optimize (minimize) the use of this oracle. The effectiveness of the oracle is measured by the speed at which a “perfectly” aligned bilingual corpus can be created. By a “perfectly” aligned corpus we mean a zero-entropy corpus, since the oracle can make alignments without any probabilistic interpretation, i.e. with 100% confidence. Sentence-level alignments and word-to-word alignments, although treated separately in this paper, are integrated into a single framework. For sentence-level alignments we provide a dynamic programming algorithm that achieves low precision and recall error rates. For word-to-word alignments, an Expectation Maximization algorithm that integrates linguistic dictionaries is suggested as the main tool for the oracle to build the “perfectly” aligned bilingual corpus. We show empirically that the suggested pre-aligned corpus requires little interaction from the oracle and that a perfectly aligned corpus can be created almost at the speed of human reading. The presented algorithms are language independent, but in this paper we verify them on the English–Lithuanian language pair with two types of text: legal documents and fiction.
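As an illustration of the word-to-word step, the sketch below implements a bare-bones IBM Model 1 style Expectation Maximization trainer in Python. It is not the paper's algorithm, and the dictionary integration is only hinted at through an assumed prior initialisation of the translation table.

from collections import defaultdict

# Minimal IBM Model 1 style EM sketch for word translation probabilities;
# the paper additionally integrates linguistic dictionaries, which is
# only approximated here by an assumed prior initialisation.

def train_model1(pairs, dictionary=None, iterations=10):
    """pairs: list of (src_tokens, tgt_tokens); returns t(tgt | src)."""
    t = defaultdict(lambda: 1e-3)
    if dictionary:                        # assumed dictionary prior
        for s, w in dictionary:
            t[(s, w)] = 1.0
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for src, tgt in pairs:
            for w in tgt:                 # E-step: expected alignment counts
                norm = sum(t[(s, w)] for s in src)
                for s in src:
                    c = t[(s, w)] / norm
                    count[(s, w)] += c
                    total[s] += c
        for (s, w), c in count.items():   # M-step: renormalise per source word
            t[(s, w)] = c / total[s]
    return t

if __name__ == "__main__":
    pairs = [("the sea".split(), "jūra".split()),
             ("the book".split(), "knyga".split())]
    t = train_model1(pairs)
    print(round(t[("sea", "jūra")], 2), round(t[("book", "knyga")], 2))

On this tiny example the shared word "the" keeps its probability mass split, while "sea" and "book" converge to their counterparts, which is the usual Model 1 behaviour a dictionary prior would only accelerate.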
Journal: Informatica
Volume 13, Issue 4 (2002), pp. 465–484
Abstract
This article presents research on using artificial neural network (ANN) methods for compound (technical and fundamental) analysis and forecasting of Lithuania's National Stock Exchange (LNSE) indices LITIN, LITIN-A and LITIN-VVP. We employed initial pre-processing (entropy and correlation analysis) to filter the model input variables (LNSE indices, macroeconomic indicators, and stock exchange indices of other countries, such as the Dow Jones and S&P for the USA, Eurex for the EU, and RTS for Russia). Investigations of the best approximation and forecasting capabilities were performed using different backpropagation ANN learning algorithms, configurations, iteration numbers, data form-factors, etc. The wide spectrum of results showed a high sensitivity to the ANN parameters. The performances of ANN autoregressive, autoregressive causative and causative trend models in approximation and forecasting were compared using linear discriminant analysis.
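The abstract does not give the network configurations; the following is an illustrative Python sketch, not the paper's setup, of a one-hidden-layer backpropagation network used autoregressively: the previous few index values predict the next one. The layer sizes, learning rate, lag count, and the synthetic series are assumptions for demonstration only.

import numpy as np

# Illustrative autoregressive backpropagation sketch (not the paper's setup):
# predict the next index value from the previous `lags` values.

def make_windows(series, lags=5):
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = np.array(series[lags:]).reshape(-1, 1)
    return X, y

def train_mlp(X, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)           # forward pass
        pred = h @ W2 + b2
        err = pred - y                     # squared-error gradient
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return lambda x: np.tanh(x @ W1 + b1) @ W2 + b2

if __name__ == "__main__":
    t = np.arange(300)
    series = np.sin(t / 20) + 0.05 * np.random.default_rng(1).normal(size=300)
    X, y = make_windows(series)
    model = train_mlp(X[:250], y[:250])
    print("one-step forecast:", round(model(X[250]).item(), 3),
          "actual:", round(y[250].item(), 3))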