Journal:Informatica
Volume 18, Issue 3 (2007), pp. 419–446
Abstract
The innovations and improvements in digital imaging sensors and scanners, computer modeling, haptic equipment and e-learning technology, together with the availability of powerful graphics PCs and workstations, make haptic-based rendering methods for e-learning documentation with 3-D modeling functionality feasible. E-learning documentation is a new term in computing, engineering and architecture, related to digital documentation with e-learning functionality, and introduced to the literature for the first time in this paper. In particular, for historical living systems (architectures, monuments, cultural heritage sites), such a methodology must be able to derive pictorial, geometric, spatial, topological, learning and semantic information from the target architectural object (historical living system), in such a way that it can be directly used for e-learning purposes regarding the history, the architecture, the structure and the temporal (time-based) 3-D geometry of the projected historical living system. A practical project is used to demonstrate the functionality and performance of the proposed methodology. In particular, the processing steps from image acquisition to e-learning documentation of the Aghios Achilleios basilica, twin lakes Prespes, Northern Greece, through its 3-D geometric CAAD (Computer-Aided Architectural Design) model and semantic description, are presented. Emphasis is also placed on introducing and documenting the new term e-learning documentation. Finally, for learning purposes related to 3-D modeling accuracy evaluation, a comparison test of two image-based approaches is carried out and discussed.
Journal:Informatica
Volume 18, Issue 3 (2007), pp. 407–418
Abstract
A medical-meteorological weather assessment using hybrid spatial classification of synoptic and meteorological data was carried out. Empirical models for the assessment and forecast of medical-meteorological weather types in the seaside climatic zone of Palanga were developed. The models were based on meteorological factors (atmospheric pressure, relative humidity, temperature, oxygen density in the atmosphere, cyclone fronts, etc.) as well as on the occurrence of meteotropic reactions of cardiovascular function collected over an 8-year period. The empirical models allow one to objectively assess and forecast three medical-meteorological weather types: favourable, unfavourable and very unfavourable. The classification model assessed the weather type as favourable on 56.1%, unfavourable on 31.7% and very unfavourable on 12.2% of days, while the forecast was favourable on 52.4%, unfavourable on 46% and very unfavourable on 1.6% of days. The developed model enables more precise weather estimation and forecasting of meteotropic reactions, promoting the development of preventive measures against cardiovascular complications and reducing the negative impact of weather on the health of patients with coronary artery disease.
Journal:Informatica
Volume 18, Issue 3 (2007), pp. 395–406
Abstract
This paper describes a framework for making up a set of syllables and phonemes that is subsequently used in the creation of acoustic models for continuous speech recognition of Lithuanian. The target is to discover a set of syllables and phonemes that is of the utmost importance for speech recognition. The framework includes operations on the lexicon and on transcriptions of recordings. To facilitate this work, additional programs have been developed that perform word syllabification, lexicon adjustment, etc. A series of experiments was carried out to establish the framework and to model syllable- and phoneme-based speech recognition. The dominance of syllables in the lexicon improved speech recognition results and encouraged us to move away from a strict definition of the syllable, i.e., a syllable becomes a simple sub-word unit derived from a syllable. Two sets of syllables and phonemes and two types of lexicons have been developed and tested. The best recognition accuracy achieved was 56.67% ± 0.33%. The speech recognition system is based on Hidden Markov Models (HMMs). The continuous speech corpus LRN0 was used for the speech recognition experiments.
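As a pointer to how HMM-based decoding works underneath such a recognizer, here is a minimal Viterbi decoder over a toy two-state model. The states, probabilities and observations below are invented for illustration and have nothing to do with the LRN0 corpus or the paper's actual acoustic models.

```python
# Minimal Viterbi decoding over an HMM: find the most likely state path
# for an observation sequence. Toy "silence vs. speech" model.
def viterbi(obs, states, start, trans, emit):
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            # Best predecessor for state s at this step.
            prev = max(states, key=lambda p: V[-1][p] * trans[p][s])
            col[s] = V[-1][prev] * trans[prev][s] * emit[s][o]
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Trace the best path backwards from the most likely final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

states = ["sil", "speech"]
start = {"sil": 0.8, "speech": 0.2}
trans = {"sil": {"sil": 0.6, "speech": 0.4},
         "speech": {"sil": 0.3, "speech": 0.7}}
emit = {"sil": {"quiet": 0.9, "loud": 0.1},
        "speech": {"quiet": 0.2, "loud": 0.8}}

print(viterbi(["quiet", "loud", "loud"], states, start, trans, emit))
# → ['sil', 'speech', 'speech']
```

Real recognizers decode over thousands of context-dependent sub-word units with continuous emission densities, but the dynamic-programming core is this.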
Journal:Informatica
Volume 18, Issue 3 (2007), pp. 375–394
Abstract
The notion of concurrent signatures was introduced by Chen, Kudla and Paterson in their seminal paper at Eurocrypt 2004. In concurrent signature schemes, two entities can produce two signatures that are not binding until an extra piece of information (namely the keystone) is released by one of the parties. Upon release of the keystone, both signatures become binding to their true signers concurrently. At ICICS 2005, two identity-based perfect concurrent signature schemes were proposed by Chow and Susilo. In this paper, we show that these two schemes are unfair, in that the initial signer can cheat the matching signer. We present a formal definition of ID-based concurrent signatures which redresses the flaw in Chow et al.'s definition, and then propose two simple but significant improvements to fix our attacks.
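The keystone mechanism at the heart of concurrent signatures can be sketched with a hash commitment: signatures are produced relative to a keystone fix f = H(k), and releasing k later makes both signatures binding simultaneously. This is a simplified illustration of the commitment step only, not of either paper's full signature scheme.

```python
import hashlib
import secrets

# The initial signer picks a secret keystone and publishes only its hash
# (the "keystone fix") alongside the ambiguous signatures.
keystone = secrets.token_bytes(16)
fix = hashlib.sha256(keystone).hexdigest()

def keystone_valid(k, f):
    """Check a released keystone against the published keystone fix."""
    return hashlib.sha256(k).hexdigest() == f

# Until the keystone is released, nothing binds; once released, anyone can
# verify it matches the fix, and both signatures become binding at once.
print(keystone_valid(keystone, fix))        # True for the genuine keystone
print(keystone_valid(b"guess", fix))        # False for anything else
```

The fairness question the paper studies is precisely about who controls this release step and what the other party can verify beforehand.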
Journal:Informatica
Volume 18, Issue 3 (2007), pp. 363–374
Abstract
Internationalization of compilers and localization of programming languages are not yet a common phenomenon; however, due to the rapid progress of software and programming technologies, they are inevitable. New versions of widely used programming systems already allow identifiers written in the native language and partially support the Unicode standard, but still have many internationalization deficiencies.
The paper analyses the main elements of compiler internationalization and their localization possibilities. Based on contemporary standards, existing practices of software internationalization and current tendencies, recommendations are given on how compilers should be internationalized. The paper argues for the importance of localizing the lexical elements of programming languages, and presents solutions to the problems of portability of programs developed using a localized compiler, as well as the problems of the compiler's compatibility with other compilers.
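Native-language identifiers of the kind discussed above are already legal in some languages; for instance, Python 3 accepts Unicode identifiers under PEP 3131. A tiny sketch with Lithuanian names (chosen here purely for illustration):

```python
# Identifiers in the programmer's native language, here Lithuanian with
# non-ASCII letters — legal in Python 3 under PEP 3131.
žodžių_skaičius = 3          # "word count"
vidutinis_ilgis = 5          # "average length"
iš_viso = žodžių_skaičius * vidutinis_ilgis   # "in total"
print(iš_viso)  # prints 15
```

Note that only identifiers are localized here; the keywords (`print`, operators) remain English, which is exactly the boundary between internationalized compilers and fully localized programming languages that the paper examines.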
Journal:Informatica
Volume 18, Issue 3 (2007), pp. 343–362
Abstract
One of the tasks of data mining is classification, which provides a mapping from attributes (observations) to pre-specified classes. Classification models are built from underlying data. In principle, models built with more data yield better results. However, the relationship between the amount of available data and performance is not well understood, except that the accuracy of a classification model shows diminishing improvements as a function of data size. In this paper, we present an approach for an early assessment of the extracted knowledge (classification models) in terms of performance (accuracy), based on the amount of data used. The assessment is based on observing the performance on smaller sample sizes. The solution is formally defined, and experiments demonstrate the correctness and utility of the approach.
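The core idea, observing performance on smaller samples to project performance on larger ones, can be sketched by fitting a learning curve. A minimal illustration with synthetic accuracies and an assumed inverse power-law curve with a fixed plateau (the paper's actual model and numbers are not reproduced here):

```python
import math

# Observed accuracies of a classifier trained on increasing sample sizes
# (synthetic numbers for illustration).
sizes = [100, 200, 400, 800, 1600]
accs = [0.71, 0.76, 0.80, 0.83, 0.85]

# Assume a learning curve a(n) = a_max - b * n**(-c), with the plateau
# a_max fixed at 0.90 (an assumption; the true plateau is unknown).
a_max = 0.90

# Linearize: log(a_max - a) = log(b) - c * log(n), then ordinary least squares.
xs = [math.log(n) for n in sizes]
ys = [math.log(a_max - a) for a in accs]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
c = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = math.exp(my + c * mx)

def predict(n_samples):
    """Extrapolated accuracy for a larger (unseen) training-set size."""
    return a_max - b * n_samples ** (-c)

print(round(predict(10000), 3))  # projected accuracy at 10,000 samples
```

The fitted curve lets one decide early whether gathering more data is worth the cost, which is the practical use case the abstract describes.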
Journal:Informatica
Volume 18, Issue 3 (2007), pp. 325–342
Abstract
This paper is concerned with an employee scheduling problem involving multiple shifts and work centers, where employees belong to a hierarchy of categories having downward substitutability. An employee in a higher category may perform the duties of an employee in a lower category, but not vice versa. However, a higher-category employee receives higher compensation than a lower-category employee. For a given work center, the demand for each category during a given shift is fixed for weekdays, and may differ from that on weekends. Two objectives need to be achieved: the first is to find a minimum-cost workforce mix of categories of employees that satisfies the specified demand requirements, and the second is to assign the selected employees to shifts and work centers taking into consideration their preferences for shifts, work centers, and off-days. A mixed-integer programming model is initially developed for the problem, based on which a specialized scheduling heuristic is subsequently developed. The computational results reported reveal that the proposed heuristic determines solutions proven to lie within 92–99% of optimality for a number of realistic test problems.
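The first objective, a minimum-cost workforce mix under downward substitutability, can be illustrated with a small greedy sketch. The category names, costs, demands and supplies below are invented, and the paper itself uses a mixed-integer program rather than this greedy rule:

```python
# Categories ordered from highest skill (index 0) downwards. An employee in
# a higher category may cover demand for any lower category, but costs more,
# so each category's demand is filled by its own staff first.
costs = {"senior": 300, "regular": 200, "junior": 120}   # weekly pay (illustrative)
demand = {"senior": 2, "regular": 3, "junior": 5}
supply = {"senior": 4, "regular": 3, "junior": 4}        # available headcount
order = ["senior", "regular", "junior"]

def min_cost_mix(demand, supply):
    """Greedy fill with downward substitution: own category first, then
    spare higher-category staff when the own category runs out."""
    hired = {c: 0 for c in order}
    spare = dict(supply)
    for j, cat in enumerate(order):                 # fill top-down
        need = demand[cat]
        take = min(need, spare[cat])
        hired[cat] += take
        spare[cat] -= take
        need -= take
        for higher in order[:j]:                    # downward substitution
            if need == 0:
                break
            take = min(need, spare[higher])
            hired[higher] += take
            spare[higher] -= take
            need -= take
        if need:
            raise ValueError("demand cannot be met with this supply")
    cost = sum(hired[c] * costs[c] for c in order)
    return hired, cost

print(min_cost_mix(demand, supply))
# One senior covers the junior shortfall: ({'senior': 3, 'regular': 3, 'junior': 4}, 1980)
```

Here substitution only happens when forced, which keeps cost minimal for this toy instance; the paper's MIP handles the full shift/work-center/preference structure that a greedy rule cannot.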
Journal:Informatica
Volume 18, Issue 2 (2007), pp. 305–320
Abstract
This paper considers Lur'e type descriptor systems (LDS). The concept of strongly absolute stability is defined for LDS; this notion is a generalization of absolute stability for Lur'e type standard state-space systems (LSS). A reduced-order LSS is obtained by a standard coordinate transformation, and it is shown that the strongly absolute stability of the LDS is equivalent to the absolute stability of the reduced-order LSS. Using a generalized Lyapunov function, we derive an LMI-based strongly absolute stability criterion. Furthermore, we present the frequency-domain interpretation of the criterion, which shows that the criterion is a generalization of the classical circle criterion. Finally, numerical examples are given to illustrate the effectiveness of the obtained results.
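For readers unfamiliar with the setting, a Lur'e type system is commonly written in the following standard form (generic notation, not necessarily the authors' exact symbols):

```latex
% Lur'e type descriptor system, standard form:
E\dot{x}(t) = A x(t) + B\,\omega(t), \qquad
z(t) = C x(t), \qquad
\omega(t) = -\varphi\bigl(z(t)\bigr),
% E may be singular (the descriptor case); E = I recovers the standard
% state-space system (LSS). The memoryless nonlinearity \varphi is
% confined to a sector [0, K]:
\varphi(z)^{\mathsf{T}}\bigl(\varphi(z) - K z\bigr) \le 0 .
```

Absolute stability asks for stability for every nonlinearity in the sector; the descriptor case additionally has to rule out impulsive behaviour caused by the singular E, which is what the "strongly" qualifier addresses.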
Journal:Informatica
Volume 18, Issue 2 (2007), pp. 289–304
Abstract
Iterative abstraction refinement has emerged in the last few years as the leading approach to software model checking. We present an approach for automatically verifying C programs against safety specifications given as finite state machines. The approach eliminates unneeded variables using a program slicing technique, and then automatically extracts an initial abstract model from the C source code using predicate abstraction and theorem proving. In order to reduce time complexity, we partition the set of candidate predicates into subsets and construct the abstract models independently. On the basis of a counterexample-guided abstraction refinement scheme, the abstraction is refined incrementally until the specification is either satisfied or refuted. Our method can be extended to verifying concurrent C programs by parallel composition.
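The counterexample-guided refinement loop referred to above has a simple generic shape. This skeleton uses toy stand-ins for the model checker, the spuriousness check and the refinement step, none of which reflect the authors' actual components; it only shows the control flow:

```python
# Generic CEGAR loop: model-check the current abstraction, and either report
# a real counterexample or refine the predicate set and try again.
def cegar(program, spec, init_preds, model_check, is_spurious, refine):
    preds = set(init_preds)
    while True:
        cex = model_check(program, spec, preds)
        if cex is None:
            return "spec holds"            # abstraction proves the property
        if not is_spurious(program, cex):
            return ("spec refuted", cex)   # genuine violation found
        preds |= refine(program, cex)      # new predicates kill the spurious cex

# Toy stand-ins: the checker succeeds once the predicate "x > 0" is tracked.
def model_check(program, spec, preds):
    return None if "x > 0" in preds else ["spurious trace"]

def is_spurious(program, cex):
    return True

def refine(program, cex):
    return {"x > 0"}

print(cegar(None, None, set(), model_check, is_spurious, refine))
# → spec holds (after one refinement round)
```

In the paper, `model_check` operates on a predicate abstraction built with a theorem prover, and partitioning the candidate predicates keeps each abstraction step tractable.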
Journal:Informatica
Volume 18, Issue 2 (2007), pp. 279–288
Abstract
In this paper, an exact and complete analysis of the Lloyd–Max algorithm and its initialization is carried out. An effective method for initializing the Lloyd–Max algorithm for optimal scalar quantization of a Laplacian source is proposed. The proposed method is a very simple way of making an intelligent guess at the starting points for the iteration. Namely, the initial values for the iterative Lloyd–Max algorithm can be determined from the values of the compandor's parameters. It is demonstrated that, following this logic, the proposed method provides rapid convergence of the Lloyd–Max algorithm.
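For learning purposes, the Lloyd–Max iteration itself is easy to sketch empirically: alternate the midpoint (threshold) and centroid (level) conditions on samples drawn from the source. The crude uniform initialization below is exactly the kind of guess the paper's compandor-based initialization is designed to improve on; this is an illustrative sketch, not the authors' implementation:

```python
import random

random.seed(0)
# Draw samples from a zero-mean, unit-variance Laplacian source
# (exponential magnitude with rate sqrt(2), random sign).
samples = [random.expovariate(2 ** 0.5) * random.choice((-1, 1))
           for _ in range(20000)]

def lloyd_max(samples, levels, iters=50):
    """Iterate the two Lloyd-Max optimality conditions on empirical data:
    thresholds at midpoints between levels, levels at centroids of cells."""
    levels = sorted(levels)
    for _ in range(iters):
        # Nearest-neighbour condition: thresholds midway between levels.
        t = [(a + b) / 2 for a, b in zip(levels, levels[1:])]
        # Centroid condition: each level becomes the mean of its cell.
        cells = [[] for _ in levels]
        for x in samples:
            i = sum(x > th for th in t)    # index of the cell containing x
            cells[i].append(x)
        levels = [sum(c) / len(c) if c else lv
                  for c, lv in zip(cells, levels)]
    return levels

# Crude uniform initial guess for a 4-level quantizer.
init = [-1.5, -0.5, 0.5, 1.5]
print([round(lv, 3) for lv in lloyd_max(samples, init)])
```

On these samples the levels settle near the known symmetric optimum for a 4-level Laplacian quantizer; a better initial guess, such as the compandor-derived one, simply reaches the fixed point in fewer iterations.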