Journal: Informatica
Volume 18, Issue 4 (2007), pp. 547–568
Abstract
A modified version of the Bellare and Rogaway (1993) adversarial model is encoded using Asynchronous Product Automata (APA). A model-checking tool, the Simple Homomorphism Verification Tool (SHVT), is then used to perform state-space analysis on the automata in the setting of a planning problem. The three-party identity-based secret public key (3P-ID-SPK) protocol of Lim and Paterson (2006), which claims to provide explicit key authentication, is used as a case study. We refute its heuristic security argument by using SHVT to reveal a previously unpublished flaw in the protocol. We then show how our approach can automatically repair the protocol. This is, to the best of our knowledge, the first work that integrates an adversarial model from the computational complexity paradigm with an automated tool from the computer security paradigm to analyse protocols in an artificial intelligence problem setting (the planning problem) and, more importantly, to repair protocols.
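To make the planning-problem framing concrete, here is a minimal Python sketch of attack search cast as planning: a state is a set of facts, each adversary action has preconditions and effects, and an "attack" is a plan that reaches a goal fact. The action and fact names below are hypothetical toy examples, not SHVT's APA encoding and not the 3P-ID-SPK protocol.

```python
from collections import deque

# Toy actions: (name, preconditions, effects) over string-valued facts.
# These are invented for illustration only.
ACTIONS = [
    ("intercept_msg1", {"sent(A,B,m1)"}, {"knows(E,m1)"}),
    ("replay_msg1",    {"knows(E,m1)"},  {"sent(E,B,m1)"}),
    ("b_accepts",      {"sent(E,B,m1)"}, {"accepted(B,key)"}),
]

def find_attack(initial, goal):
    """Breadth-first search for a shortest action sequence reaching `goal`."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:          # all goal facts hold: plan = attack trace
            return plan
        for name, pre, eff in ACTIONS:
            if pre <= state:       # action applicable in this state
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                    # no attack found within this model

print(find_attack({"sent(A,B,m1)"}, {"accepted(B,key)"}))
# -> ['intercept_msg1', 'replay_msg1', 'b_accepts']
```

Under this framing a returned plan is a candidate attack trace; one natural reading of automated repair in this setting is to modify the action set (i.e., the protocol) until no plan reaches the goal.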
Journal: Informatica
Volume 18, Issue 4 (2007), pp. 535–546
Abstract
In 2004, Abe et al. proposed a threshold signer-ambiguous signature scheme from a variety of keys. Their scheme generalizes the ring signature scheme and allows the key types to be based on trapdoor one-way permutations (TOWP) or sigma-protocols, including Schnorr's signature scheme. However, the signed message is visible to everyone, which may result in disputes. In this paper, we present a novel threshold signer-ambiguous signature scheme from a variety of keys that conceals the signed message and keeps the identities of the receivers secret.
Journal: Informatica
Volume 18, Issue 4 (2007), pp. 511–534
Abstract
The advance of the Web has rapidly and significantly changed the way information is organized, shared and distributed. The next generation of the web, the semantic web, seeks to make information more usable by machines by introducing a more rigorous structure based on ontologies. In this context we propose a novel and integrated approach for the semi-automated extraction of an ontology-based semantic web from data-intensive web applications, thus making web content machine-understandable. Our approach is based on the idea that semantics can be extracted by applying a reverse engineering technique to the structures and instances of HTML forms, which are the most common interface for communicating with relational databases in current data-intensive web applications. These semantics are exploited to produce, over several steps, a personalised ontology.
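As a rough illustration of the form-based reverse engineering step, the sketch below harvests field names from an HTML form with Python's standard-library parser and treats them as candidate ontology properties. It is illustrative only: the sample form is invented, the class name `Book` is a hypothetical placeholder, and the paper's actual extraction pipeline is considerably richer.

```python
from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    """Collect (name, widget-type) pairs for the fields of an HTML form."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select", "textarea"):
            attrs = dict(attrs)
            name = attrs.get("name")
            if name and attrs.get("type") != "submit":
                self.fields.append((name, attrs.get("type", tag)))

html = """
<form action="/search_books">
  <input type="text" name="title">
  <input type="text" name="author">
  <select name="genre"></select>
  <input type="submit" value="Search">
</form>
"""

parser = FormFieldExtractor()
parser.feed(html)

concept = "Book"  # hypothetical class; a real pipeline would infer it
for name, kind in parser.fields:
    print(f"candidate datatype property: {concept}.{name}  (widget: {kind})")
```

Each harvested field becomes a candidate datatype property of a candidate class; later steps of such a pipeline would refine these candidates against the form instances and the underlying database.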
Journal: Informatica
Volume 18, Issue 4 (2007), pp. 483–510
Abstract
Semantic-based storage and retrieval of multimedia data requires accurate annotation of the data. Annotation can be done either manually or automatically. The retrieval performance of approaches based on manual annotation is good compared to that of approaches based on automatic annotation. However, manual annotation is time-consuming and labor-intensive, so it is difficult to apply to huge volumes of multimedia data. Automatic annotation, on the other hand, commonly annotates multimedia data based on low-level features, which fail to capture the semantics of the data. To our knowledge, no existing system automatically annotates multimedia data accurately based on extracted semantics. In this paper, we perform automatic annotation of images by extracting their semantics (high-level features) with the help of semantic libraries. Semantic libraries use semantic graphs; each graph consists of related concepts along with their relationships. We also demonstrate, with the help of a case study, that our proposed approach improves the semantic-based retrieval of multimedia data.
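The following minimal sketch shows how a semantic graph of related concepts could drive annotation expansion. The graph content and the two-step expansion depth are illustrative assumptions, not the paper's semantic libraries.

```python
# Toy "semantic library": a graph of concepts with typed relationships.
SEMANTIC_GRAPH = {
    "beach": [("near", "sea"), ("has", "sand"), ("activity", "swimming")],
    "sea":   [("contains", "water")],
}

def annotate(detected_concepts, depth=2):
    """Expand detected concepts with related ones from the semantic graph."""
    annotations = set(detected_concepts)
    frontier = list(detected_concepts)
    for _ in range(depth):
        nxt = []
        for concept in frontier:
            for relation, neighbour in SEMANTIC_GRAPH.get(concept, []):
                if neighbour not in annotations:
                    annotations.add(neighbour)
                    nxt.append(neighbour)
        frontier = nxt
    return annotations

print(annotate({"beach"}))
# -> {'beach', 'sea', 'sand', 'swimming', 'water'}
```

Here a single detected concept ("beach") is enriched into a set of semantically related annotations, which is the kind of high-level index that can improve semantic retrieval over raw low-level features.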
Journal: Informatica
Volume 18, Issue 3 (2007), pp. 463–478
Abstract
Fractal image compression is an engaging and worthwhile technology that may be successfully applied to still image coding, especially at high compression ratios. Unfortunately, the large amount of computation needed at the compression (encoding) stage is a major obstacle that must be overcome. In spite of numerous and many-sided attempts to accelerate fractal image compression, the "speed problem" is far from solved.
In this paper, a new version (strategy) of the fractal image encoding technique, adapted to processing bi-level (black-and-white) images, is presented. The strategy employs a necessary image similarity condition based on invariant image parameters (image smoothness indices, image coloration ratios, etc.). It is shown that two images cannot be similar (in the mean-squared-error sense) if their respective parameter values differ by more than a fixed threshold. In the proposed strategy, this necessary similarity condition plays a key role: it speeds up the search for optimal pairings (range block, domain block) by narrowing the domain pool (search region) for each range block. Experimental results show that the new fractal image encoding strategy shortens bi-level image compression times considerably. Exceptionally good results (compression times and quality of the restored images) are obtained for silhouette images.
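As an illustration of how such a necessary condition narrows the domain pool, the sketch below uses a single cheap invariant, the fraction of black pixels in a bi-level block (one reading of a coloration ratio), and skips the expensive mean-squared-error comparison whenever the invariants of a range block and a domain block differ by more than a threshold. The threshold `tau` and the block representation are assumptions for illustration.

```python
import numpy as np

def coloration_ratio(block):
    """Fraction of black (1) pixels in a 0/1 bi-level block."""
    return block.mean()

def best_domain(range_block, domain_blocks, tau=0.1):
    """Least-MSE domain block for a range block, pruned by the invariant."""
    r = coloration_ratio(range_block)
    best, best_err = None, float("inf")
    for i, d in enumerate(domain_blocks):
        if abs(coloration_ratio(d) - r) > tau:
            continue  # necessary condition fails: blocks cannot be similar
        err = np.mean((range_block.astype(int) - d.astype(int)) ** 2)
        if err < best_err:
            best, best_err = i, err
    return best, best_err
```

The pruning is safe for binary blocks because the MSE equals the fraction of differing pixels, which can never be smaller than the difference between the two blocks' black-pixel fractions; any domain block rejected by the test would necessarily have MSE above `tau`.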
Journal: Informatica
Volume 18, Issue 3 (2007), pp. 457–462
Abstract
In this paper we recall the notion of weak decomposition and some necessary and sufficient conditions for a graph to admit such a decomposition. We then introduce a recognition algorithm for diamond-free graphs that preserves the combinatorial structure of the graph by means of the decomposition, together with an easy way to determine the clique number of a diamond-free graph.
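For comparison, diamond-freeness itself admits a simple brute-force test; this is not the decomposition-based algorithm of the paper. A graph contains an induced diamond (K4 minus an edge) exactly when some edge has two non-adjacent common neighbours, so it suffices to check that the common neighbourhood of every edge's endpoints induces a clique.

```python
from itertools import combinations

def is_diamond_free(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    for u in adj:
        for v in adj[u]:
            if u < v:  # consider each edge once
                common = adj[u] & adj[v]
                for x, y in combinations(common, 2):
                    if y not in adj[x]:
                        return False  # u, v, x, y induce a diamond
    return True

# A diamond: K4 minus the edge 3-4.
diamond = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2}, 4: {1, 2}}
print(is_diamond_free(diamond))  # False
```

This naive check costs roughly O(m·n²); the point of decomposition-based recognition is to do better while also exposing structure such as the clique number.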
Journal: Informatica
Volume 18, Issue 3 (2007), pp. 447–456
Abstract
This study uses an object-extraction technique to extract the thumb, index, middle, ring, and little fingers from hand images. The algorithm developed in this study finds precise locations of fingertips and finger-to-finger valleys. The extracted fingers contain many useful geometric features, which can be used for person identification. A geometry descriptor is used to transfer the geometric features of the fingers to another feature domain for image comparison, and image subtraction is used to examine the difference between two fingers. In this way, the fingers themselves serve as features for recognizing different persons.
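A minimal sketch of the image-subtraction step follows; the binary-image representation and the acceptance threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def subtraction_score(finger_a, finger_b):
    """Fraction of pixels that differ between two aligned 0/1 finger images."""
    diff = np.abs(finger_a.astype(int) - finger_b.astype(int))
    return diff.mean()

def same_finger(finger_a, finger_b, threshold=0.05):
    """Accept a match when the subtraction residue is small (threshold
    is a hypothetical value for illustration)."""
    return subtraction_score(finger_a, finger_b) < threshold
```

A matching pair of segmented, aligned fingers leaves little residue after subtraction, while fingers from different persons leave a large one.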
Journal: Informatica
Volume 18, Issue 3 (2007), pp. 419–446
Abstract
The innovations and improvements in digital imaging sensors and scanners, computer modeling, haptic equipment and e-learning technology, as well as the availability of many powerful graphics PCs and workstations, make haptic-based rendering methods for e-learning documentation with 3-D modeling functionality feasible. E-learning documentation is a new term in computing, engineering and architecture, related to digital documentation with e-learning functionality, and introduced to the literature for the first time in this paper. In particular, for historical living systems (architectures, monuments, cultural heritage sites), such a methodology must be able to derive pictorial, geometric, spatial, topological, learning and semantic information from the target architectural object (the historical living system), in such a way that it can be used directly for e-learning purposes regarding the history, the architecture, the structure and the temporal (time-based) 3-D geometry of the projected historical living system. A practical project is used to demonstrate the functionality and performance of the proposed methodology. In particular, the processing steps from image acquisition to e-learning documentation of the Aghios Achilleios basilica, at the twin Prespes lakes in Northern Greece, through its 3-D geometric CAAD (Computer-Aided Architectural Design) model and semantic description, are presented. Emphasis is also placed on introducing and documenting the new term e-learning documentation. Finally, for learning purposes related to 3-D modeling accuracy evaluation, a comparison test of two image-based approaches is carried out and discussed.
Journal: Informatica
Volume 18, Issue 3 (2007), pp. 407–418
Abstract
A medical-meteorological weather assessment using hybrid spatial classification of synoptic and meteorological data was performed. Empirical models for the assessment, as well as the forecast, of the medical-meteorological weather type in the seaside climatic zone of Palanga were developed. They were based on meteorological factors (atmospheric pressure, relative humidity, temperature, atmospheric oxygen density, cyclone fronts, etc.) as well as on the occurrence of meteotropic reactions of cardiovascular function collected over an 8-year period. The empirical models allow three medical-meteorological weather types to be objectively assessed and forecast: favourable, unfavourable and very unfavourable weather. The classification model assessed the weather type as favourable on 56.1%, unfavourable on 31.7% and very unfavourable on 12.2% of days, while the forecast was favourable on 52.4%, unfavourable on 46% and very unfavourable on 1.6% of days. The developed model enables more precise weather estimation and forecasting of meteotropic reactions, supporting preventive measures against cardiovascular complications and reducing the negative impact of weather on the health of patients with coronary artery disease.
Journal: Informatica
Volume 18, Issue 3 (2007), pp. 395–406
Abstract
This paper describes a framework for building a set of syllables and phonemes that is subsequently used to create acoustic models for continuous speech recognition of Lithuanian. The goal is to discover the set of syllables and phonemes that is most important for speech recognition. The framework includes operations on the lexicon and on the transcriptions of recordings. To facilitate this work, additional programs have been developed that perform word syllabification, lexicon adjustment, etc. A series of experiments was carried out to establish the framework and to model syllable- and phoneme-based speech recognition. The dominance of syllables in the lexicon improved speech recognition results and encouraged us to move away from a strict definition of the syllable; i.e., a syllable becomes a simple sub-word unit derived from a syllable. Two sets of syllables and phonemes and two types of lexicons were developed and tested. The best recognition accuracy achieved was 56.67% ± 0.33. The speech recognition system is based on Hidden Markov Models (HMMs). The continuous speech corpus LRN0 was used for the speech recognition experiments.
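For illustration only, the sketch below shows the kind of crude, rule-based word syllabification such a framework might start from. It is not the paper's syllabification program: the Lithuanian vowel set and the simple onset-maximising rule (all intervening consonants go to the following syllable) are assumptions.

```python
# Hypothetical vowel inventory; real Lithuanian syllabification also
# handles diphthongs and sonorant codas, which this sketch ignores.
VOWELS = set("aeiouyąęėįųū")

def syllabify(word):
    """Split a lowercase word into crude consonant-vowel syllables."""
    syllables, current = [], ""
    for i, ch in enumerate(word):
        current += ch
        nxt = word[i + 1] if i + 1 < len(word) else ""
        # Close the syllable after a vowel when a consonant followed by
        # more vowel material (a new onset) comes next.
        if ch in VOWELS and nxt and nxt not in VOWELS:
            rest = word[i + 1:]
            if any(c in VOWELS for c in rest):
                syllables.append(current)
                current = ""
    syllables.append(current)
    return syllables

print(syllabify("kalba"))    # ['ka', 'lba']
print(syllabify("mokykla"))  # ['mo', 'ky', 'kla']
```

Units produced this way can then populate a sub-word lexicon; the paper's looser notion of a syllable as "a simple sub-word unit derived from a syllable" gives exactly this kind of freedom to trade linguistic fidelity for recognition accuracy.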