Journal: Informatica
Volume 17, Issue 4 (2006), pp. 535–550
Abstract
Matrix transpose in parallel systems typically involves costly all-to-all communication. In this paper, we provide a comparative characterization of various efficient algorithms for transposing small and large matrices on the popular symmetric multiprocessor (SMP) architecture, which carries a relatively low communication cost due to its large aggregate bandwidth and low-latency inter-process communication. We analyze the send/receive costs and the memory requirements of these matrix-transpose algorithms. We then propose an adaptive algorithm that minimizes the overhead of the matrix transpose operation given parameters such as the data size, the number of processors, the start-up time, and the effective communication bandwidth.
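The trade-off the adaptive algorithm navigates can be illustrated with the standard linear communication-cost model (start-up time plus bytes divided by bandwidth). The sketch below is illustrative only; the function name and parameters are assumptions, not the paper's notation, and it assumes a row-distributed matrix where each of the p processes exchanges one block with each of the other p-1 processes.

```python
# Hypothetical linear cost model for an all-to-all matrix transpose.
# Each of the p processes exchanges a block of n*n/p^2 elements with
# each of the p-1 other processes; exchanges proceed in parallel rounds.
def transpose_cost(n, p, startup, bandwidth, elem_bytes=8):
    """Estimated communication time (seconds) to transpose an n x n
    matrix distributed by rows over p processes."""
    block_elems = (n * n) / (p * p)                       # elements per pairwise exchange
    per_message = startup + (block_elems * elem_bytes) / bandwidth
    return (p - 1) * per_message                          # p-1 exchange rounds
```

A model like this makes the adaptive choice concrete: for small matrices the (p-1) start-up terms dominate, favoring algorithms with fewer, larger messages, while for large matrices the bandwidth term dominates.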
Journal: Informatica
Volume 17, Issue 4 (2006), pp. 519–534
Abstract
This paper proposes a threshold key escrow scheme based on pairings. It tolerates both a passive adversary that can read any internal data of corrupted key escrow agents and an active adversary that can make corrupted servers deviate from the protocol. The scheme is secure against threshold adaptive chosen-ciphertext attack. A formal proof of security is presented in the random oracle model, assuming the decisional Bilinear Diffie–Hellman problem is computationally hard.
Journal: Informatica
Volume 17, Issue 4 (2006), pp. 503–518
Abstract
The quality of software engineering projects often suffers from the large gap between the way stakeholders present their requirements and the way analysts capture and express them. With this problem in mind, a new method for business-rules-driven IS requirements specification has been developed. This paper presents the architecture of the requirements repository, which is at the core of the proposed method. The repository model supports the storage and management of all components of the captured requirements, including functions, business decisions, data sources, conceptual data model elements, and business rules and their templates. Important aspects of the implementation of the specialised requirements specification tool are also reviewed.
Journal: Informatica
Volume 17, Issue 4 (2006), pp. 481–502
Abstract
In this paper, Dynamic Source Routing (DSR) is optimized using a new link cache structure and a source-transparent route maintenance method. The new link cache uses memory effectively by caching routes in adjacency-list data structures. It selects the shortest-hop, least congested path, which in turn reduces the control packets (route request and route reply packets) and increases the number of data packets forwarded by the nodes. To address DSR's route maintenance problem under high mobility, a source-transparent route maintenance method is introduced, comprising two schemes: cache validation and local route repair. These schemes reduce packet loss and end-to-end delay and increase throughput.
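Since the link cache stores the topology as adjacency lists, a shortest-hop route can be recovered with a breadth-first search over the cached links. The sketch below is an illustrative baseline only, not the paper's algorithm: it finds the minimum-hop path and omits the congestion weighting the abstract also mentions; all names are assumptions.

```python
from collections import deque

# Illustrative shortest-hop lookup over an adjacency-list link cache.
# links: dict mapping a node to the list of nodes it has cached links to.
def shortest_hop_route(links, src, dst):
    """Return the minimum-hop route from src to dst, or None if no
    cached route exists (DSR would then broadcast a route request)."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                      # reconstruct route by parents
            route = []
            while node is not None:
                route.append(node)
                node = parent[node]
            return route[::-1]
        for nbr in links.get(node, ()):
            if nbr not in parent:            # first visit = fewest hops (BFS)
                parent[nbr] = node
                queue.append(nbr)
    return None
```

A least-congested tie-break, as the paper's cache uses, could be layered on by expanding equal-hop candidates in order of a per-link congestion metric.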
Journal: Informatica
Volume 17, Issue 4 (2006), pp. 467–480
Abstract
We revisit the password-based group key exchange protocol due to Lee et al. (2004), which carries a claimed proof of security in the Bresson et al. model under the intractability of the Decisional Diffie–Hellman (DDH) and Computational Diffie–Hellman (CDH) problems. We reveal a previously unpublished flaw in the protocol and its proof, demonstrating that the protocol violates the definition of security in the model. To provide better insight into the protocol and proof failures, we present a fixed protocol. We hope our analysis will help similar mistakes to be avoided in the future. We also revisit protocol 4 of Song and Kim (2000) and reveal a previously unpublished flaw in that protocol (namely, a reflection attack).
Journal: Informatica
Volume 17, Issue 3 (2006), pp. 445–462
Abstract
The need for information security is ever more widespread, especially in hardware-based implementations such as smart-card chips for wireless applications and cryptographic accelerators. Fast modular exponentiation algorithms are of practical significance in public-key cryptosystems, and the RSA cryptosystem is one of the most widely used technologies for achieving information security. The main task of the encryption and decryption engine of the RSA cryptosystem is to compute M^E mod N. Because the bit-lengths of M, E, and N are typically 512 to 1024 bits, these computations are time-consuming. In this paper, an efficient technique for parallel computation of the modular exponentiation is proposed that reduces the time complexity, achieving speedup ratios of 1.06 and up to 2.75. The Savas–Tenca–Koc algorithm provides a multiplier with an insignificant increase in chip area (about 2.8%) and no increase in time delay; our proposed technique has lower time complexity than the Savas–Tenca–Koc algorithm and improves the efficiency of the RSA cryptosystem.
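For context, the sequential baseline that parallel modular-exponentiation schemes improve on is the standard binary (square-and-multiply) method, which computes M^E mod N in O(log E) modular multiplications. This sketch shows that baseline, not the paper's parallel technique:

```python
# Right-to-left binary (square-and-multiply) modular exponentiation:
# the classic sequential baseline for computing M^E mod N.
def mod_exp(m, e, n):
    """Compute m**e mod n using O(bit-length of e) modular multiplications."""
    result = 1
    base = m % n
    while e > 0:
        if e & 1:                     # current exponent bit is 1: multiply
            result = (result * base) % n
        base = (base * base) % n      # square for the next bit position
        e >>= 1
    return result
```

With a 1024-bit exponent this is roughly 1024 squarings plus up to 1024 multiplications, which is why hardware and parallel speedups of even a few percent matter for RSA throughput.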
Journal: Informatica
Volume 17, Issue 3 (2006), pp. 427–444
Abstract
Most Takagi–Sugeno fuzzy (TSF) systems found in the literature use only linear functions of the input variables as rule consequents and can be called TSF models with fixed coefficients (TSFMFC). This paper presents a TSF model with variable coefficients (TSFMVC), which can more closely approximate a class of nonlinear systems, nonlinear dynamic systems, and nonlinear control systems. It is also shown that TSFMFC is a special case of TSFMVC. Moreover, a variable-gain TSF controller (VGTSFC) is defined; simulation results show that it performs better than the fixed-gain TSF controller (FGTSFC).
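The fixed-coefficient case the paper generalizes can be sketched as standard Takagi–Sugeno inference: the output is the firing-strength-weighted average of each rule's linear consequent. This is a minimal single-input illustration with assumed names; the variable-coefficient model would additionally let a and b depend on the operating point.

```python
# Minimal sketch of Takagi–Sugeno inference with fixed-coefficient linear
# consequents (the TSFMFC case): y = sum_i w_i*(a_i*x + b_i) / sum_i w_i.
def tsf_output(x, rules):
    """rules: list of (membership_fn, a, b); rule i fires with strength
    w_i = membership_fn(x) and contributes consequent y_i = a*x + b."""
    num = den = 0.0
    for mu, a, b in rules:
        w = mu(x)                 # firing strength of this rule at input x
        num += w * (a * x + b)
        den += w
    return num / den              # weighted average of rule consequents
```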
Journal: Informatica
Volume 17, Issue 3 (2006), pp. 407–426
Abstract
Several best-effort schemes (next-hop routing) are used to transport data in the Internet, and some of them do not perform flexible route computations to cope with network dynamics. With the recent trend toward programmable networks, mobile agent technology appears to support a more flexible, adaptable, and distributed mechanism for routing. In this paper, we propose a Mobile Agent based Routing (MAR) scheme with objectives similar to those of the Routing Information Protocol (RIP). A comparative study of the two schemes (MAR and RIP) in terms of communication overhead, convergence time, network bandwidth utilization, and average session delay is presented. The results demonstrate that MAR performs better than RIP: it has lower communication overhead and convergence time, and it offers more flexibility and adaptability. In addition, this paper presents MAR-based network load balancing.
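The RIP baseline against which MAR is compared is a distance-vector protocol: each node periodically merges a neighbour's advertised distance vector into its own table by Bellman–Ford relaxation. The sketch below shows that single merge step; it is an illustrative simplification (no split horizon, hop-count limit, or timeouts) and all names are assumptions.

```python
# Illustrative distance-vector merge step (the core of RIP-style routing):
# on receiving a neighbour's advertised costs, adopt any route that is
# cheaper via that neighbour than the route currently in the table.
def dv_update(own_table, link_cost, neighbour_table):
    """Return (updated_table, changed) after merging one advertisement.
    Tables map destination -> cost; link_cost is the cost to the neighbour."""
    table = dict(own_table)
    changed = False
    for dest, adv_cost in neighbour_table.items():
        candidate = link_cost + adv_cost          # cost via this neighbour
        if candidate < table.get(dest, float('inf')):
            table[dest] = candidate
            changed = True
    return table, changed
```

Convergence time for such a scheme is the number of merge rounds until no table changes, which is one of the metrics on which the paper compares MAR and RIP.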
Journal: Informatica
Volume 17, Issue 3 (2006), pp. 393–406
Abstract
This work takes up the labeling of planar graphs: the p vertices, q edges, and f internal faces are labeled so that the weights of the faces form an arithmetic progression with common difference d. If d=0, the planar graph is said to have an Inner Magic labeling; if d≠0, an Inner Antimagic labeling. Some new kinds of graphs, derived from Wheels by adding vertices in a certain way, are introduced under the proposed names Flower-1 and Flower-2. This paper presents algorithms to obtain the Inner Magic and Inner Antimagic labelings for Wheels and the Inner Antimagic labelings for Flower-1 and Flower-2. The labelings thus found show considerable regularity.
Journal: Informatica
Volume 17, Issue 3 (2006), pp. 381–392
Abstract
This paper discusses the determination of the spare inventory level for a multiechelon repairable item inventory system, which has several bases and a central depot with emergency lateral transshipment capability. Previous research is extended by removing a restrictive assumption on the repair time distribution. A mathematical model that allows a general repair time distribution, as well as an algorithm to find a solution of the model, is developed. Thus, the main focus of this study is to improve the accuracy of previous models and to estimate the gain in accuracy from use of the current methodology. Computational experiments are performed to estimate the accuracy improvement and to determine the managerial implications of the results.
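The insensitivity result that makes a general repair-time distribution tractable in such models is Palm's theorem: with Poisson demand of rate λ and i.i.d. repair times of mean T, the steady-state number of units in repair is Poisson with mean λT regardless of the repair-time distribution. The sketch below uses that fact to compute expected backorders at a base with s spares, in the style of METRIC-type models; it is an illustrative single-location computation, not the paper's multi-echelon algorithm, and all names are assumptions.

```python
from math import exp

# Expected backorders at a single location with s spares, assuming Poisson
# demand (rate lam) and i.i.d. repair times of mean t_repair. By Palm's
# theorem the pipeline (units in repair) is Poisson with mean lam*t_repair.
def expected_backorders(lam, t_repair, s, terms=200):
    """EBO(s) = sum over k > s of (k - s) * P(pipeline = k)."""
    mean = lam * t_repair
    ebo, p = 0.0, exp(-mean)          # p = P(pipeline = 0)
    for k in range(terms):
        if k > s:
            ebo += (k - s) * p
        p *= mean / (k + 1)           # Poisson recurrence: P(k) -> P(k+1)
    return ebo
```

Stocking decisions then trade the marginal reduction in EBO(s) against holding cost; the multi-echelon model additionally splits the pipeline between depot repair and lateral transshipment.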