Journal:Informatica
Volume 25, Issue 1 (2014), pp. 113–137
Abstract
This paper presents an adaptive image-watermarking technique based on a just-noticeable distortion (JND) profile and a fuzzy inference system (FIS) optimized with a genetic algorithm (GA), referred to here as the AIWJFG technique. During embedding, it inserts a watermark into an image by referring to the image's JND profile, which makes the watermark more imperceptible. It employs image features and local statistics to construct an FIS, and then exploits the FIS to extract watermarks without the original images. In addition, the FIS can be further optimized by a GA to improve its watermark-extraction performance remarkably. Experimental results demonstrate that the AIWJFG technique not only makes the embedded watermarks more imperceptible but also possesses adaptive and robust capabilities to resist the image-manipulation attacks considered in the paper.
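The JND-guided embedding idea can be illustrated with a minimal sketch. The actual JND model, watermark format, and embedding strength of the AIWJFG technique are not specified in the abstract; the crude luminance-masking profile, the `alpha` parameter, and the bipolar-bit modulation below are illustrative assumptions only.

```python
import numpy as np

def jnd_profile(image):
    """Hypothetical, highly simplified JND estimate: assume pixels far from
    mid-grey (128) tolerate proportionally larger distortion."""
    return 1.0 + 0.05 * np.abs(image.astype(float) - 128.0)

def embed_watermark(image, bits, alpha=1.0):
    """Shift each pixel by +/- alpha * JND according to its watermark bit,
    so the change stays within the (assumed) perceptibility threshold."""
    signs = np.where(bits > 0, 1.0, -1.0)
    return image.astype(float) + alpha * jnd_profile(image) * signs

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8))       # toy 8x8 greyscale image
bits = rng.integers(0, 2, size=(8, 8))        # one watermark bit per pixel
wm = embed_watermark(img, bits)
```

Because the perturbation is scaled by the local JND value, flat mid-grey regions (where distortion is most visible under this assumed model) receive the smallest changes.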
Journal:Informatica
Volume 15, Issue 4 (2004), pp. 551–564
Abstract
Text categorization – the assignment of natural language documents to one or more predefined categories based on their semantic content – is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of a decision tree classifier to initialize a multilayer neural network for improving text categorization accuracy. Each path of the decision tree, from the root node to a final leaf, is used to initialize a single unit. Growing decision trees with increasingly larger amounts of training data results in larger decision tree sizes; as a consequence, the neural networks constructed from these decision trees are often larger and more complex than necessary. An appropriate choice of certainty factor is able to produce trees that are essentially constant in size in the face of increasingly larger training sets. Experimental results support the conclusion that error-based pruning can be used to produce appropriately sized trees, which are directly mapped to an optimal neural network architecture with good accuracy. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with a multilayer neural network initialized by the traditional random method and with decision tree classifiers.
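A common way to realize such a mapping (the paper's exact construction is not given in the abstract, so this is a sketch of the general tree-to-network idea) is to turn each decision node's test `x[f] > t` into one hidden unit: weight 1 on feature `f` and bias `-t`, so the unit's pre-activation is positive exactly when the test passes. The node list below is hypothetical.

```python
import numpy as np

# Hypothetical pruned tree, flattened to its decision nodes:
# each entry is (feature_index, threshold) for a test "x[f] > t".
tree_nodes = [(0, 0.5), (2, 1.2), (1, -0.3)]

def init_hidden_layer(tree_nodes, n_features):
    """Map each decision node onto one hidden unit: weight 1.0 on the
    tested feature, bias -threshold. All other weights start at zero,
    replacing random initialization with the tree's learned structure."""
    W = np.zeros((len(tree_nodes), n_features))
    b = np.zeros(len(tree_nodes))
    for i, (f, t) in enumerate(tree_nodes):
        W[i, f] = 1.0
        b[i] = -t
    return W, b

W, b = init_hidden_layer(tree_nodes, n_features=4)
```

The network size then tracks the tree size, which is why pruning the tree (as the abstract argues) directly controls the complexity of the resulting network.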
Journal:Informatica
Volume 13, Issue 4 (2002), pp. 465–484
Abstract
This article presents research using artificial neural network (ANN) methods for the compound (technical and fundamental) analysis and prognosis of Lithuania's National Stock Exchange (LNSE) indices LITIN, LITIN-A and LITIN-VVP. We employed initial pre-processing (entropy and correlation analysis) to filter the model input variables (LNSE indices, macroeconomic indicators, and stock exchange indices of other countries, such as the USA – Dow Jones and S&P, EU – Eurex, Russia – RTS). Investigations of the best approximation and forecasting capabilities were performed using different backpropagation ANN learning algorithms, configurations, iteration numbers, data form-factors, etc. A wide spectrum of results showed a high sensitivity to the ANN parameters. The performances of the ANN autoregressive, autoregressive causative and causative trend models in approximation and forecasting were compared by linear discriminant analysis.
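The correlation-based part of such input filtering can be sketched as follows; the threshold value, series names, and toy data are illustrative assumptions, not the paper's actual variables or criteria.

```python
import numpy as np

def select_inputs(candidates, target, min_corr=0.3):
    """Keep candidate input series whose absolute Pearson correlation with
    the target index exceeds min_corr (threshold chosen for illustration)."""
    kept = {}
    for name, series in candidates.items():
        r = np.corrcoef(series, target)[0, 1]
        if abs(r) >= min_corr:
            kept[name] = r
    return kept

# Toy data: a foreign index tracking the target, plus an unrelated series.
t = np.arange(20, dtype=float)
target = t + 0.1
candidates = {
    "dow_jones": t * 2.0,                 # perfectly correlated with target
    "noise": np.array([1.0, -1.0] * 10),  # alternating, near-zero correlation
}
picked = select_inputs(candidates, target)
```

An entropy-based filter would work analogously, scoring each candidate by its information content rather than by linear correlation.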
Journal:Informatica
Volume 13, Issue 2 (2002), pp. 177–208
Abstract
The objective of expert systems is to use Artificial Intelligence tools to solve problems within specific, predefined applications. Even though such systems are widely applied in diverse domains, such as manufacturing or control systems, there is still an important gap in the development of a theory applicable to describing the involved problems in a unified way. This paper is an attempt to supply a simple formal description of expert systems, together with an application to a robot manipulator case.
Journal:Informatica
Volume 5, Issues 1-2 (1994), pp. 241–255
Abstract
Neural networks are often characterized as highly nonlinear systems with a fairly large number of parameters (on the order of 10^3–10^4). This fact makes the optimization of the parameters a nontrivial problem. The astonishing fact, however, is that local optimization techniques are widely used and yield reliable convergence in many cases. Since the optimization of neural networks is a high-dimensional, multi-extremal problem, global optimization methods would normally be applied in this case. On the basis of a perceptron-like unit (the building block of most neural network architectures), we analyze why local optimization techniques are so successful in the field of neural networks. The result is that a linear approximation of the neural network can be sufficient to evaluate the starting point for the local optimization procedure in the nonlinear regime. This result can help in developing faster and more robust algorithms for the optimization of neural network parameters.
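The start-point idea can be sketched for a single tanh unit (a minimal sketch, not the paper's derivation): since tanh is approximately linear near zero, a least-squares fit of the linearized unit gives initial weights, which local gradient descent then refines in the nonlinear regime. The data, learning rate, and clipping bound below are illustrative assumptions.

```python
import numpy as np

def linear_start(X, y):
    """Linearize tanh(X @ w) ~ X @ w by mapping targets through arctanh
    (clipped to stay finite) and solving the resulting least-squares
    problem; the solution serves as the starting point."""
    targets = np.arctanh(np.clip(y, -0.99, 0.99))
    w, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return w

def refine(X, y, w, lr=0.1, steps=200):
    """Plain local gradient descent on the squared error of tanh(X @ w)."""
    for _ in range(steps):
        out = np.tanh(X @ w)
        grad = X.T @ ((out - y) * (1.0 - out ** 2))
        w = w - lr * grad / len(y)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
w_true = np.array([0.8, -0.5, 0.3])
y = np.tanh(X @ w_true)          # noiseless targets from a known unit

w0 = linear_start(X, y)          # linear-approximation start point
w = refine(X, y, w0)             # local search from that start point
```

Because `w0` already lies near the basin of the solution, the purely local refinement converges reliably, which mirrors the abstract's explanation of why local optimization works so well in practice.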