Pub. online: 6 May 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 435–458
Abstract
In data mining research, outliers usually represent extreme values that deviate from the other observations in the data. A significant issue with existing outlier detection methods is that they consider only the object itself, without taking its neighbouring objects into account when extracting location features. In this paper, we propose an innovative approach to this issue. First, we propose the notions of centrality and centre-proximity for determining the degree of outlierness while considering the distribution of all objects. We also propose a novel graph-based algorithm for outlier detection based on these notions. The algorithm solves the problems of existing methods, i.e. the local density, micro-cluster, and fringe-object problems. We performed extensive experiments to confirm the effectiveness and efficiency of the proposed method. The experimental results show that the proposed method uncovers outliers successfully and outperforms previous outlier detection methods.
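The paper's centrality and centre-proximity measures are defined over the distribution of all objects; as an illustrative stand-in only, the sketch below scores each point by its mean distance to its k nearest neighbours, a common neighbourhood-based outlier heuristic. All names, parameters, and data here are illustrative, not the paper's.

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    """Score each point by the mean distance to its k nearest neighbours.

    Points whose neighbourhoods lie far away receive high scores and are
    flagged as outliers (a stand-in for the paper's centrality and
    centre-proximity notions, which use the full object distribution).
    """
    X = np.asarray(X, dtype=float)
    # Pairwise Euclidean distance matrix.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    knn = np.sort(d, axis=1)[:, :k]      # k smallest distances per point
    return knn.mean(axis=1)

# A tight cluster plus one far-away point: the last point scores highest.
pts = [[0, 0], [0, 1], [1, 0], [1, 1], [10, 10]]
scores = knn_outlier_scores(pts, k=2)
print(int(np.argmax(scores)))  # → 4
```

A graph-based variant would build an explicit k-nearest-neighbour graph and propagate scores along its edges; the heuristic above keeps only the distance information.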
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 459–479
Abstract
Since Morris and Thompson wrote the first paper on password security in 1979, strict password policies have been enforced to make sure users follow the rules on passwords. Many such policies require users to select and use a system-generated password. The objective of this paper is to analyse the effectiveness of strict password management policies with respect to how users remember system-generated passwords of different textual types – plaintext strings, passphrases, and hybrid graphical-textual PsychoPass passwords. In an experiment, participants were assigned a random string, a passphrase, and a PsychoPass password and had to memorize them. Surprisingly, no one remembered either the random string or the passphrase, and only 10% of the participants remembered their PsychoPass password. Policies where administrators let systems assign passwords to users are therefore not appropriate. Although PsychoPass passwords are easier to remember, the recall rate of any system-assigned password is below the acceptable level. The findings of this study show that system-assigned strong passwords are inappropriate and place an unacceptable memory burden on users.
Pub. online: 6 May 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 481–497
Abstract
Data hiding is an important multimedia security technique that has been applied to many domains, for example, relational databases. Existing data hiding techniques for relational databases cannot restore the raw data after hiding. The purpose of this paper is to propose the first reversible hiding technique for relational databases. In the hiding phase, it embeds confidential messages into a relational database using an LSB (least-significant-bit) matching method for relational databases. In the extraction and restoration phases, it recovers the confidential messages from the LSBs using the same matching method, and the averaging method is then used to restore the raw data. According to the experiments, the proposed technique meets the data hiding requirements: it not only enables recovery of the raw data but also maintains a high hiding capacity. The complexity analysis of our algorithms demonstrates their efficiency.
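As a toy illustration of LSB matching on numeric attribute values (not the paper's full reversible scheme; the averaging-based restoration step is omitted, and all names and data below are illustrative), one message bit can be embedded per value as follows:

```python
import random

def lsb_match_embed(values, bits, seed=0):
    """Embed one message bit into the LSB of each numeric attribute value.

    LSB matching: if the current LSB already equals the message bit, the
    value is left untouched; otherwise it is randomly incremented or
    decremented by 1, which flips the LSB while keeping the distortion
    at most 1 per value.
    """
    rng = random.Random(seed)
    out = []
    for v, b in zip(values, bits):
        if v & 1 == b:
            out.append(v)
        elif v == 0:                 # cannot decrement below zero
            out.append(v + 1)
        else:
            out.append(v + rng.choice((-1, 1)))
    return out

def lsb_extract(values):
    """Recover the embedded bits from the LSBs."""
    return [v & 1 for v in values]

raw   = [120, 57, 200, 33, 8]    # numeric attribute values of one tuple
msg   = [1, 0, 0, 1, 1]          # confidential message bits
stego = lsb_match_embed(raw, msg)
assert lsb_extract(stego) == msg
assert all(abs(a - b) <= 1 for a, b in zip(raw, stego))
```

Reversibility in the paper comes from the additional averaging-based restoration step, which this sketch does not reproduce.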
Pub. online: 17 Jun 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 499–522
Abstract
A $(k,n)$-threshold secret image sharing scheme is any method of distributing a secret image amongst n participants in such a way that any k participants are able to use their shares collectively to reconstruct the secret image, while fewer than k shares do not reveal any information about the secret image. In this work, we propose a lossless linear algebraic $(k,n)$-threshold secret image sharing scheme. The scheme associates a vector ${\mathbf{v}_{i}}$ in the vector space ${\mathbb{F}_{{2^{\alpha }}}^{k}}$ to the ith participant, where the vectors ${\mathbf{v}_{i}}$ satisfy some admissibility conditions. The ith share is simply a linear combination of the components of ${\mathbf{v}_{i}}$, with coefficients taken from the secret image. Simulation results demonstrate the effectiveness and robustness of the proposed scheme against standard statistical attacks on secret image sharing schemes. Furthermore, the proposed scheme has a high level of security and error-resilient capability, and the size of each share is $1/k$ of the size of the secret image. In comparison with existing work, the scheme is shown to be very competitive.
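A minimal sketch of the linear-algebraic idea, simplified to the prime field GF(257) instead of the paper's ${\mathbb{F}_{{2^{\alpha }}}}$, and using Vandermonde vectors as one possible choice of admissible ${\mathbf{v}_{i}}$ (any k of them are linearly independent). All concrete choices here are illustrative assumptions, not the paper's construction:

```python
P = 257  # prime field stand-in for the paper's GF(2^alpha); holds 8-bit pixels

def make_shares(secret, xs, k):
    """Share a block of k secret values among len(xs) participants.

    Participant i holds the vector v_i = (1, x_i, ..., x_i^{k-1});
    the share is the inner product of v_i with the secret block mod P.
    """
    assert len(secret) == k
    return [sum(s * pow(x, j, P) for j, s in enumerate(secret)) % P
            for x in xs]

def reconstruct(shares, xs, k):
    """Solve the k x k Vandermonde system by Gauss-Jordan elimination mod P."""
    A = [[pow(x, j, P) for j in range(k)] + [y % P]
         for x, y in zip(xs[:k], shares[:k])]
    for c in range(k):
        piv = next(r for r in range(c, k) if A[r][c] % P)   # nonzero pivot
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], -1, P)                           # modular inverse
        A[c] = [a * inv % P for a in A[c]]
        for r in range(k):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [(a - f * b) % P for a, b in zip(A[r], A[c])]
    return [A[r][k] for r in range(k)]

block  = [17, 200, 99]          # k = 3 secret pixel values
xs     = [1, 2, 3, 4, 5]        # n = 5 distinct evaluation points
shares = make_shares(block, xs, 3)
assert reconstruct(shares[:3], xs[:3], 3) == block   # any 3 shares suffice
assert reconstruct(shares[2:], xs[2:], 3) == block
```

Each participant stores one field element per k secret values, matching the $1/k$ share-size property; the lossless-reconstruction claim holds here because all pixel values fit below the field size.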
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 523–538
Abstract
This study aims to evaluate patients with a limited extent of coronary artery changes detected by coronary angiography, to follow the dynamics of these changes over two years, to identify the relevant diagnostic criteria, and to assess the efficacy of the applied treatment by using speckle tracking echocardiography (STE). Peak radial and circumferential strain and strain rate (SR; systolic, early, and late diastolic) were measured from the short-axis view; peak longitudinal strain and SR were measured from the apical four-, two-, and three-chamber views. Radial, longitudinal (GLS), and circumferential global and regional strains were calculated as averages of the measurements. All patients $(n=146)$ were assigned to normal (control) and CAD groups according to cardiac angiography results; 128 of them were re-evaluated after two years. According to the angiography findings, LAD stenosis (85.83%) predominated, while RCA (52.5%) and LCX (40.83%) stenoses were observed less frequently. Most (about 80%) of the patients had one- or two-vessel disease and only 20% had systemic three-vessel disease. Analysis of STE data in the groups over the two-year study period showed statistically significant differences associated with particular coronary arteries. In the control group: RCA – myocardial circumferential strain $(p=0.037)$; LAD – no changes; LCX – early diastolic $(p=0.013)$ and late diastolic longitudinal $(p=0.033)$ strains. In the CAD group: RCA – diastolic circumferential strain rate $(p=0.007)$; LAD – myocardial longitudinal strain $(p=0.006)$, systolic longitudinal $(p=0.038)$ and circumferential strain $(p=0.012)$ rates, early diastolic circumferential $(p=0.008)$ and late diastolic longitudinal $(p=0.037)$ strain rates; LCX – myocardial longitudinal strain $(p=0.049)$.
Between the groups, we detected significant changes in the following circumferential strain rates: RCA – systolic $(p=0.037)$, early diastolic $(p=0.019)$, and late diastolic $(p=0.024)$; LAD – no changes; LCX – early diastolic longitudinal strain $(p=0.004)$. The clinical condition of our patients improved over the two years in both the control and CAD groups, according to GLS. We believe that microvascular angina (MVA) may be responsible for this improvement, because the main diagnostic criteria were met and the common treatment with ACE inhibitors, statins, β-blockers, antithrombotic agents, and nitrates is typical of and effective for MVA treatment.
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 539–560
Abstract
In this paper, we present an effective algorithm for solving the Poisson–Gaussian total variation model. The existence and uniqueness of the solution of the mixed Poisson–Gaussian model are proved. Owing to the strict convexity of the model, the split-Bregman method is employed to solve the minimization problem. Experimental results show the effectiveness of the proposed method for mixed Poisson–Gaussian noise removal. A comparison with other existing and well-known methods is provided as well.
Pub. online: 17 Jun 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 561–578
Abstract
This paper presents a non-iterative deep learning approach to compressive sensing (CS) image reconstruction using a convolutional autoencoder and a residual learning network. An efficient measurement design is proposed in order to enable training of the compressive sensing models on normalized and mean-centred measurements, along with a practical network initialization method based on principal component analysis (PCA). Finally, perceptual residual learning is proposed in order to obtain semantically informative image reconstructions along with high pixel-wise reconstruction accuracy at low measurement rates.
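A minimal sketch of PCA-based initialization for a compressive-sensing measurement matrix, assuming flattened training patches and using the top principal directions as measurement rows. The dimensions and data here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def pca_measurement_matrix(patches, m):
    """Build an m-row measurement matrix from the top-m principal
    components of a set of training patches (flattened to rows).

    Measurements of a new patch are then its projections onto the
    directions of highest variance, which is one way to initialize a
    learned compressive-sensing encoder.
    """
    X = np.asarray(patches, dtype=float)
    Xc = X - X.mean(axis=0)                # mean-centre the training data
    # Right singular vectors of the centred data = principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:m]                          # shape (m, patch_dim)

rng = np.random.default_rng(0)
train = rng.standard_normal((500, 64))     # 500 flattened 8x8 patches
Phi = pca_measurement_matrix(train, m=10)  # 10 measurements per patch
y = Phi @ train[0]                         # compressive measurements
assert Phi.shape == (10, 64)
# PCA rows are orthonormal: Phi @ Phi.T is the identity.
assert np.allclose(Phi @ Phi.T, np.eye(10), atol=1e-8)
```

Mean-centring the patches before projection matches the normalized, mean-centred measurement design the abstract describes.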
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 579–595
Abstract
One of the results of the evolution of business process management (BPM) is the development of information technology (IT), methodologies, and software tools to manage all types of processes – from traditional, structured processes to unstructured processes, for which it is not possible to define a detailed flow as a sequence of tasks before implementation. The purpose of this article is to present the evolution of intelligent BPM systems (iBPMS) and dynamic case management/adaptive case management systems (DCMS/ACMS) and to show that they are converging into one class of systems, additionally absorbing new emerging technologies such as process mining, robotic process automation (RPA), and machine learning/artificial intelligence (ML/AI). The content of research reports on iBPMS and DCMS systems published by the consulting companies Gartner and Forrester over the last 10 years was analysed. The study is descriptive in nature and based solely on information from secondary data sources. It is an argumentative paper, and the analysis provides the arguments that address the main research questions. The research results reveal that, under business pressure, both classes of systems (iBPMS and DCMS/ACMS) are evolving to cover the same area of requirements by supporting processes of different natures. This de facto means the emergence of a single class of systems, although for marketing reasons some vendors will still offer separate products for some time to come. The article shows that the main driver of unified software system development is not the new possibilities offered by IT, but the requirements imposed on BPM by the increasingly strong impact of knowledge management (KM) on the way business processes are executed. Hence, we anticipate the further evolution of BPM methodologies and supporting systems towards integration with KM and elements of knowledge management systems (KMS).
This article presents an original view on the features and development trends of software systems supporting BPM as a consequence of knowledge economy (KE) requirements in accordance with the concept of dynamic BPM.
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 597–620
Abstract
Recently, side-channel attacks have become a threat to traditional cryptographic schemes. In traditional cryptography, private/secret keys are assumed to be completely hidden from adversaries. However, through side-channel attacks, an adversary may extract partial content of these private/secret keys. Leakage-resilient cryptography is a countermeasure against such attacks. The identity-based public-key system (ID-PKS) is an attractive public-key setting: it not only discards the certificate requirement but also removes the need for a public-key infrastructure. For solving the user revocation problem in ID-PKS settings, the revocable ID-PKS (RID-PKS) setting has attracted significant attention, and numerous cryptographic schemes based on RID-PKS settings have been proposed. However, no leakage-resilient signature or encryption scheme under RID-PKS settings has been proposed. In this article, we present the first leakage-resilient revocable ID-based signature (LR-RIBS) scheme with a cloud revocation authority (CRA) under the continual leakage model. A new adversary model for LR-RIBS schemes with a CRA is also defined. Under this new adversary model, a security analysis demonstrates that our LR-RIBS scheme with a CRA is provably secure in the generic bilinear group (GBG) model. Finally, a performance analysis demonstrates that our scheme is suitable for mobile devices.
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 621–658
Abstract
As tourism and the mobile internet develop, car sharing is becoming more and more popular, and how to select an appropriate car sharing platform is an important issue for tourists. Car sharing platform selection can be regarded as a kind of multi-attribute group decision making (MAGDM) problem. The probabilistic linguistic term set (PLTS) is a powerful tool for expressing tourists' evaluations in car sharing platform selection. This paper develops a probabilistic linguistic group decision making method for selecting a suitable car sharing platform. First, two aggregation operators for PLTSs are proposed. Subsequently, a fuzzy entropy and a hesitancy entropy of a PLTS are developed to measure the fuzziness and hesitancy of a PLTS, respectively. Combining the fuzzy entropy and hesitancy entropy, a total entropy of a PLTS is generated, and a cross entropy between PLTSs is proposed as well. Using the total entropy and cross entropy, the decision makers' (DMs') weights and the attribute weights are determined, respectively. By defining preference functions with PLTSs, an improved PL-PROMETHEE approach is developed to rank the alternatives. Thereby, a novel method is proposed for solving MAGDM problems with PLTSs. A car sharing platform selection case is examined at length to show the applicability and superiority of the proposed method.
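The paper's PL-PROMETHEE operates on probabilistic linguistic evaluations; as a much-simplified stand-in, the sketch below ranks alternatives with classical PROMETHEE II on crisp scores. The scores, weights, and the usual-criterion preference function are all illustrative assumptions, not the paper's data or its improved preference functions:

```python
def promethee_ii(scores, weights):
    """Rank alternatives with classical PROMETHEE II.

    scores[a][j] is the crisp evaluation of alternative a on criterion j
    (a stand-in for the paper's probabilistic linguistic evaluations).
    The usual criterion P(d) = 1 if d > 0 else 0 is the preference function.
    """
    n = len(scores)

    def pi(a, b):
        # Aggregated (weighted) preference of alternative a over b.
        return sum(w for sa, sb, w in zip(scores[a], scores[b], weights)
                   if sa > sb)

    # Net outranking flow: positive flow minus negative flow.
    phi = [sum(pi(a, b) - pi(b, a) for b in range(n) if b != a) / (n - 1)
           for a in range(n)]
    return sorted(range(n), key=lambda a: -phi[a])

# Three hypothetical car sharing platforms scored on price, coverage, service.
scores  = [[0.7, 0.6, 0.8],   # platform 0
           [0.9, 0.5, 0.6],   # platform 1
           [0.4, 0.9, 0.7]]   # platform 2
weights = [0.5, 0.3, 0.2]
ranking = promethee_ii(scores, weights)
print(ranking)  # → [0, 1, 2]
```

The paper's PL-version additionally derives the weights themselves from the total entropy and cross entropy of the PLTSs instead of fixing them in advance.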