Journal: Informatica
Volume 31, Issue 3 (2020), pp. 579–595
Abstract
One of the results of the evolution of business process management (BPM) is the development of information technology (IT), methodologies and software tools to manage all types of processes – from traditional, structured processes to unstructured processes, for which it is not possible to define a detailed flow as a sequence of tasks to be performed before implementation. The purpose of the article is to present the evolution of intelligent BPM systems (iBPMS) and dynamic case management/adaptive case management systems (DCMS/ACMS) and show that they converge into one class of systems, additionally absorbing new emerging technologies such as process mining, robotic process automation (RPA), or machine learning/artificial intelligence (ML/AI). The content of research reports on iBPMS and DCMS systems by the Gartner and Forrester consulting companies from the last 10 years was analysed. The nature of this study is descriptive and based solely on information from secondary data sources. It is an argumentative paper, and the analysis provides the arguments addressing the main research questions. The research results reveal that under business pressure, the evolution of both classes of systems (iBPMS and DCMS/ACMS) tends to cover the functionality of the same area of requirements by enabling the support of processes of different nature. This de facto means the creation of one class of systems, although for marketing reasons, some vendors will still offer separate products for some time to come. The article shows that the main driver of unified software system development is not the new possibilities offered by IT, but the requirements imposed on BPM by the increasingly stronger impact of knowledge management (KM) with regard to the way business processes are executed. Hence we anticipate the further evolution of BPM methodologies and supporting systems towards integration with KM and elements of knowledge management systems (KMS).
This article presents an original view on the features and development trends of software systems supporting BPM as a consequence of knowledge economy (KE) requirements in accordance with the concept of dynamic BPM.
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 523–538
Abstract
This study aims to evaluate patients with limited changes in the coronary arteries detected by coronary angiography, to follow the dynamics of these changes over two years, to identify the relevant diagnostic criteria, and to assess the efficacy of the applied treatment by using speckle tracking echocardiography. Peak radial and circumferential strain and strain rate (SR: systolic, early, and late diastolic) were measured from the short-axis view; peak longitudinal strain and SR were measured from the apical four-, two-, and three-chamber views. Global and regional radial, longitudinal (GLS), and circumferential strains were calculated as averages of the measurements. All patients $(n=146)$ were assigned to normal (control) and CAD groups according to cardiac angiography results; 128 of them were re-evaluated after two years. Among the angiography findings, LAD stenosis predominated (85.83%), with fewer instances of RCA (52.5%) or LCX (40.83%) stenosis observed. Most (about 80%) of the patients had one- or two-vessel disease and only 20% had three-vessel disease. Analysis of STE data in the groups during the two-year study period showed statistically significant differences associated with particular coronary arteries. In the control group: RCA – myocardial circumferential strain $(p=0.037)$; LAD – no changes; LCX – early $(p=0.013)$ and late diastolic longitudinal $(p=0.033)$ strains. In the CAD group: RCA – diastolic circumferential strain rate $(p=0.007)$; LAD – myocardial longitudinal strain $(p=0.006)$, systolic longitudinal $(p=0.038)$ and circumferential strain $(p=0.012)$ rates, early diastolic circumferential $(p=0.008)$ and late diastolic longitudinal $(p=0.037)$ strain rates; LCX – myocardial longitudinal strain $(p=0.049)$.
Between the groups, we detected significant differences in the following strain rates: RCA – systolic $(p=0.037)$, early diastolic $(p=0.019)$, and late diastolic $(p=0.024)$ circumferential strain rates; LAD – no changes; LCX – early diastolic longitudinal strain $(p=0.004)$. According to GLS, the clinical condition of our patients improved over the two years in both the control and CAD groups. We hold the opinion that microvascular angina (MVA) may be responsible for this improvement, because the main diagnostic criteria were met and the common treatment with ACE inhibitors, statins, β-blockers, antithrombotics, and nitrates is typical of, and effective for, MVA.
Journal: Informatica
Volume 31, Issue 4 (2020), pp. 751–768
Abstract
In cryptography, key establishment protocols are often the starting point paving the way towards secure execution of different tasks. Namely, the parties seeking to achieve some cryptographic task often start by establishing a common high-entropy secret that will eventually be used to secure their communication. In this paper, we put forward a security model for group key establishment ($\mathsf{GAKE}$) with an adversary that may execute efficient quantum algorithms, yet only once the execution of the protocol has concluded. This captures a situation in which keys are to be established in the present, while security guarantees must still be provided in the future, when quantum resources may be accessible to a potential adversary.
Further, we propose a protocol design that can be proven secure in this model. Our proposal uses password authentication and builds upon efficient and reasonably well understood primitives: a message authentication code and a post-quantum key encapsulation mechanism. The hybrid structure dodges potential efficiency downsides, like large signatures, of some “true” post-quantum authentication techniques, making our protocol a potentially interesting fit for current applications with long-term security needs.
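The hybrid "KEM plus password-derived MAC" structure can be sketched in Python. The KEM below is a toy XOR-masking stand-in for a real post-quantum KEM such as Kyber (it is not secure), and deriving the MAC key from the password with PBKDF2 is an illustrative assumption, not the paper's exact construction:

```python
import hashlib, hmac, secrets

def stub_kem_keygen():
    # Toy stand-in for a post-quantum KEM key pair -- NOT secure.
    sk = secrets.token_bytes(32)
    return hashlib.sha256(b"pk" + sk).digest(), sk

def stub_kem_encaps(pk):
    # "Encrypt" a fresh shared secret by XOR-masking it with H(pk).
    ss = secrets.token_bytes(32)
    mask = hashlib.sha256(b"mask" + pk).digest()
    return bytes(a ^ b for a, b in zip(ss, mask)), ss

def stub_kem_decaps(sk, ct):
    pk = hashlib.sha256(b"pk" + sk).digest()
    mask = hashlib.sha256(b"mask" + pk).digest()
    return bytes(a ^ b for a, b in zip(ct, mask))

# One initiator/responder exchange.
pk, sk = stub_kem_keygen()             # responder's KEM key pair
ct, ss_init = stub_kem_encaps(pk)      # initiator encapsulates
ss_resp = stub_kem_decaps(sk, ct)      # responder decapsulates

# Password authentication: MAC the transcript under a password-derived key,
# avoiding the large signatures of "true" post-quantum authentication.
mac_key = hashlib.pbkdf2_hmac("sha256", b"shared password", b"session-001", 100_000)
tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
tag_ok = hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest())

# Both sides derive the session key from the KEM secret and the transcript.
session_key = hashlib.sha256(b"key" + ss_init + ct).digest()
```

A quantum adversary arriving after the run would need to break the KEM retroactively; the MAC only needs to be unforgeable during the protocol execution.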
Journal: Informatica
Volume 31, Issue 4 (2020), pp. 821–839
Abstract
Ligand-based virtual screening (LBVS) methods are widely used in drug discovery as filters for subsequent in-vitro and in-vivo characterization. Since the databases processed are enormously large, this pre-selection process requires fast and precise methodologies. In this work, the similarity between compounds is measured in terms of electrostatic potential. To do so, we propose a new and alternative methodology, called LBVS-Electrostatic. According to the obtained results, we conclude that many of the compounds proposed by our novel approach could not be discovered with the classical one.
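The abstract does not give the similarity formula; one standard way to compare electrostatic potentials sampled on a common grid is the Carbo similarity index, sketched here as a hypothetical illustration rather than the paper's method:

```python
import math

def carbo_similarity(pa, pb):
    """Carbo index between two electrostatic potentials sampled on the
    same grid points: <pa, pb> / (||pa|| * ||pb||), in [-1, 1]."""
    num = sum(a * b for a, b in zip(pa, pb))
    den = math.sqrt(sum(a * a for a in pa) * sum(b * b for b in pb))
    return num / den if den else 0.0
```

Compounds whose potentials differ only by a positive scale factor score 1.0, so the index captures the shape of the field rather than its magnitude.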
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 621–658
Abstract
As tourism and the mobile internet develop, car sharing is becoming more and more popular. How to select an appropriate car sharing platform is an important issue for tourists. The car sharing platform selection can be regarded as a kind of multi-attribute group decision making (MAGDM) problem. The probabilistic linguistic term set (PLTS) is a powerful tool to express tourists’ evaluations in the car sharing platform selection. This paper develops a probabilistic linguistic group decision making method for selecting a suitable car sharing platform. First, two aggregation operators of PLTSs are proposed. Subsequently, a fuzzy entropy and a hesitancy entropy of a PLTS are developed to measure the fuzziness and hesitancy of a PLTS, respectively. Combining the fuzzy entropy and hesitancy entropy, a total entropy of a PLTS is generated. Furthermore, a cross entropy between PLTSs is proposed as well. Using the total entropy and cross entropy, the decision makers’ (DMs’) weights and attribute weights are determined, respectively. By defining preference functions with PLTSs, an improved PL-PROMETHEE approach is developed to rank alternatives. Thereby, a novel method is proposed for solving MAGDM with PLTSs. A car sharing platform selection is examined at length to show the application and superiority of the proposed method.
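The paper's exact entropy definitions are not reproduced in the abstract; the sketch below shows one plausible fuzzy-entropy shape for a PLTS over a term set $\{s_0,\dots,s_g\}$, where a term contributes most when it sits mid-scale. This is an illustrative assumption, not the authors' formula:

```python
def fuzzy_entropy(plts, g):
    """plts: list of (alpha, prob) pairs, i.e. linguistic terms s_alpha with
    probabilities. A term is fuzziest when alpha/g = 0.5 (mid-scale) and
    crisp at the scale ends; probabilities weight the contributions."""
    return sum(p * (1 - abs(2 * alpha / g - 1)) for alpha, p in plts)
```

For example, with a 5-term scale ($g=4$), all probability mass on the middle term $s_2$ gives entropy 1, while mass split between the extremes $s_0$ and $s_4$ gives entropy 0.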
Pub. online: 22 Jun 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 32, Issue 1 (2021), pp. 69–84
Abstract
Clinics and hospitals have already adopted more technological resources to provide a faster and more precise diagnostic for patients, health care providers, and institutes of medicine. Security issues become increasingly important as medical services use communication resources such as Wireless-Fidelity (Wi-Fi), the third generation of mobile telecommunications technology (3G), and other mobile devices to connect medical systems from anywhere. Furthermore, cloud-based medical systems allow users to access archived medical images from anywhere. In order to protect medical images, lossless data hiding methods are efficient and easy techniques. In this paper, we present a two-tier data hiding method for medical images based on histogram shifting of prediction errors. Combining median histogram shifting with prediction-error schemes as the two hiding tiers achieves high capacity and PSNR in 16-bit medical images.
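The core idea of histogram shifting on prediction errors can be sketched for a single tier with left-neighbour prediction and the peak bin at 0; the paper's two-tier median/prediction scheme for 16-bit images is more elaborate. The sketch assumes the payload length equals the number of peak-bin errors:

```python
def embed(pixels, bits, peak=0):
    """Shift the prediction-error histogram right of `peak` by 1 and embed
    one bit in each error equal to `peak` (lossless / reversible)."""
    out, k = [pixels[0]], 0             # first pixel is the predictor seed
    for i in range(1, len(pixels)):
        pred = pixels[i - 1]            # original neighbour; decoder recovers it
        e = pixels[i] - pred
        if e > peak:
            e += 1                      # shift bins right of the peak
        elif e == peak and k < len(bits):
            e += bits[k]; k += 1        # embed one payload bit at the peak
        out.append(pred + e)
    return out

def extract(stego, peak=0):
    """Recover the payload bits and the original pixels exactly."""
    rec, bits = [stego[0]], []
    for i in range(1, len(stego)):
        e = stego[i] - rec[-1]          # rec[-1] is the recovered neighbour
        if e == peak:
            bits.append(0)
        elif e == peak + 1:
            bits.append(1); e -= 1
        elif e > peak + 1:
            e -= 1                      # undo the shift
        rec.append(rec[-1] + e)
    return rec, bits
```

Because smooth medical images produce many zero prediction errors, the peak bin is large, which is what gives these schemes their high capacity at high PSNR.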
Pub. online: 17 Jun 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 561–578
Abstract
This paper presents a non-iterative deep learning approach to compressive sensing (CS) image reconstruction using a convolutional autoencoder and a residual learning network. An efficient measurement design is proposed in order to enable training of the compressive sensing models on normalized and mean-centred measurements, along with a practical network initialization method based on principal component analysis (PCA). Finally, perceptual residual learning is proposed in order to obtain semantically informative image reconstructions along with high pixel-wise reconstruction accuracy at low measurement rates.
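A PCA-based initialization of the measurement operator can be sketched as follows; the training data, patch size, and measurement rate here are toy assumptions, not the paper's configuration:

```python
import numpy as np

def pca_measurement_matrix(patches, m):
    """patches: (N, d) flattened training patches. Returns an (m, d)
    measurement matrix whose rows are the top-m principal directions,
    plus the training mean used for mean-centring."""
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    return Vt[:m], mean

rng = np.random.default_rng(0)
patches = rng.standard_normal((256, 64))   # toy stand-in for 8x8 image patches
Phi, mu = pca_measurement_matrix(patches, 16)

# Measurements are taken on mean-centred data, so they are zero-mean and
# decorrelated -- a convenient normalization for training the decoder network.
y = (patches - mu) @ Phi.T
```

The orthonormal PCA rows also give the decoder a sensible linear starting point ($\Phi^{\top}y$ is the least-squares reconstruction), which the residual network then refines.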
Pub. online: 17 Jun 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 3 (2020), pp. 499–522
Abstract
A $(k,n)$-threshold secret image sharing scheme is any method of distributing a secret image amongst n participants in such a way that any k participants are able to use their shares collectively to reconstruct the secret image, while fewer than k shares do not reveal any information about the secret image. In this work, we propose a lossless linear algebraic $(k,n)$-threshold secret image sharing scheme. The scheme associates a vector ${\mathbf{v}_{i}}$ to the ith participant in the vector space ${\mathbb{F}_{{2^{\alpha }}}^{k}}$, where the vectors ${\mathbf{v}_{i}}$ satisfy some admissibility conditions. The ith share is simply a linear combination of the vectors ${\mathbf{v}_{i}}$ with coefficients from the secret image. Simulation results demonstrate the effectiveness and robustness of the proposed scheme against standard statistical attacks on secret image sharing schemes. Furthermore, the proposed scheme has a high level of security, error-resilient capability, and the size of each share is $1/k$ of the size of the secret image. In comparison with existing work, the scheme is shown to be very competitive.
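A minimal sketch of the linear-algebraic idea, substituting the prime field $\mathbb{F}_{257}$ for the paper's ${\mathbb{F}_{{2^{\alpha }}}}$ and choosing Vandermonde vectors $\mathbf{v}_{i}=(1,x_{i},\dots ,x_{i}^{k-1})$, which satisfy the admissibility requirement that any k of them are linearly independent:

```python
P = 257  # prime field as a stand-in for GF(2^alpha)

def make_shares(secret, n, k):
    """secret: k field elements (e.g. image values). Share i is the inner
    product <secret, v_i> with v_i = (1, x_i, ..., x_i^(k-1))."""
    return [(x, sum(s * pow(x, j, P) for j, s in enumerate(secret)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares, k):
    """Recover the secret from any k shares by solving the k x k
    Vandermonde system with Gauss-Jordan elimination mod P."""
    xs, ys = zip(*shares[:k])
    A = [[pow(x, j, P) for j in range(k)] + [y] for x, y in zip(xs, ys)]
    for c in range(k):
        piv = next(r for r in range(c, k) if A[r][c])
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], P - 2, P)            # Fermat inverse
        A[c] = [a * inv % P for a in A[c]]
        for r in range(k):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [(a - f * b) % P for a, b in zip(A[r], A[c])]
    return [A[r][k] for r in range(k)]
```

Each share is a single field element per k secret elements, matching the $1/k$ share-size property; fewer than k shares leave the Vandermonde system underdetermined.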
Pub. online: 8 Jun 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 31, Issue 4 (2020), pp. 857–880
Abstract
Normalization and aggregation are two of the most important issues in multi-criteria analysis. Although various multi-criteria decision-making (MCDM) methods have been developed over the past several decades, few of them integrate multiple normalization techniques and mixed aggregation approaches at the same time to reduce the deviations of evaluation values and enhance the reliability of the final decision result. This study is dedicated to introducing a new MCDM method called Mixed Aggregation by COmprehensive Normalization Technique (MACONT) to tackle complicated MCDM problems. This method introduces a comprehensive normalization technique based on criterion types, and then uses two mixed aggregation operators to aggregate the distance values between each alternative and the reference alternative on different criteria from the perspectives of compensation and non-compensation. An illustrative example is given to show the applicability of the proposed method, and the advantages of the proposed method are highlighted through sensitivity analyses and comparative analyses.
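The abstract does not reproduce MACONT's formulas, but the general pattern of combining normalization techniques and mixing compensatory with non-compensatory aggregation can be sketched as follows. The specific choices here (averaging min-max and vector normalization, weighted sum vs. weighted max of distances to the per-criterion ideal, balance parameter `lam`) are illustrative assumptions:

```python
def normalize(col, benefit=True):
    """Average of min-max and vector normalization for one criterion;
    cost criteria are inverted so that larger is always better."""
    lo, hi = min(col), max(col)
    lin = [(x - lo) / (hi - lo) if hi > lo else 1.0 for x in col]
    norm2 = sum(x * x for x in col) ** 0.5 or 1.0
    vec = [x / norm2 for x in col]
    if not benefit:
        lin, vec = [1 - v for v in lin], [1 - v for v in vec]
    return [(a + b) / 2 for a, b in zip(lin, vec)]

def macont_like_scores(matrix, weights, benefit, lam=0.5):
    """matrix: alternatives x criteria. Mix a compensatory (weighted-sum)
    and a non-compensatory (weighted-max) aggregation of each alternative's
    distances to the per-criterion ideal; higher score is better."""
    norm_cols = [normalize(list(col), b)
                 for col, b in zip(zip(*matrix), benefit)]
    ref = [max(col) for col in norm_cols]           # reference alternative
    scores = []
    for row in zip(*norm_cols):
        d = [r - v for r, v in zip(ref, row)]       # distances to the ideal
        comp = sum(w * di for w, di in zip(weights, d))
        noncomp = max(w * di for w, di in zip(weights, d))
        scores.append(-(lam * comp + (1 - lam) * noncomp))
    return scores
```

The non-compensatory term prevents a severe weakness on one criterion from being fully offset by strengths elsewhere, which is the point of mixing the two aggregation perspectives.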
Pub. online: 8 Jun 2020 | Type: Research Article | Open Access
Journal: Informatica
Volume 32, Issue 1 (2021), pp. 163–193
Abstract
To solve the problem of choosing the appropriate cloud computing vendors in small and medium-sized enterprises (SMEs), this paper formulates it as a group decision making (GDM) problem. To facilitate the judgment, this paper uses preference relations as the decision making technology. Considering the situation where uncertain positive and negative judgments exist simultaneously, interval-valued intuitionistic fuzzy preference relations (IVIFPRs) are employed to express the decision makers’ (DMs’) judgments. In view of the multiplicative consistency and consensus analysis, a new GDM algorithm with IVIFPRs is offered. To accomplish this goal, a new multiplicative consistency is first defined, which can avoid the limitations of the previous ones. Then, a programming model is built to check the consistency of IVIFPRs. To deal with incomplete IVIFPRs, two programming models are constructed to determine the missing values with the goal of maximizing the level of multiplicative consistency and minimizing the total uncertainty. To achieve the minimum adjustment of original preference information, a programming model is established to repair inconsistent IVIFPRs. In addition, programming models for determining the DMs’ weights and improving the consensus degree are offered. Finally, a practical decision making example is given to illustrate the effectiveness of the proposed method and to compare it with previous methods.
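The paper's new multiplicative consistency definition for IVIFPRs is not spelled out in the abstract. As background, the classical Tanino multiplicative consistency that such definitions generalize can be checked for an ordinary fuzzy preference relation like this:

```python
def is_mult_consistent(R, tol=1e-9):
    """Tanino multiplicative consistency of a fuzzy preference relation R
    (R[i][j] in [0,1], R[i][j] + R[j][i] = 1):
    r_ij * r_jk * r_ki == r_ik * r_kj * r_ji for every triple (i, j, k)."""
    n = len(R)
    return all(
        abs(R[i][j] * R[j][k] * R[k][i] - R[i][k] * R[k][j] * R[j][i]) <= tol
        for i in range(n) for j in range(n) for k in range(n)
    )
```

A relation built from an underlying priority vector, $r_{ij}=w_i/(w_i+w_j)$, is consistent by construction, while perturbing a single entry breaks the triple condition; this is the kind of check the paper lifts to interval-valued intuitionistic judgments via programming models.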