Abstract
Quantum computing is here to stay. Companies are investing billions of dollars in it because of its potential benefits, with promising applications in almost every business sector. Although quantum computing is evolving rapidly, developing the tools, techniques, and frameworks needed to evolve current information systems towards quantum software systems remains a challenge. This research contributes to the evolution of current information systems towards hybrid information systems, which combine the classical and quantum computing paradigms. We propose a software modernization process that follows model-driven engineering principles, adapted to the quantum paradigm and based on modified versions of standards for the reverse engineering of classical and quantum software assets and for the design of the target system. In particular, this paper focuses on the restructuring transformation from KDM to UML models, where the KDM models have been generated from Q# code. The proposal has been validated through a case study involving 17 programs. The results show promising values regarding the complexity, expressiveness, and scalability of the generated UML models. The main implication of this research is that UML models can indeed support the evolution of current information systems towards hybrid information systems.
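To give a flavour of the kind of restructuring transformation the abstract describes, the sketch below maps a simplified KDM code element (as might be extracted from Q# source) to a simplified UML element. This is a minimal illustration only: the element kinds, class names, and stereotype labels are hypothetical and are not the mapping rules defined in the paper, which operate over the full KDM and UML metamodels.

```python
# Minimal sketch of a KDM-to-UML restructuring step
# (hypothetical element names and mapping rules).

from dataclasses import dataclass, field

@dataclass
class KdmCodeElement:          # simplified KDM "code" element
    kind: str                  # e.g. "CallableUnit" for a Q# operation
    name: str

@dataclass
class UmlElement:              # simplified UML model element
    element: str               # e.g. "Class" or "Operation"
    name: str
    stereotypes: list = field(default_factory=list)

def restructure(kdm: KdmCodeElement) -> UmlElement:
    """Map a KDM element extracted from Q# code to a UML element,
    tagging quantum-specific constructs with a stereotype."""
    if kdm.kind == "CallableUnit":
        return UmlElement("Operation", kdm.name, ["quantum operation"])
    if kdm.kind == "CompilationUnit":
        return UmlElement("Class", kdm.name, ["quantum program"])
    return UmlElement("Element", kdm.name)

print(restructure(KdmCodeElement("CallableUnit", "TeleportQubit")))
```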
Pub. online: 6 Nov 2023 | Type: Research Article | Open Access
Journal: Informatica
Volume 34, Issue 4 (2023), pp. 795–824
Abstract
Master data has proven to be one of the most powerful instruments for guaranteeing adequate levels of data quality. The main contribution of this paper is a data quality model that guides repeatable and homogeneous evaluations of the data quality level of master data repositories. The data quality model follows several international open standards: ISO/IEC 25012, ISO/IEC 25024, and ISO 8000-1000, enabling compliance certification. A case study applying the data quality model to an organizational master data repository has been carried out to demonstrate its applicability.
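To make the flavour of such an evaluation concrete, the sketch below computes a completeness-style measure over a toy master data repository. It is illustrative only: the field names, sample records, and the exact formula are assumptions for this example, not the measures defined in ISO/IEC 25024 or in the paper's data quality model.

```python
# Minimal sketch of a completeness-style quality measure over a
# master data repository (illustrative fields and records).

def completeness(records, required_fields):
    """Fraction of required field values that are present."""
    total = len(records) * len(required_fields)
    filled = sum(
        1
        for r in records
        for f in required_fields
        if r.get(f) not in (None, "")
    )
    return filled / total if total else 1.0

customers = [
    {"id": "C1", "name": "Acme", "tax_id": "X123"},
    {"id": "C2", "name": "",     "tax_id": None},
]
score = completeness(customers, ["id", "name", "tax_id"])
print(f"completeness = {score:.2f}")   # 0.67 for the sample above
```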
Pub. online: 2 Jun 2021 | Type: Research Article | Open Access
Journal: Informatica
Volume 32, Issue 3 (2021), pp. 619–660
Abstract
Code repositories contain a wealth of data that can be extracted, processed, and synthesized into valuable insights, enabling developers to improve maintenance, increase code quality, and understand software evolution, among other benefits. Considerable research has been conducted in this field in recent years. This paper presents a systematic mapping study to find, evaluate, and investigate the mechanisms, methods, and techniques used to analyse information from code repositories in order to understand software evolution. Through this mapping study, we have identified the main information used as input for the analysis of code repositories (commit data and source code), as well as the most common methods and techniques of analysis (empirical/experimental and automatic). We believe this research is useful for developers working on software projects who seek to improve maintenance and understand software evolution through the use and analysis of code repositories.
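As a small illustration of the most common input the study identifies (commit data), the sketch below extracts commit metadata from a local Git repository using `git log`. The repository path is a placeholder, and this is just one simple way to obtain such data, not a method prescribed by the mapping study.

```python
# Minimal sketch: extract commit metadata (hash, author, date,
# subject) from a local Git repository via `git log`.

import subprocess

def commit_history(repo_path):
    fmt = "%H|%an|%ad|%s"          # hash | author | date | subject
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--pretty=format:{fmt}"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        commit_hash, author, date, subject = line.split("|", 3)
        yield {"hash": commit_hash, "author": author,
               "date": date, "subject": subject}

# Placeholder path; point this at a real clone before running.
for c in commit_history("/path/to/repo"):
    print(c["hash"][:8], c["author"], c["subject"])
```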