1 Introduction
Fuzzy sets have been widely used in different scientific areas. In their study, Kahraman et al. (2016) found many application areas in theoretical and practical studies, such as engineering, arts, humanities, computer science, health science, life sciences, and physical sciences. They present a comprehensive literature review of fuzzy set theory applications covering 50 years since Zadeh proposed the theory in 1965. The authors did not analyse any complexity issues of fuzzy set theory application and realization. However, they highlighted the need for a standard notation of fuzzy set theory: authors frequently use different notations for the same concepts of the theory in their publications. A standard notation would improve the theory's value and would be a significant step towards the level of standardization already achieved in classical logic. Another problem defined by Kahraman et al. (2016) is the segregation between classical logic publications and fuzzy logic publications. Moreover, the literature contains many different fuzzy models and approaches without sufficient discussion of their correctness or deficiencies, which may create a doubtful view of the theory (Kahraman et al., 2016). One of the main reasons for these problems is the complexity of fuzzy set theory and its application in different areas. Therefore, there is a need for a more comprehensive study of the complexity issues of fuzzy inference systems development and their systematization in this context.
The literature offers plenty of approaches to develop FIS automatically (Ruiz-Garcia et al., 2019; Lee, 2019; Mirko et al., 2019; Askari, 2017). The authors of these approaches point out various limitations, issues, or drawbacks of developing FIS that stem from its complexity, such as a sparse rule base (RB) (Antonelli et al., 2010), high-dimensional data (Alcalá et al., 2009b), a vast number of linguistic terms (Askari, 2017; Ephzibah, 2011), etc. All those complexity issues complicate the automatic development of FIS and need to be solved. However, in the fuzzy set theory application field, they have not been investigated sufficiently in a systematic and comprehensive way. The authors of the analysed papers have focused on solving a particular task, like reducing computational complexity by decreasing the number of fuzzy rules (Ruiz-Garcia et al., 2019; Zhu et al., 2017; Harandi and Derhami, 2016; Bouchachia and Vanaret, 2014) or the number of MFs (Fan et al., 2019; Ibarra et al., 2015), etc. This lack of understanding of the general situation hampers progress in the analysed field, since academics offer limited approaches (Ivarsson and Gorschek, 2011).
Consequently, the following research questions arise: 1) "What complexity issues exist in the context of developing fuzzy inference systems (FIS)?" (RQ1) and 2) "Is it possible to systematize existing solutions of identified complexity issues?" (RQ2). In order to answer the defined research questions, this paper presents a synergy of two well-known research methods – a systematic literature review (SLR) and a systematic mapping survey (SMS). SLR is used to perform in-depth analysis and obtain answers to the defined research questions (Kitchenham et al., 2009; Mallett et al., 2012), and SMS with a keyword map – to identify more general research trends and detect topics that exist within the analysed field (Petersen et al., 2015; Kitchenham et al., 2011; Ramaki et al., 2018). Moreover, a keyword map allows us to visualize (Linnenluecke et al., 2019) and better understand each concept's real meaning in FIS development, systematize existing solutions of identified complexity issues, and develop the framework of complexity issues and their possible solutions in FIS development.
This research increases the body of knowledge on FIS development theory by providing a systematic view of complexity issues and their existing solutions. New trends in developing FIS are uncovered as well. Additionally, the results of this research can help researchers and practitioners become familiar with the found FIS complexity issues and their possible solutions. The main scientific contributions of this paper are as follows:
1. The complexity issues in FIS development are found and systematized.
2. The solutions for the found complexity issues in FIS development are discovered.
3. The framework of FIS development complexity issues and their solutions is proposed.
4. The hybrid SLR and SMS approach with a keyword map is employed to answer the research questions at various depths.
The novelty of this research lies in the systematic view of the found complexity issues in FIS development and the proposed framework of FIS development complexity issues and their solutions. The rest of this paper is structured as follows. Section 2 introduces the main concepts and explains their use in the paper. Section 3 presents related works. Section 4 presents the review method. Section 5 shows the results of the hybrid systematic review approach on complexity issues. Section 6 provides the developed framework of FIS development complexity issues and their possible solutions. Finally, the discussion is given in Section 7, and conclusions in Section 8.
3 Related Works
The authors of Antonelli et al. (2011) understand complexity as the interpretability of the rule base (RB), and the interpretability of fuzzy partitions as the integrity of the database. Data complexity is measured in terms of the average number of patterns per variable (i.e. data density) for pattern recognition (Ephzibah, 2011). FIS suffers from exponential complexity, which grows with the number of linguistic terms (the number of subspaces on the universe of discourse of input variables) and the number of input variables (Askari, 2017). Complexity is also measured by counting the number of operations (Ephzibah, 2011) or the number of RB elements, including the number of MFs, rules, premises, linguistic terms, etc. (Askari, 2017). Selecting a small number of right linguistic terms is essential for better interpretability. The total number of parameters of the fuzzy RB is also a measure of interpretability: a system with a smaller number of parameters is more interpretable and less complex (Ishibuchi and Nojima, 2009). In Askari (2017), Kaynak et al. (2002), the authors suggest reducing the exponential complexity of FIS by reducing the number of fuzzy (linguistic) terms, the number of fuzzy (linguistic) variables, or both. The model interpretability is measured in terms of complexity (Antonelli et al., 2016): "Complexity is affected by the number of features used for generating the model: the lower the number of features, the lower the complexity". RB complexity is measured as the total number of conditions in the rules' antecedents (Alcalá et al., 2009b).
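To make the exponential growth concrete, consider a complete rule base that covers every combination of linguistic terms. The following minimal sketch (our own illustration, not taken from the cited papers) computes the rule and condition counts for a few configurations:

```python
# Illustration: size of a complete fuzzy rule base.
# A FIS with n input variables, each partitioned into m linguistic terms,
# has m**n rules in a complete rule base; each rule has n antecedent
# conditions, so the total condition count (the RB complexity measure
# mentioned above) is n * m**n.
for m in (3, 5, 7):        # linguistic terms per variable
    for n in (2, 4, 8):    # input variables
        rules = m ** n
        print(f"m={m} terms, n={n} variables: "
              f"{rules} rules, {n * rules} antecedent conditions")
```

Already at m = 5 terms and n = 8 variables, the complete rule base contains 390 625 rules, which explains why reducing the number of terms or variables is the most frequently suggested remedy.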
The reviews most closely related to the present one are summarized in Table 1. The authors of Liu et al. (2017) have suggested developing hierarchical structure-based vague reasoning algorithms to handle complex systems and to reduce their complexity through decomposition and reuse of the whole system. Other findings in Liu et al. (2017) relate to the need to adapt existing knowledge inference systems for big data and real-time applications, increasing the accuracy of models while reducing reasoning efficiency and computational cost. The authors of D'Urso (2017) have stated that higher-type fuzzy systems are increasingly complex; therefore, they focus only on type-2 fuzzy sets to reduce the computational complexity. The authors of Shahidah et al. (2017) note the importance of a fuzzy system in correlated node behaviour detection, with emphasis on improving computational complexity and detection accuracy. In Sanchez-Roger et al. (2017), the authors mention the uncertainty of information in the complex environment of the financial field, where fuzzy logic helps to manage those complexities. Rajab and Sharma (2018) have found in their review that various data pre-processing methods, higher-order neuro-fuzzy methodologies, and various optimization mechanisms were combined with neuro-fuzzy systems (NFS) to improve their overall performance. In the future, newer efficient input and output processing techniques and different optimization techniques can be applied with various NFS approaches (Rajab and Sharma, 2018).
Table 1. Summary of the analysed literature reviews on complexity issues in FIS.

Reference | Research method | Research domain | Complexity issues | Solution/Conclusion
Liu et al. (2017) | Literature review, RM not presented | Fuzzy Petri nets (FPN) for knowledge representation | FPN algorithm complexity increases and depends on the scale of the created FPN model | Hierarchical structure based reasoning algorithms, combination of FPN with other uncertainty theories
D'Urso (2017) | SLR, RM not presented | Clustering approaches | Type complexity, computation complexity | Type reduction, interval type-2 fuzzy sets
Shahidah et al. (2017) | SLR, RM (Kitchenham et al., 2009) | Node behaviour detection in wireless sensor network | Computational complexity | General conclusions, automated uncertainty based fault detection and diagnosis approach
Sanchez-Roger et al. (2017) | SLR, bibliometric analysis | Financial field | Uncertainty of information, complex environment | General conclusions
Rajab and Sharma (2018) | Review | Neuro-fuzzy systems in business | The mass and vagueness of datasets; complex, uncertain, unclear or lacking real-world information | General conclusions
However, the complexity issues of FIS development are not analysed in detail in the presented papers. Consequently, there is no systematic and comprehensive review of FIS complexity issues and their solutions, i.e. the analysed papers differ from this study in their research questions and aims.
4 Review Method
SLR was adopted as proposed by Kitchenham et al. (2009), and SMS as proposed in Petersen et al. (2015), Kitchenham and Charters (2007), Linnenluecke et al. (2019), Ramaki et al. (2018). The hybrid SLR and SMS approach used in this review is presented in Fig. 2. It consists of four main stages: review design, review conduct, review analysis, and quality assurance (i.e. reducing threats to validity). The first, second, and third stages are done sequentially one after the other, with some iterations if necessary, and the fourth stage is interleaved with the first three. In the rest of this section, the details of those stages are described.
Review design consists of the following activities:
Fig. 2. The hybrid SLR and SMS approach schema.
1. Defining research questions. The review begins with the definition of the research questions, which were described in the Introduction. This review was conducted from December 2019 until April 2020, so we considered the papers published until January 2020.
2. Defining research scope, inclusion (IC) and exclusion (EC) criteria, and search sources. The scope of our review is all papers that have been prepared in Computer Science (CS), Information Systems (IS), and Software Engineering (SE). The Web of Science (WoS) database was chosen as the search source. For the motivation, see Section "Source evaluation".
Based on the defined scope of the research, the papers' IC and EC are defined as follows (a sketch of applying them programmatically is given after the list):
IC1: Universally accepted relevant works on FIS development, including MFs and fuzzy rules development, construction or generation, issues, limitations or complexity.
IC2: Papers must be open access.
EC1: Exclude papers that do not relate to the FIS development complexity issues, i.e. papers that contain relevant keywords, but FIS issues, limitations or complexity are not discussed in the abstract.
EC2: Exclude duplicate papers that repeat ideas described in earlier works and whose abstracts are similar, i.e. if one paper is an extension of another, the less extended (i.e. shorter) paper is excluded (Kitchenham, 2004).
EC3: Exclude papers whose length is less than 10 pages, since such short papers can present only a general idea, but cannot describe the overall approach (Dybå and Dingsøyr, 2008).
EC4: Exclude grey literature, conference covers, posters, and so on (Dybå and Dingsøyr, 2008).
EC5: Exclude papers not in English.
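Most of these criteria can be expressed as simple predicates over exported bibliographic records. The following minimal sketch is our illustration only; the record fields (abstract, pages, language, doc_type) are hypothetical names standing in for the corresponding WoS export columns, and the keyword check is a simplification of EC1:

```python
# Hypothetical structure of an exported bibliographic record.
record = {
    "title": "...", "abstract": "...", "pages": 14,
    "language": "English", "doc_type": "Article",
}

EC1_KEYWORDS = ("issue", "limit", "complex")

def passes_exclusion_criteria(r):
    """Return True if the record survives EC1, EC3, EC4, and EC5.

    EC2 (duplicates/extensions) requires pairwise comparison of
    abstracts, so it is handled in a separate pass.
    """
    abstract = r["abstract"].lower()
    if not any(k in abstract for k in EC1_KEYWORDS):               # EC1
        return False
    if r["pages"] < 10:                                            # EC3
        return False
    if r["doc_type"] not in ("Article", "Proceedings Paper"):      # EC4
        return False
    if r["language"] != "English":                                 # EC5
        return False
    return True
```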
3. Defining search keywords. In this step, we define relevant keywords, which, together with the selected sources, are used to formulate a search string. The established keyword hierarchy is presented in Table 2.
Table 2. Keyword hierarchy.

Main concepts | Reduced concepts | Rationale
Fuzzy Inference System | fuzzy* | Specifies the overall area of developing and using FIS.
Membership function | "membership function*" | FIS involves generation, construction and development of MFs and fuzzy rules.
Fuzzy rule | "fuzzy rule*" |
Development | develop* | FIS involves generation, construction and development of MFs and fuzzy rules.
Generation | generat* |
Construction | construct* |
Complexity issue, Issue, Complex | issue*, complex* | These concepts are primary from the RQs.
4. Defining search string. Here, we have specified the following two search strings:
Search string 1: ((fuzzy*) AND (“membership function*”) AND (“develop*” OR “generat*” OR “construct*”) AND (“issue*” OR “limit*” OR “complex*”)).
Search string 2: ((“fuzzy rule*”) AND (“issue*” OR “limit*” OR “complex*”) AND (“reduc*” OR “optimiz*”)).
Search string 1 is the primary search string used for the initial search. Search string 2 is the secondary search string developed using keywords from Table 2 and refined, by adding reduc* and optimiz*, after backward snowballing.
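In these strings, * is the WoS wildcard for any word ending. To illustrate the matching logic (a sketch under our own assumptions, not a reproduction of WoS internals), each wildcard term can be translated into a regular expression, with Boolean AND between groups and OR within a group:

```python
import re

# Search string 1 as groups of alternatives: every group must match (AND),
# any alternative within a group suffices (OR).
GROUPS = [
    ["fuzzy*"],
    ["membership function*"],
    ["develop*", "generat*", "construct*"],
    ["issue*", "limit*", "complex*"],
]

def term_to_regex(term):
    # e.g. "membership function*" -> r"\bmembership function\w*"
    return re.compile(r"\b" + re.escape(term).replace(r"\*", r"\w*"),
                      re.IGNORECASE)

def matches_search_string(text):
    return all(any(term_to_regex(t).search(text) for t in group)
               for group in GROUPS)

print(matches_search_string(
    "Generating fuzzy membership functions: complexity limits"))  # True
```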
Review conduct. As shown in Fig. 2, it consists of the following activities:
5. Running the search string. In this step, the defined search strings are run in the WoS engine. For the first iteration, we used search string 1. The output of this search is the primary set of papers. For the second iteration, we used search string 2, which was developed after applying the backward snowballing strategy. The results of the search are presented in Table 3.
6. Applying IC and EC. Here, the predefined IC and EC were applied to the primary set to obtain only relevant papers on the analysed topic. The input of this step is the primary set of papers. The output is the secondary set of papers. The results of the application of IC and EC are presented in Table 3.
Table 3. Number of obtained papers (Articles (A) or Proceedings Papers (PP)). The first group of columns describes the primary set of papers, the second group the secondary set.

Search | Years | A | PP | All | Years | A | PP | All
Search1 | 1991–2019 | 437 | 278 | 715 | 1993–2019 | 74 | 5 | 79
Search2 | 1991–2019 | 366 | 332 | 665 | 1993–2019 | 147 | 3 | 150
All | 1991–2019 | 803 | 610 | 1380 | 1993–2019* | 209* | 8* | 217*
7. Backward snowballing. In this review, we applied the backward snowballing technique (Jalali and Wohlin, 2012), searching for relevant keywords in the titles, abstracts, and keywords of the secondary set of papers to improve and refine the search strings. As the result of the first iteration, the secondary search string (Search string 2) was developed.
The direct inclusion of the backward snowballing technique in the review method allows us to iteratively evaluate the secondary set of papers. It supplements the search process with additional keywords, bringing in relevant papers that would otherwise be omitted among the large number of papers. The backward snowballing technique should be applied until no new papers are found or time runs out. In this review, two iterations were performed, since further iterations did not increase the number of relevant papers.
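A minimal sketch of the keyword-refinement part of this loop (our own illustration; the tokenization and stop-list are simplifications): count terms in the included papers' titles, abstracts, and keywords, and propose frequent terms not yet covered by the search string as refinement candidates.

```python
import re
from collections import Counter

CURRENT_TERMS = {"fuzzy", "membership", "function", "rule",
                 "develop", "generat", "construct",
                 "issue", "limit", "complex"}
STOP = {"the", "of", "and", "in", "a", "to", "for", "is", "on", "with", "by"}

def refinement_candidates(included_texts, top=10):
    """Propose new search terms from the already included papers."""
    counts = Counter()
    for text in included_texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            # Skip stop words, short tokens, and terms already searched for.
            if token in STOP or len(token) < 4:
                continue
            if any(token.startswith(t) for t in CURRENT_TERMS):
                continue
            counts[token] += 1
    return counts.most_common(top)

# In our review this kind of inspection surfaced candidates such as
# "reduc*" and "optimiz*", which were added to search string 2.
print(refinement_candidates([
    "Reducing fuzzy rule complexity by rule base optimization",
    "Optimized reduction of membership functions",
]))
```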
Review analysis. As shown in Fig. 2, the data (i.e. the primary set of complexity issues) analysis consists of two parallel branches, i.e. SLR and SMS. It consists of the following activities:
8. Extracting data. This step is conventional to the data extraction in SLR (Petersen et al., 2015). It covers extracting initial data from the abstracts and tabulating it. The input of this step is the abstracts of the secondary set of papers. The output is the primary set of complexity issues (see Fig. 4).
9. Performing data analysis. This step covers the analysis of the extracted data. The data extracted from the papers was tabulated and plotted to present basic information on the RQs. The obtained initial set of complexity issues was grouped into categories, based on Fig. 1 (a counting sketch is given below).
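For instance (an illustration with hypothetical sample data; the category labels match the four issues reported in Section 7), tabulating the per-paper issue annotations reduces to a frequency count:

```python
from collections import Counter

# One entry per analysed paper: the complexity-issue categories
# identified in its abstract (hypothetical sample data).
annotations = [
    {"CFR"}, {"CFR", "CMF"}, {"CMF"}, {"CC"}, {"CFR"}, {"DC", "CC"},
]

freq = Counter(issue for paper in annotations for issue in paper)
total = len(annotations)
for issue, n in freq.most_common():
    print(f"{issue}: {n} papers ({100 * n / total:.2f}%)")
```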
10. Creating a keyword map based on abstracts. This step corresponds to the keyword mapping activity. It is described in detail in Section 5.3. The input of this step is the abstracts of the secondary set of papers and the fuzzy keyword thesaurus specially developed for this review (see Section 5.3). The output of this step is the keyword map (see Fig. 5).
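A keyword thesaurus of this kind typically maps synonymous or inflected terms to one canonical label, so that variants merge into a single node of the map. A minimal sketch of such a normalization step (the entries below are hypothetical examples, not our actual thesaurus):

```python
# Hypothetical thesaurus entries: variant keyword -> canonical keyword.
THESAURUS = {
    "membership functions": "membership function",
    "fuzzy rules": "fuzzy rule",
    "fuzzy rule base": "rule base",
    "fis": "fuzzy inference system",
}

def normalize(keyword):
    """Map a raw keyword to its canonical form (identity if unknown)."""
    return THESAURUS.get(keyword.strip().lower(), keyword.strip().lower())

print(normalize("Fuzzy Rules"))  # -> "fuzzy rule"
```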
11. Performing analysis of the keyword map. This step covers the analysis of the keyword map. For more details, see Section 5.3. The input of this step is the keyword map. The output is the main trends (see Section 5.3). The main results of this step are presented in Section 5.
12. Synchronizing results. In this step, we synchronize the obtained results and develop the complexity issues framework (see Section 6).
13. Drawing conclusions. Here, the final conclusions and discussion regarding RQs are drawn.
In Fig. 3, the trend of research on the topic is illustrated. The number of papers on FIS complexity issues rose during 2011–2019.
Source evaluation. Gusenbauer and Haddaway (2020) compared 28 widely used academic search systems and found that only 14 of the 28 are well-suited to SLR, since they met all necessary performance requirements. Among those 14 systems, emphasizing the Computer Science research area, the principal search systems are the following: ACM Digital Library, Bielefeld Academic Search Engine (BASE), ScienceDirect, Scopus, WoS, and Wiley Online Library. For this review, we compared these search systems according to the following criteria: overlap, scope, quality of the presented research, and the possibility of full download (not separate download) of search results for bibliometric analysis.
Fig. 3. Number of obtained papers (Poly. (All) is a trend line).
BASE is useful for users without access to paywalled content (Gusenbauer, 2019), which means that large portions of the academic web are not represented in it. The publisher Elsevier owns both ScienceDirect and Scopus, but Scopus provides the possibility of writing more sophisticated search strings than ScienceDirect. According to Martín-Martín et al. (2018), the WoS and Scopus databases fail to overlap in only 12.2% of documents in Engineering and Computer Science. ACM Digital Library, BASE, and Wiley Online Library, having a large number of proceedings publications, lose their advantage because of our predefined EC2, EC3, and EC4 (Dybå and Dingsøyr, 2008). To ensure the quality of publications, the databases use different types of impact factors: WoS has an Impact Factor (IF), and Scopus has its own CiteScore, which is an alternative to the WoS IF. ACM Digital Library and Wiley Online Library count only total citations for each publication. The analysis of the possibility of full download of search results shows that WoS and Scopus have the most carefully arranged bibliographic data. WoS allows downloading up to 500 items at a time, while Scopus allows full downloading. Summing up all advantages and disadvantages and taking into consideration time and performance constraints, WoS was chosen for this review.
Threats to validity. Here, we discuss the potential threats to validity of this review together with their mitigation actions we have taken.
Construct validity refers to the concepts being studied. When defining the review scope and keywords, we faced uncertainty about whether researchers refer to the FIS complexity issues or to the usage of FIS to solve particular problem domain issues. Consequently, a primary analysis of papers was done to become familiar with the FIS complexity issues and define the related keywords more precisely. Some of the main related works are presented in Section 3. We have used WoS for the search, since it enables us to find the most suitable, complete, and non-duplicate high-quality refereed papers. To deal with validity threats regarding the search string (i.e. missing keywords leading to the exclusion of relevant papers), we carried out a primary study during preparation (Miliauskaitė and Kalibatiene, 2020a). Moreover, after performing the first iteration of the search, we applied the backward snowballing technique to develop a new search string from the already included papers. As a result, we obtained search string 2 for the next iteration of the search. Finally, considering the significant number of papers in the primary set (1380), we have decided that our results and findings are valuable for providing researchers and practitioners with an overview of the state of the art of FIS complexity issues.
An internal threat to validity in this research refers mainly to the individual researcher's bias in 1) deciding whether to include or exclude a paper in the secondary set, 2) classifying it according to the complexity issues, and 3) analysing the results. We have used a clearly defined search strategy, assessed the obtained results independently, and combined the results to minimize the researcher's bias.
External validity refers to this review's results and conclusions. They are only valid for the FIS whose understanding is described in Section 2. We have made great efforts to systematically set up the review protocol and apply it to ensure those general conclusions are valid irrespective of the lack of consensus.
Used tools. Various researchers (Li et al., 2017; Chen, 2018; Chen et al., 2019; Vilutiene et al., 2019) have used different science mapping tools, including VOSviewer, BibExcel, CiteSpace, CoPalRed, Sci2, VantagePoint, and Gephi, for analysing, mapping, and visualizing bibliographic data. A detailed review of visualization tools is not the main aim of this paper. We used VOSviewer as an analysis tool. VOSviewer generates a network from the given bibliographic data. All networks consist of nodes and links. Nodes present documents (i.e. articles), sources (i.e. journals), authors, organizations, countries, or keywords. Nodes with a higher number of occurrences are bigger. Links present relationships among nodes; thicker links present closer relationships. Closely related nodes are combined into clusters using the smart local moving algorithm presented in Waltman and Van Eck (2013).
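The keyword network underlying such a map is a plain co-occurrence graph: two keywords are linked if they appear in the same abstract, and the link weight counts how often. A minimal sketch of building such a graph (our own illustration; the resulting node and edge lists can then be fed into a visualization tool):

```python
from collections import Counter
from itertools import combinations

# Normalized keywords per paper (hypothetical sample data).
papers = [
    {"fuzzy rule", "rule base", "complexity"},
    {"membership function", "complexity"},
    {"fuzzy rule", "membership function", "complexity"},
]

occurrences = Counter(kw for p in papers for kw in p)   # node sizes
cooccurrences = Counter(
    frozenset(pair)
    for p in papers
    for pair in combinations(sorted(p), 2)              # link weights
)

for pair, weight in cooccurrences.most_common():
    print(" -- ".join(sorted(pair)), f"(weight {weight})")
```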
7 Discussion
As FIS become increasingly complex because of their application domains and the tasks being solved, FIS development complexity issues need to be systematized and classified to ensure development efficiency and effectiveness. In this research, we have applied a hybrid SLR and SMS approach to answer the defined research questions: (RQ1) What complexity issues exist in the context of developing fuzzy inference systems (FIS)? and (RQ2) Is it possible to systematize existing solutions of identified complexity issues? The conducted review shows an increase in the number of papers analysing different complexity issues in FIS development. This can be attributed to technological development and the rising applicability of fuzzy theory for representing uncertainties in various application domains.
Finally, we can summarize the obtained results and answer the research questions. Four main issues have been found in the reviewed papers (RQ1): 1) computational complexity (CC), 2) complexity of fuzzy rules (CFR), 3) complexity of MF development (CMF), and 4) data complexity (DC). These complexity issues did not occur with equal frequency in the analysed papers: CFR had the highest occurrence, CMF – high occurrence, CC – moderate occurrence, and DC – low occurrence. Here, it is necessary to discuss why this is so. CFR and CMF occurred most often, since the development of MFs and fuzzy rules is central to developing FIS (Fig. 1). The defined MFs and fuzzy rules form a fuzzy model impacting the FIS inference results. Thus, the more precise the definition of MFs and fuzzy rules is, the more accurate results and the more efficient FIS we will get. Moreover, we express domain knowledge through MFs and fuzzy rules. Therefore, in our proposed framework of complexity issues and their possible solutions in FIS development, CMF and CFR are generalized as Knowledge Complexity. CC occurred moderately. It is an important complexity issue in FIS development, since numerous calculations have to be performed in each FIS component. However, in the analysed papers, not all authors mention this complexity issue in their abstracts. They highlight CFR and CMF and believe that solving them enables the reduction of CC. Solving CFR and CMF reduces CC significantly, although not entirely, since CC also depends on the calculations in other FIS components, like defuzzification. Consequently, to reduce CC, we need an efficient way of developing all FIS components. DC has the lowest occurrence in the analysed papers. DC is a global issue going beyond FIS. Since we limited our search strings to FIS and not to all software systems, we got a small number of papers highlighting DC. Moreover, as observed, FIS is usually used to address DC, rather than DC being a FIS development complexity issue. Therefore, the review would need to be extended with new keywords, like "high-dimensional data", "big data", "noisy data", etc., to analyse DC more accurately. However, this is outside the scope of this review.
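To illustrate why MFs and rules are the heart of the fuzzy model, consider a minimal single-rule Mamdani-style inference sketch (our own illustration, not an implementation from the reviewed papers). The shape of the MFs and the chosen rule directly determine the inference result:

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Fuzzy model: one input MF ("temperature is high"), one output MF
# ("fan speed is fast"), and one rule: IF temp is high THEN speed is fast.
temp = 28.0                                         # crisp input
firing = trimf(np.array([temp]), 20, 30, 40)[0]     # rule activation degree

speed_axis = np.linspace(0, 100, 1001)
fast = trimf(speed_axis, 50, 80, 100)
clipped = np.minimum(fast, firing)                  # min-implication

# Centroid defuzzification of the clipped output fuzzy set.
speed = np.sum(speed_axis * clipped) / np.sum(clipped)
print(f"firing degree {firing:.2f} -> fan speed {speed:.1f}")
```

Changing the MF parameters or the rule changes the output directly, which is why the precision of MF and rule definitions (CMF and CFR) dominates the analysed papers.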
We have performed a co-relationship analysis between different pairs, triples, and quadruples of issues to determine the most related complexity issues. A single complexity issue is analysed in most of the papers (147 papers, 67.74%). Pairs of complexity issues are analysed in 64 papers (29.49%), triples – in 6 papers (2.76%), and a quadruple was not found. This comparison allows us to state that authors tend to analyse one particular issue, less often two or three related issues, and do not analyse all the issues in conjunction. The authors choose the complexity issues most relevant to their research and reduce them using a particular approach. Besides, they expect that reducing the most relevant complexity issue will reduce the other related complexity issues. Therefore, for in-depth knowledge elicitation in this domain, a deeper analysis of causal relationships among complexity issues is necessary, applying causality-driven methods (Gudas et al., 2019) and determining fuzzy relations (Ferrera-Cedeño et al., 2019) among complexity issues.
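The counting behind this co-relationship analysis is straightforward; a sketch (with hypothetical sample annotations) using itertools.combinations:

```python
from collections import Counter
from itertools import combinations

# Complexity issues identified per paper (hypothetical sample data).
papers = [{"CFR"}, {"CFR", "CMF"}, {"CMF", "CC", "DC"}, {"CC"}, {"CFR"}]

by_cardinality = Counter(len(p) for p in papers)  # singles/pairs/triples
pair_counts = Counter(
    frozenset(pair) for p in papers for pair in combinations(sorted(p), 2)
)

print("papers per issue-set size:", dict(by_cardinality))
print("most common pair:", pair_counts.most_common(1))
```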
This paper has proposed the framework of complexity issues and their possible solutions in FIS development (RQ2). It allows us to systematize existing solutions for each complexity issue found in RQ1. As can be seen, the same solutions, like genetic algorithms, neural networks, approximation and optimization techniques, etc., are used to solve different complexity issues. From a global perspective, the found complexity issues are not new in CS. Therefore, to solve those complexity issues, we propose to employ well-known Possible general solutions (Fig. 12). Moreover, they show the directions in which FIS can and should be developed and refined. Summarizing, complexity issues in FIS development should be solved by searching for new approaches and by applying already known approaches from more general fields, like computational complexity theory, parallel computing techniques, information granularity techniques, knowledge management techniques, etc.
One more result of this review is the application of the hybrid SLR and SMS approach. Its advantage is that it combines SLR's capability to perform in-depth analysis for answering the defined research questions with SMS's keyword map for visualizing and better understanding each concept's real meaning in FIS development.
The overall advantage of the results obtained in this paper is that they enhance knowledge on FIS development and help researchers and practitioners become familiar with the found FIS development complexity issues and their possible solutions.
7.1 Limitations of the Review
The most common limitations of systematic reviews are related to the coverage of the search and to possible biases introduced during study selection, data extraction, and analysis. These are also the main limitations of this review. The search coverage threat relates mainly to the search source of this review, since we have chosen only one search engine, i.e. WoS. However, in Section 4, we have justified the WoS selection. Moreover, our iterative search allowed us to obtain a significant number of papers in the primary set (1380), which is a sufficient initial set size to perform the review. Additionally, we concentrate on the papers published in Computer Science, Information Systems, and Software Engineering. Although performing a systematic review of other fields was not the aim of our review, we understand that important papers might have been excluded from our discussions. A systematic review performed manually across all fields may not be feasible. Consequently, we intend to conduct a systematic review focusing on a field related to FIS development while keeping the review process feasible.
We addressed potential research bias in assessing papers, data extraction, and data analysis by strictly following the predefined review protocol, assessing the obtained results independently, and then combining them to minimize the researcher's bias. Two researchers performed all tasks independently and then merged the results.
Finally, since the review process is described in detail, the review's replicability is ensured. However, there is no guarantee that other researchers will obtain exactly the same results as presented in this work (though they should be similar), since subjectivity cannot be eliminated.
7.2 Lessons Learned
The lessons learned from this review can be organised according to two factors: those related to performing a systematic review and those related to the complexity issues and their possible solutions in FIS development.
Lessons learned regarding the literature review. There has been an increasing number of contributions related to the analysed topic in recent years. Researchers need to review a greater number of papers, which may lead to more general conclusions and developments in the FIS area. However, the large number of papers, the heterogeneity of search sources (i.e. inconsistent search fields, inconsistent syntax and filters, inconsistencies in exportable formats, a limited number of exportable citations, inconsistencies in exportable data, etc. (Shakeel et al., 2018)), and the manual review process make it difficult to conduct more general reviews and obtain general conclusions. Current search engines are not designed to support systematic reviews in Computer Science, Information Systems, and Software Engineering (Brereton et al., 2007). Therefore, we are limited to performing source-dependent searches. Moreover, from this observation we have learned that automated tools and techniques need to be made applicable to such reviews.
Another lesson learned is related to the step of performing data analysis. Most systematic reviews in Computer Science, Information Systems and Software Engineering are exploratory and based on a quantitative approach. In our study, we have calculated the frequency of complexity issues in the analysed papers. Therefore, different quantitative analysis approaches, like statistical analysis, should be used for the data analysis. However, note that a sufficiently large sample needs to be studied to apply statistical methods.
Lessons learned regarding the complexity issues and their possible solutions in FIS development are mainly presented in the form of the developed framework, which allows us to systematize the found complexity issues and, consequently, their solutions. Moreover, we have determined that the found complexity issues are not new in the software systems development area. In the literature, we can find a number of general solutions for a particular issue; for example, computational complexity can be reduced through reduction and approximation techniques. Therefore, the following two strategies can be chosen to solve the found complexity issues in FIS development: 1) look for existing and approved solutions in the more general field, or 2) invent and approve new solutions for FIS development.
8 Conclusions
The analysis of the related works in the field of FIS development shows that there are plenty of approaches to develop FIS. Their authors point out various limitations, issues, or drawbacks of developing FIS stemming from its complexity. All those complexity issues complicate automatic FIS development and need to be solved. In the literature, they have not been investigated sufficiently in a systematic and comprehensive way. Consequently, there is a need to determine what complexity issues exist in FIS development and which solutions are applied to solve them.
To answer the defined research questions, we have applied a hybrid SLR and SMS approach with a keyword map. SLR allows us to perform an in-depth analysis. SMS with a keyword map is employed to visualize and better understand each concept's real meaning in FIS development, systematize existing solutions of identified complexity issues, and develop the framework of complexity issues and their possible solutions in FIS development.
The obtained analysis results for RQ1 show four main complexity issues: 1) computational complexity, 2) complexity of fuzzy rules, 3) complexity of MF development, and 4) data complexity. The occurrence of those complexity issues in the analysed papers is as follows: the complexity of fuzzy rules has the highest occurrence, the complexity of MF development – high occurrence, computational complexity – moderate occurrence, and data complexity – low occurrence. The obtained analysis results for RQ2 show that the same solutions, like genetic algorithms, neural networks, approximation and optimization techniques, etc., are used to solve different complexity issues.
Based on the answers found to the defined research questions, we have proposed a framework of complexity issues and their possible solutions in FIS development. It allows developers to better understand FIS development complexity issues and their possible solutions, and it encourages further research directions towards more effective and efficient FIS development.
In future research, we are going to do the following:
1) extend our review to several search sources;
2) extend our research to examine the synergy effect in FIS;
3) apply other techniques, like clustering analysis, etc., for the co-relationship analysis of complexity issues and their solutions; and
4) analyse FIS development and deep learning possibilities.