MACONT: Mixed Aggregation by Comprehensive Normalization Technique for Multi-Criteria Analysis
Volume 31, Issue 4 (2020), pp. 857–880
Zhi Wen   Huchang Liao   Edmundas Kazimieras Zavadskas  

https://doi.org/10.15388/20-INFOR417
Pub. online: 8 June 2020    Type: Research Article    Open Access

Received
1 December 2019
Accepted
1 April 2020
Published
8 June 2020

Abstract

Normalization and aggregation are two of the most important issues in multi-criteria analysis. Although various multi-criteria decision-making (MCDM) methods have been developed over the past several decades, few of them integrate multiple normalization techniques and mixed aggregation approaches at the same time to reduce the deviations of evaluation values and enhance the reliability of the final decision result. This study is dedicated to introducing a new MCDM method called Mixed Aggregation by COmprehensive Normalization Technique (MACONT) to tackle complicated MCDM problems. This method introduces a comprehensive normalization technique based on criterion types, and then uses two mixed aggregation operators to aggregate the distance values between each alternative and the reference alternative on different criteria from the perspectives of compensation and non-compensation. An illustrative example is given to show the applicability of the proposed method, and the advantages of the proposed method are highlighted through sensitivity analyses and comparative analyses.

1 Introduction

Decision making is a frequent activity in management. It is a process of analysis and judgment in which an optimal alternative is selected from several alternatives to achieve a certain target. For a decision-making problem, alternatives and the criteria used to evaluate the performance of alternatives are two essential elements. However, in many practical decision-making problems, it is difficult or unrealistic for decision-makers to establish a single criterion that covers all aspects of the problem and to capture the best alternative by evaluating alternatives under that criterion alone. In complex environments, it is common to portray the performance of alternatives by multiple criteria with different dimensions, which may conflict with one another, and then to rank the alternatives and select the optimal one. This has motivated the development of various multi-criteria decision-making (MCDM) methods for solving complicated decision-making problems (Alinezhad and Khalili, 2019; Liao et al., 2020, 2018; Zavadskas et al., 2014). For example, Kou et al. (2012) employed the TOPSIS, ELECTRE, GRA, VIKOR, and PROMETHEE methods (the explanations of all abbreviations used in this paper can be found in Table A.1 in Appendix A) for classification algorithm selection; Liao et al. (2019) integrated the BWM and ARAS methods for digital supply chain finance supplier selection; and Kou et al. (2020) applied the TOPSIS, VIKOR, GRA, WSM, and PROMETHEE methods to evaluate feature selection methods for text classification with small datasets.
From the perspective of obtaining the final ranking of alternatives, the existing MCDM methods can be divided into two categories: one is based on pairwise comparisons between alternatives, such as the AHP, ANP, TODIM, PROMETHEE, EXPROM, ELECTRE, and GLDS methods (Wu and Liao, 2019); the other is based on the utility values of alternatives, such as the TOPSIS, VIKOR, ARAS, WASPAS, and MULTIMOORA methods (Wu and Liao, 2019). The latter category of MCDM methods includes the following stages: 1) establishing a decision matrix, 2) normalizing the decision matrix, 3) aggregating the performance of alternatives under all criteria, and 4) determining the ranking of alternatives and the optimal alternative. In this sense, the main reason why different methods may produce different decision-making results lies in the differences between the normalization techniques and aggregation functions used in these methods.
Generally, the performance of alternatives under different criteria is measured in different units, and all elements in a decision matrix must be made dimensionless to enable an effective comparison. Linear normalization, a normalization technique widely used in MCDM methods, has three main forms: the linear sum-based normalization, the linear ratio-based normalization, and the linear max-min normalization (Jahan and Edwards, 2015). Each of these normalization techniques has its own emphasis: the linear sum-based normalization technique emphasizes the proportion of the performance of an alternative in the sum of the performances of all alternatives under a criterion; the linear ratio-based normalization technique emphasizes the ratio between the performance of an alternative and the best one under a criterion; the linear max-min normalization technique emphasizes the ratio of the difference between the performance of an alternative and the worst one to the difference between the best and the worst alternatives under a criterion. Most MCDM methods use only a single normalization technique, which can easily lead to faulty results because it cannot fully reflect the original information. In this regard, this study presents a comprehensive normalization technique which combines the aforementioned three normalization techniques so that the normalized data reflect the original data synthetically. It is worth noting that the hybrid/mixed normalization approaches used in many MCDM methods apply a single normalization technique to each type of criterion, while the comprehensive normalization technique proposed in this study integrates multiple normalization techniques for the same type of criterion. To some extent, the comprehensive normalization technique can reduce the error that a single normalization technique introduces into the collective results (this is illustrated by the example in Section 3). In addition, to fuse the normalized data derived by the three normalization techniques, we introduce two parameters to represent the weights of the different normalized data according to the preferences of experts.
Almost all MCDM methods depend on aggregation functions to aggregate the performance of alternatives under different criteria, and the selection of the aggregation function may directly affect the decision-making results (Aggarwal, 2017). The arithmetic weighted aggregation operator and the geometric weighted aggregation operator have been universally applied in many MCDM methods, such as VIKOR, WASPAS, ARAS, and MULTIMOORA. The arithmetic weighted aggregation operator has also been used to aggregate group opinions in decision-making problems (Zhang et al., 2019). However, these two aggregation operators lead to compensation effects among criteria. An alternative that performs well under a few highly weighted criteria but poorly under most criteria may be selected as the optimal alternative because of the compensation effect among these criteria, even though its poor performance under most criteria means it is not the optimal alternative that decision-makers expect. In response to this problem, this study fuses the performance of alternatives under different criteria by two mixed aggregation operators from the perspectives of compensation and non-compensation among criteria.
In addition, setting a reference alternative in the decision-making process can reduce the impact of the loss-aversion bias (Lahtinen et al., 2020). The reference alternative in many methods, such as the TOPSIS, VIKOR and ARAS, consists of the best performance of alternatives under each criterion, and the optimal alternative is determined according to the principle of the closest distance from the reference alternative (the TOPSIS method not only sets this reference alternative, but also sets the worst reference alternative which consists of the worst performance of alternatives under each criterion, and the optimal alternative is determined according to the principle of farthest distance from the reference alternative). However, there are few methods using the average performance of alternatives under each criterion as the reference alternative, which determines the optimal alternative according to the principle of the longest positive distance from the reference alternative and the shortest negative distance from the reference alternative. Inspired by this idea, before the aggregation process, we set a virtual reference alternative which consists of the average performance of alternatives under each criterion. Such a reference alternative can comprehensively consider the good performance and bad performance of an alternative compared with other alternatives.
To sum up, this study is devoted to the following innovations:
  1. Present a comprehensive normalization method which combines three linear normalization techniques based on the criterion types to reduce the deviations produced in the normalization process.
  2. Set a virtual reference alternative which consists of the average performance of alternatives on each criterion to simultaneously consider the good performance and bad performance of an alternative compared with other alternatives.
  3. Introduce two mixed aggregation operators from the perspectives of compensation and non-compensation among criteria to aggregate the distance value between each alternative and the reference alternative under each criterion, which can obtain multi-aspect and reliable ranking results of alternatives.
  4. Propose the detailed operational procedure of the MACONT method, and apply this method to solve a selection problem of sustainable third-party reverse logistics providers.
The framework of this study is divided into the following parts: Section 2 reviews the normalization techniques and aggregation functions used in various MCDM methods. Section 3 proposes the mixed aggregation by comprehensive normalization technique (MACONT) method. Section 4 gives an illustrative example to demonstrate the applicability of the proposed method. Section 5 provides some sensitivity analyses and comparative analyses to highlight the advantages of the proposed method. The conclusion is drawn in Section 6.

2 Literature Review

In this section, we review the normalization techniques and aggregation approaches used in various MCDM methods.

2.1 Review of Normalization Techniques

In many MCDM problems, different criteria usually differ in dimension and magnitude (Chen, 2019). To compare alternatives effectively, the original data under different evaluation criteria need to be transformed into dimensionless form by various normalization techniques (Jahan and Edwards, 2015). The vector normalization technique and linear normalization technique are two commonly used normalization techniques in many MCDM methods.
The MCDM methods using the vector normalization technique include TOPSIS (Hwang and Yoon, 1981), MOORA (Brauers and Zavadskas, 2009), MULTIMOORA (Brauers and Zavadskas, 2010) and ELECTRE (Roy, 1991; Govindan and Jepsen, 2016). Opricovic and Tzeng (2004) pointed out that the normalized data computed by the vector normalization technique depend on the evaluation unit of a criterion, and the normalized data obtained under different evaluation units for a criterion may differ. Regarding the MCDM methods using the linear normalization technique, the COPRAS (Zolfani and Bahrami, 2014), ARAS (Zavadskas and Turskis, 2010), ANP (Jharkharia and Shankar, 2007), IDOCRIW (Zavadskas and Podvezko, 2016) and TODIM (Gomes, 2009) methods apply the linear sum-based normalization technique; the WASPAS (Zavadskas et al., 2012) and EDAS (Keshavarz Ghorabaee et al., 2015) methods exploit the linear ratio-based normalization technique; and the VIKOR (Opricovic and Tzeng, 2007), MABAC (Pamucar and Cirovic, 2015), MACBETH (Bana e Costa and Chagas, 2004), MAUT (Emovon et al., 2016), CRITIC (Diakoulaki et al., 1995), KEMIRA (Krylovas et al., 2014) and CoCoSo (Yazdani et al., 2019) methods employ the linear max-min normalization technique. However, these methods only use a single normalization technique, which easily leads to deviations between the normalized data and the original data. To ameliorate this problem, Liao and Wu (2020) presented the DNMA method, an MCDM method combining the target-based vector normalization technique and the target-based linear normalization technique. Nevertheless, such a double normalization technique does not normalize the original data in accordance with the different types of criteria. Hence, this study proposes a comprehensive normalization technique based on the criterion types to reduce the deviations produced in the normalization process.

2.2 Review of Aggregation Functions

Table 1
The normalization techniques and aggregation operators used in various MCDM methods.
MCDM method | Normalization technique(s) | Aggregation operator(s)
TOPSIS | Vector | Arithmetic weighted
ARAS | Linear sum-based | Arithmetic weighted
COPRAS | Linear sum-based | Arithmetic weighted
MACBETH | Linear max-min | Arithmetic weighted
MAUT | Linear max-min | Arithmetic weighted
EDAS | Linear ratio-based | Arithmetic weighted
VIKOR | Linear max-min | Arithmetic weighted, weighted maximum
MULTIMOORA | Vector | Arithmetic weighted, geometric weighted, weighted maximum
WASPAS | Linear ratio-based | Arithmetic weighted, geometric weighted
CoCoSo | Linear max-min | Arithmetic weighted, geometric weighted
The proposed method | Linear sum-based, linear ratio-based, linear max-min | Arithmetic weighted, geometric weighted, weighted maximum, weighted minimum
Aggregation operators are the basis of information fusion, which are used to combine multiple values into a collective one (Blanco-Mesa et al., 2019; Mi et al., 2020). In many MCDM methods, the arithmetic weighted aggregation operator has been frequently used. The TOPSIS method (Hwang and Yoon, 1981) uses the arithmetic weighted aggregation operator to calculate the distances of alternatives from the positive ideal solution and negative ideal solution. The ARAS method (Zavadskas and Turskis, 2010) attains the optimality function value by the arithmetic weighted aggregation operator. In the COPRAS method (Zolfani and Bahrami, 2014), the arithmetic weighted aggregation operator is used to obtain the maximizing and minimizing indexes separately according to different types of criteria. The MACBETH method (Bana e Costa and Chagas, 2004) employs the arithmetic weighted aggregation operator to calculate the overall score. The MAUT method (Emovon et al., 2016) applies the arithmetic weighted aggregation operator to compute the final utility score. The EDAS method (Keshavarz Ghorabaee et al., 2015) exploits the arithmetic weighted aggregation operator to respectively aggregate the positive distances from average and negative distances from average. The VIKOR method (Opricovic and Tzeng, 2007) fuses the arithmetic weighted aggregation operator and weighted maximum formula to derive a “group utility” value and an “individual regret” value. Based on different criterion types, the MULTIMOORA method (Brauers and Zavadskas, 2010) synthesizes the arithmetic weighted aggregation operator, weighted maximum formula and geometric weighted aggregation operator to get three subordinate utility values. The WASPAS method (Zavadskas et al., 2012) combines the arithmetic weighted aggregation operator and geometric weighted aggregation operator to deduce the joint generalized criterion value. The CoCoSo method (Yazdani et al., 2019) performs the aggregation process according to the attitudes of additive and multiplicative aggregations in the WASPAS method.
From Table 1, we can find that many of the above methods aggregate the performance values of alternatives, but few of them aggregate the distance values between each alternative and the reference alternative by multiple aggregation operators. Hence, this study introduces two mixed aggregation operators to aggregate the distance value between each alternative and the reference alternative under each criterion.

3 The Mixed Aggregation by Comprehensive Normalization Technique (MACONT) Method

In this section, a new MCDM method called the Mixed Aggregation by COmprehensive Normalization Technique (MACONT) is presented. The main idea of this method is as follows: 1) normalize the performance values of alternatives over criteria by three normalization techniques; 2) synthesize the three normalized performance values; 3) set a virtual reference alternative; 4) combining the weights of criteria, use two mixed aggregation operators to integrate the distances between each alternative and the reference alternative; 5) based on integration of the subordinate comprehensive scores derived by two mixed aggregation operators, calculate the final comprehensive scores of alternatives, and then rank the alternatives according to the final comprehensive scores.
The specific implementation steps of this method for solving MCDM problems are as follows:
Firstly, for an MCDM problem, it is essential to establish a series of alternatives (${a_{1}},{a_{2}},\dots ,{a_{i}},\dots ,{a_{m}}$) and criteria (${c_{1}},{c_{2}},\dots ,{c_{j}},\dots ,{c_{n}}$) in advance. One or more experts are invited to provide the evaluation information for the performance of the alternatives over the criteria. According to the evaluation information, a decision matrix can be formed (if multiple experts are invited, the evaluation information provided by each expert can be integrated into a decision matrix by combining the weights of experts) as follows:
\[ \left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}{x_{11}}\hspace{1em}& {x_{12}}\hspace{1em}& \cdots \hspace{1em}& {x_{1j}}\hspace{1em}& \cdots \hspace{1em}& {x_{1n}}\\ {} {x_{21}}\hspace{1em}& {x_{22}}\hspace{1em}& \cdots \hspace{1em}& {x_{2j}}\hspace{1em}& \cdots \hspace{1em}& {x_{2n}}\\ {} \vdots \hspace{1em}& \vdots \hspace{1em}& \ddots \hspace{1em}& \vdots \hspace{1em}& \ddots \hspace{1em}& \vdots \\ {} {x_{i1}}\hspace{1em}& {x_{i2}}\hspace{1em}& \cdots \hspace{1em}& {x_{ij}}\hspace{1em}& \cdots \hspace{1em}& {x_{in}}\\ {} \vdots \hspace{1em}& \vdots \hspace{1em}& \ddots \hspace{1em}& \vdots \hspace{1em}& \ddots \hspace{1em}& \vdots \\ {} {x_{m1}}\hspace{1em}& {x_{m2}}\hspace{1em}& \cdots \hspace{1em}& {x_{mj}}\hspace{1em}& \cdots \hspace{1em}& {x_{mn}}\end{array}\right],\]
where ${x_{ij}}$ represents the performance value of the ith alternative under the jth criterion, and $i=1,2,\dots ,m$, $j=1,2,\dots ,n$.
Then, normalize the decision matrix by three normalization techniques, respectively. The first normalization technique is the linear sum-based normalization technique, as shown in Eq. (1), and the normalized value is represented by ${\hat{x}_{ij}^{1}}$. The second normalization technique is the linear ratio-based normalization technique, as shown in Eq. (2), and the normalized value is represented by ${\hat{x}_{ij}^{2}}$. The third normalization technique is the linear max-min normalization technique, as shown in Eq. (3), and the normalized value is represented by ${\hat{x}_{ij}^{3}}$. From the first normalization technique to the third, the spread among the normalized performance values of alternatives under each criterion increases.
(1)
\[\begin{aligned}{}& \left\{\begin{array}{l@{\hskip4.0pt}l}{\hat{x}_{ij}^{1}}={x_{ij}}\big/{\textstyle\textstyle\sum _{i=1}^{m}}{x_{ij}},\hspace{1em}& \text{for benefit criteria},\\ {} {\hat{x}_{ij}^{1}}=\frac{1}{{x_{ij}}}\big/{\textstyle\textstyle\sum _{i=1}^{m}}\frac{1}{{x_{ij}}},\hspace{1em}& \text{for cost criteria},\end{array}\right.\end{aligned}\]
(2)
\[\begin{aligned}{}& \left\{\begin{array}{l@{\hskip4.0pt}l}{\hat{x}_{ij}^{2}}={x_{ij}}/{\max _{i}}{x_{ij}},\hspace{1em}& \text{for benefit criteria},\\ {} {\hat{x}_{ij}^{2}}={\min _{i}}{x_{ij}}/{x_{ij}},\hspace{1em}& \text{for cost criteria},\end{array}\right.\end{aligned}\]
(3)
\[\begin{aligned}{}& \left\{\begin{array}{l@{\hskip4.0pt}l}{\hat{x}_{ij}^{3}}=({x_{ij}}-{\min _{i}}{x_{ij}})/({\max _{i}}{x_{ij}}-{\min _{i}}{x_{ij}}),\hspace{1em}& \text{for benefit criteria},\\ {} {\hat{x}_{ij}^{3}}=({x_{ij}}-{\max _{i}}{x_{ij}})/({\min _{i}}{x_{ij}}-{\max _{i}}{x_{ij}}),\hspace{1em}& \text{for cost criteria}.\end{array}\right.\end{aligned}\]
After the three kinds of normalized performance values of alternatives over criteria are obtained, to make the decision-making process flexible, two balance parameters, λ and μ, are introduced to integrate these normalized performance values, and the integration equation is as follows:
(4)
\[ {\hat{x}_{ij}}=\lambda {\hat{x}_{ij}^{1}}+\mu {\hat{x}_{ij}^{2}}+(1-\lambda -\mu ){\hat{x}_{ij}^{3}},\]
where $0\leqslant \lambda $, $\mu \leqslant 1$ with $\lambda +\mu \leqslant 1$, and the values of these two balance parameters are determined by experts. If the experts pay more attention to the proportion of an alternative's performance among all alternatives, then λ is assigned a larger value; if the experts want to highlight the best performance of alternatives, then μ is assigned a larger value; if the experts emphasize a large gap between alternatives, that is, they highlight the best performance of alternatives but do not ignore the worst performance, then λ and μ are assigned smaller values.
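For readers who wish to experiment with the comprehensive normalization, a minimal sketch of Eqs. (1)–(4) in Python follows (NumPy is assumed; the function names, the row-wise layout of the decision matrix, and the boolean benefit/cost mask are our own illustrative choices, not part of the original method description):

```python
import numpy as np

def normalize_sum(X, is_benefit):
    """Linear sum-based normalization, Eq. (1)."""
    Y = np.where(is_benefit, X, 1.0 / X)   # cost criteria use reciprocals
    return Y / Y.sum(axis=0)

def normalize_ratio(X, is_benefit):
    """Linear ratio-based normalization, Eq. (2)."""
    return np.where(is_benefit, X / X.max(axis=0), X.min(axis=0) / X)

def normalize_maxmin(X, is_benefit):
    """Linear max-min normalization, Eq. (3)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return np.where(is_benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))

def normalize_comprehensive(X, is_benefit, lam=0.4, mu=0.3):
    """Comprehensive normalization, Eq. (4): weighted mix of Eqs. (1)-(3)."""
    x1 = normalize_sum(X, is_benefit)
    x2 = normalize_ratio(X, is_benefit)
    x3 = normalize_maxmin(X, is_benefit)
    return lam * x1 + mu * x2 + (1.0 - lam - mu) * x3
```

Here `X` is the $m\times n$ decision matrix with alternatives in rows, and `is_benefit` is a length-$n$ boolean array marking benefit criteria.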
To illustrate the function of the comprehensive normalization technique in reducing deviations, we give an example here.
Example 1.
Suppose that there are three alternatives (${a_{1}}$, ${a_{2}}$, ${a_{3}}$) and three criteria (${c_{1}}$, ${c_{2}}$, ${c_{3}}$). ${c_{1}}$ and ${c_{2}}$ are benefit criteria and ${c_{3}}$ is a cost criterion. The decision matrix is given as:
\[ \left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}1\hspace{1em}& 3.5\hspace{1em}& 8\\ {} 3\hspace{1em}& 4\hspace{1em}& 37\\ {} 5\hspace{1em}& 2.5\hspace{1em}& 46\end{array}\right].\]
By Eqs. (1)–(3), we can get three normalized matrices as:
\[ \left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.111\hspace{1em}& 0.35\hspace{1em}& 0.719\\ {} 0.333\hspace{1em}& 0.4\hspace{1em}& 0.155\\ {} 0.556\hspace{1em}& 0.25\hspace{1em}& 0.125\end{array}\right],\hspace{2em}\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.2\hspace{1em}& 0.875\hspace{1em}& 1\\ {} 0.6\hspace{1em}& 1\hspace{1em}& 0.216\\ {} 1\hspace{1em}& 0.625\hspace{1em}& 0.174\end{array}\right],\hspace{2em}\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}0\hspace{1em}& 0.667\hspace{1em}& 1\\ {} 0.5\hspace{1em}& 1\hspace{1em}& 0.237\\ {} 1\hspace{1em}& 0\hspace{1em}& 0\end{array}\right].\]
If the weights of all criteria are the same, then, based on the arithmetic weighted aggregation operator, the ranking results of the alternatives derived from the above three normalized matrices are ${a_{1}}>{a_{3}}>{a_{2}}$, ${a_{1}}>{a_{2}}>{a_{3}}$ and ${a_{2}}>{a_{1}}>{a_{3}}$, respectively. The three rankings are different, which implies that a single normalization technique can easily deviate from the original data and lead to unreliable results. Comparatively, letting $\lambda =\mu =1/3$ in Eq. (4), we obtain the comprehensive normalized matrix:
\[ \left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.104\hspace{1em}& 0.631\hspace{1em}& 0.906\\ {} 0.478\hspace{1em}& 0.800\hspace{1em}& 0.203\\ {} 0.852\hspace{1em}& 0.292\hspace{1em}& 0.100\end{array}\right].\]
The ranking result is then ${a_{1}}>{a_{2}}>{a_{3}}$, which reduces the deviation from the original data and synthesizes the ranking results derived from the three normalized matrices, making the final result more reliable.
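As a self-contained check, the following snippet (written under the same illustrative conventions as the sketch above, with equal criterion weights) recomputes the three normalized matrices and the comprehensive matrix of Example 1 and prints the four rankings:

```python
import numpy as np

X = np.array([[1.0, 3.5,  8.0],
              [3.0, 4.0, 37.0],
              [5.0, 2.5, 46.0]])
is_benefit = np.array([True, True, False])   # c3 is a cost criterion

# Eqs. (1)-(3), written inline for this 3x3 example
Y = np.where(is_benefit, X, 1.0 / X)
x1 = Y / Y.sum(axis=0)                                            # linear sum-based
x2 = np.where(is_benefit, X / X.max(axis=0), X.min(axis=0) / X)   # linear ratio-based
lo, hi = X.min(axis=0), X.max(axis=0)
x3 = np.where(is_benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))  # linear max-min

lam = mu = 1.0 / 3.0
x_hat = lam * x1 + mu * x2 + (1.0 - lam - mu) * x3                # Eq. (4)

w = np.full(3, 1.0 / 3.0)                                         # equal weights
for name, M in [("sum", x1), ("ratio", x2), ("max-min", x3), ("mixed", x_hat)]:
    scores = M @ w
    print(name, np.argsort(-scores) + 1)   # alternative indices, best to worst
```

The printout reproduces the three single-technique rankings ($a_1>a_3>a_2$, $a_1>a_2>a_3$, $a_2>a_1>a_3$) and the comprehensive ranking $a_1>a_2>a_3$ described above.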
After obtaining a normalized decision matrix, we calculate the average performance values ${\bar{x}_{j}}$ ($j=1,2,\dots ,n$) of alternatives on each criterion to form a virtual reference alternative. Then, based on the distance between each alternative and the reference alternative, two subordinate comprehensive scores of each alternative, ${S_{1}}({a_{i}})$ and ${S_{2}}({a_{i}})$, are derived by the following two mixed aggregation operators:
(5)
\[\begin{aligned}{}& {S_{1}}({a_{i}})=\delta \frac{{\rho _{i}}}{\sqrt{{\textstyle\textstyle\sum _{i=1}^{m}}{({\rho _{i}})^{2}}}}+(1-\delta )\frac{{Q_{i}}}{\sqrt{{\textstyle\textstyle\sum _{i=1}^{m}}{({Q_{i}})^{2}}}},\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(6)
\[\begin{aligned}{}& {S_{2}}({a_{i}})=\vartheta \underset{j}{\max }\big({w_{j}}({\hat{x}_{ij}}-{\bar{x}_{j}})\big)+(1-\vartheta )\underset{j}{\min }\big({w_{j}}({\hat{x}_{ij}}-{\bar{x}_{j}})\big),\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
where ${\rho _{i}}={\textstyle\sum _{j=1}^{n}}{w_{j}}({\hat{x}_{ij}}-{\bar{x}_{j}})$ and ${Q_{i}}={\textstyle\prod _{\gamma =1}^{n}}{({\bar{x}_{j}}-{\hat{x}_{ij}})^{{w_{j}}}}/{\textstyle\prod _{\eta =1}^{n}}{({\hat{x}_{ij}}-{\bar{x}_{j}})^{{w_{j}}}}$, for $i=1,2,\dots ,m$. ${w_{j}}$ ($j=1,2,\dots ,n$) are the weights of criteria determined by experts, with ${\textstyle\sum _{j=1}^{n}}{w_{j}}=1$. The index γ runs over the criteria that satisfy ${\hat{x}_{ij}}<{\bar{x}_{j}}$, and the index η runs over the criteria that satisfy ${\hat{x}_{ij}}\geqslant {\bar{x}_{j}}$. In addition, δ and ϑ ($0\leqslant \delta $, $\vartheta \leqslant 1$) are preference parameters. If the experts pay more attention to the comprehensive performance of alternatives, a high value of δ is given; if they pay more attention to the individual performance of alternatives, a small value of δ is given. If the experts pay more attention to the best performance of alternatives, a high value of ϑ is given; if they pay more attention to the worst performance of alternatives, a small value of ϑ is given.
In Eq. (5), ${\rho _{i}}$ and ${Q_{i}}$, respectively, employ the idea of arithmetic weighted aggregation operator and geometric weighted aggregation operator to aggregate the distances between each alternative and the virtual reference alternative under all criteria from the perspective of compensation effect among criteria. Moreover, inspired by the MULTIMOORA method, Eq. (6) is a combination of the best performance and the worst performance of alternatives under all criteria, which considers the non-compensation effect among criteria.
Afterwards, the final comprehensive score $S({a_{i}})$ of each alternative is computed by Eq. (7), and the final ranking of alternatives is obtained by sorting the comprehensive scores in descending order. The alternative with the highest final comprehensive score is determined as the optimal alternative.
(7)
\[ S({a_{i}})=\frac{1}{2}\bigg({S_{1}}({a_{i}})+\frac{{S_{2}}({a_{i}})}{\sqrt{{\textstyle\textstyle\sum _{i=1}^{m}}{({S_{2}}({a_{i}}))^{2}}}}\bigg),\hspace{1em}i=1,2,\dots ,m.\]
Note that, for the accuracy and reliability of the results, a normalization technique is needed to bring the values of ${S_{1}}({a_{i}})$ and ${S_{2}}({a_{i}})$ to the same scale. Because the values of ${S_{1}}({a_{i}})$ and ${S_{2}}({a_{i}})$ may be negative, the vector normalization technique is adopted in Eq. (7).
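The two mixed aggregation operators and the final comprehensive score can be sketched as follows (a minimal sketch, again assuming NumPy; `X_hat` denotes the comprehensive normalized matrix, `w` the criterion weights, and `theta` stands in for ϑ, all naming choices being ours). Applied to the comprehensive matrix and criterion weights of Section 4 with δ = ϑ = 0.5, it yields values close to those reported in Table 3 (small differences arise from the rounding of the published matrix entries):

```python
import numpy as np

def macont_scores(X_hat, w, delta=0.5, theta=0.5):
    """Subordinate scores S1, S2 (Eqs. (5)-(6)) and final score S (Eq. (7))."""
    x_bar = X_hat.mean(axis=0)           # virtual reference alternative
    d = w * (X_hat - x_bar)              # weighted distances to the reference

    rho = d.sum(axis=1)                  # arithmetic part of Eq. (5)
    below = X_hat < x_bar                # criteria where performance is below average
    # geometric part of Eq. (5); assumes no entry equals the column mean exactly
    Q = np.array([
        np.prod((x_bar - row)[mask] ** w[mask])
        / np.prod((row - x_bar)[~mask] ** w[~mask])
        for row, mask in zip(X_hat, below)
    ])

    S1 = (delta * rho / np.sqrt((rho ** 2).sum())
          + (1 - delta) * Q / np.sqrt((Q ** 2).sum()))             # Eq. (5)
    S2 = theta * d.max(axis=1) + (1 - theta) * d.min(axis=1)       # Eq. (6)
    S = 0.5 * (S1 + S2 / np.sqrt((S2 ** 2).sum()))                 # Eq. (7)
    return S1, S2, S
```

Ranking the alternatives then amounts to sorting `S` in descending order.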
In summary, the procedure of the proposed MACONT method can be summarized as below:
Step 1. Give the evaluation information of alternatives and the criteria weights, and form a decision matrix based on the evaluation information.
Step 2. Normalize the decision matrix by Eqs. (1)–(3), and use Eq. (4) to integrate the three normalized decision matrices.
Step 3. Set a virtual reference alternative by the average performance values of alternatives on each criterion, and calculate the subordinate comprehensive scores of alternatives by Eqs. (5) and (6).
Step 4. Obtain the final comprehensive scores of alternatives by Eq. (7), and determine the ranking of alternatives and the optimal alternative.

4 An Illustration Example: Sustainable Third-Party Reverse Logistics Provider Selection

Recently, the selection of sustainable third-party reverse logistics providers has become a hot research topic (Govindan et al., 2018; Bai and Sarkis, 2019; Zarbakhshnia et al., 2018, 2019). Company R is a multi-national professional paint manufacturing enterprise. To reduce the cost of recycling logistics and enhance sustainable development, company R needs to choose a suitable provider. First of all, company R selected 8 providers $({P_{1}},{P_{2}},{P_{3}},{P_{4}},{P_{5}},{P_{6}},{P_{7}},{P_{8}})$ from 26 related suppliers as candidates, and invited 6 experts with rich professional knowledge and experience to participate in the decision-making process. A series of evaluation criteria are established from the three dimensions of sustainability, including:
  • Economic dimension, such as quality, lead time, cost, delivery and services, relationship, and innovativeness;
  • Environment dimension, such as pollution controls, resource consumption, remanufacture and reuse, green technology capability, and environmental management system;
  • Social dimension, such as health and safety, employment stability, customer satisfaction, reputation, respect for the policy, and contractual stakeholders influence.
The details of the evaluation criteria are shown in Table 2. The weights of these criteria are determined by the experts as (0.048, 0.067, 0.085, 0.026, 0.017, 0.034, 0.098, 0.087, 0.065, 0.113, 0.046, 0.079, 0.047, 0.025, 0.072, 0.080, 0.011).
Table 2
The evaluation criteria of sustainable third-party reverse logistics providers.
Dimensions Criteria Type References
Economic ${c_{1}}$: Quality Benefit Govindan et al. (2018), Bai and Sarkis (2019), Zarbakhshnia et al. (2018, 2019)
${c_{2}}$: Lead time Cost Bai and Sarkis (2019), Zarbakhshnia et al. (2018, 2019)
${c_{3}}$: Cost Cost Govindan et al. (2018), Bai and Sarkis (2019), Zarbakhshnia et al. (2018, 2019)
${c_{4}}$: Delivery and services Benefit Zarbakhshnia et al. (2018, 2019)
${c_{5}}$: Relationship Benefit Govindan et al. (2018)
${c_{6}}$: Innovativeness Benefit Bai and Sarkis (2019)
Environment ${c_{7}}$: Pollution controls Benefit Bai and Sarkis (2019)
${c_{8}}$: Resource consumption Cost Bai and Sarkis (2019)
${c_{9}}$: Remanufacture and reuse Benefit Zarbakhshnia et al. (2018, 2019)
${c_{10}}$: Green technology capability Benefit Zarbakhshnia et al. (2018)
${c_{11}}$: Environmental management system Benefit Govindan et al. (2018)
Social ${c_{12}}$: Health and safety Benefit Bai and Sarkis (2019), Zarbakhshnia et al. (2018, 2019)
${c_{13}}$: Employment stability Benefit Zarbakhshnia et al. (2018)
${c_{14}}$: Customer satisfaction Benefit Govindan et al. (2018), Zarbakhshnia et al. (2018, 2019)
${c_{15}}$: Reputation Benefit Zarbakhshnia et al. (2019)
${c_{16}}$: Respect for the policy Benefit Zarbakhshnia et al. (2019)
${c_{17}}$: Contractual stakeholders influence Benefit Bai and Sarkis (2019)
Below we use the proposed MACONT method to solve this problem.
Step 1. The experts evaluated the providers’ performance under each criterion and established a decision matrix (provided as image infor417_g001.jpg in the original article).
Step 2. We utilize Eqs. (1)–(3) to calculate three normalized decision matrices:
\[\begin{aligned}{}& \displaystyle \left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.113& 0.213& 0.140& 0.087& 0.090& 0.107& 0.185& 0.121& 0.146& 0.146& 0.233& 0.063& 0.087& 0.135& 0.119& 0.094& 0.091\\ {} 0.175& 0.124& 0.082& 0.171& 0.204& 0.050& 0.043& 0.093& 0.117& 0.073& 0.067& 0.148& 0.135& 0.154& 0.143& 0.109& 0.182\\ {} 0.139& 0.157& 0.111& 0.074& 0.129& 0.174& 0.120& 0.104& 0.130& 0.122& 0.133& 0.150& 0.167& 0.138& 0.095& 0.122& 0.115\\ {} 0.098& 0.115& 0.163& 0.095& 0.111& 0.215& 0.098& 0.129& 0.051& 0.171& 0.167& 0.192& 0.094& 0.116& 0.071& 0.140& 0.099\\ {} 0.077& 0.062& 0.170& 0.115& 0.072& 0.066& 0.141& 0.209& 0.101& 0.098& 0.100& 0.106& 0.144& 0.097& 0.095& 0.149& 0.086\\ {} 0.165& 0.188& 0.087& 0.189& 0.173& 0.041& 0.087& 0.084& 0.076& 0.049& 0.133& 0.089& 0.119& 0.159& 0.167& 0.114& 0.201\\ {} 0.144& 0.069& 0.100& 0.161& 0.140& 0.190& 0.152& 0.100& 0.196& 0.195& 0.067& 0.117& 0.146& 0.142& 0.119& 0.120& 0.123\\ {} 0.088& 0.073& 0.149& 0.107& 0.080& 0.157& 0.174& 0.160& 0.184& 0.146& 0.100& 0.134& 0.108& 0.059& 0.190& 0.152& 0.104\end{array}\right],\\ {} & \displaystyle \left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.647& 1.000& 0.820& 0.459& 0.443& 0.500& 1.000& 0.579& 0.742& 0.750& 1.000& 0.329& 0.521& 0.848& 0.625& 0.620& 0.453\\ {} 1.000& 0.579& 0.481& 0.905& 1.000& 0.231& 0.235& 0.446& 0.597& 0.375& 0.286& 0.768& 0.808& 0.967& 0.750& 0.717& 0.907\\ {} 0.794& 0.733& 0.653& 0.392& 0.633& 0.808& 0.647& 0.501& 0.661& 0.625& 0.571& 0.780& 1.000& 0.870& 0.500& 0.804& 0.573\\ {} 0.559& 0.537& 0.956& 0.500& 0.544& 1.000& 0.529& 0.618& 0.258& 0.875& 0.714& 1.000& 0.562& 0.728& 0.375& 0.924& 0.493\\ {} 0.441& 0.289& 1.000& 0.608& 0.354& 0.308& 0.765& 1.000& 0.516& 0.500& 0.429& 0.549& 0.863& 0.609& 0.500& 0.978& 0.427\\ {} 0.941& 0.880& 0.508& 1.000& 0.848& 0.192& 0.471& 0.405& 0.387& 0.250& 0.571& 0.463& 0.712& 1.000& 0.875& 0.750& 1.000\\ {} 0.824& 0.324& 0.586& 0.851& 0.684& 0.885& 0.824& 0.482& 1.000& 1.000& 0.286& 0.610& 0.877& 0.891& 0.625& 0.793& 0.613\\ {} 0.500& 0.344& 0.873& 0.568& 0.392& 0.731& 0.941& 0.765& 0.935& 0.750& 0.429& 0.695& 0.644& 0.370& 1.000& 1.000& 0.520\end{array}\right],\\ {} & \displaystyle \left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.368& 1.000& 0.797& 0.111& 0.137& 0.381& 1.000& 0.505& 0.652& 0.667& 1.000& 0.000& 0.000& 0.759& 0.400& 0.000& 0.047\\ {} 1.000& 0.704& 0.000& 0.844& 1.000& 0.048& 0.000& 0.156& 0.457& 0.167& 0.000& 0.655& 0.600& 0.948& 0.600& 0.257& 0.837\\ {} 0.632& 0.852& 0.507& 0.000& 0.431& 0.762& 0.538& 0.322& 0.543& 0.500& 0.400& 0.673& 1.000& 0.793& 0.200& 0.486& 0.256\\ {} 0.211& 0.648& 0.958& 0.178& 0.294& 1.000& 0.385& 0.579& 0.000& 0.833& 0.600& 1.000& 0.086& 0.569& 0.000& 0.800& 0.116\\ {} 0.000& 0.000& 1.000& 0.356& 0.000& 0.143& 0.692& 1.000& 0.348& 0.333& 0.200& 0.327& 0.714& 0.379& 0.200& 0.943& 0.000\\ {} 0.895& 0.944& 0.105& 1.000& 0.765& 0.000& 0.308& 0.000& 0.174& 0.000& 0.400& 
0.200& 0.400& 1.000& 0.800& 0.343& 1.000\\ {} 0.684& 0.148& 0.345& 0.756& 0.510& 0.857& 0.769& 0.268& 1.000& 1.000& 0.000& 0.418& 0.743& 0.828& 0.400& 0.457& 0.326\\ {} 0.105& 0.222& 0.866& 0.289& 0.059& 0.667& 0.923& 0.791& 0.913& 0.667& 0.200& 0.545& 0.257& 0.000& 1.000& 1.000& 0.163\end{array}\right].\end{aligned}\]
Integrate the above three normalized decision matrices by Eq. (4) to obtain a comprehensive decision matrix (here the two balance parameters are set as $\lambda =0.4$ and $\mu =0.3$):
\[ \left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.350& 0.685& 0.541& 0.206& 0.210& 0.307& 0.674& 0.374& 0.476& 0.484& 0.693& 0.124& 0.191& 0.536& 0.355& 0.223& 0.186\\ {} 0.670& 0.434& 0.177& 0.593& 0.682& 0.103& 0.088& 0.218& 0.363& 0.192& 0.112& 0.486& 0.476& 0.636& 0.462& 0.336& 0.596\\ {} 0.483& 0.538& 0.392& 0.147& 0.371& 0.540& 0.403& 0.288& 0.413& 0.386& 0.345& 0.496& 0.667& 0.554& 0.248& 0.436& 0.295\\ {} 0.270& 0.401& 0.639& 0.241& 0.296& 0.686& 0.313& 0.411& 0.098& 0.581& 0.461& 0.677& 0.232& 0.436& 0.141& 0.573& 0.222\\ {} 0.163& 0.112& 0.668& 0.335& 0.135& 0.162& 0.494& 0.683& 0.300& 0.289& 0.229& 0.305& 0.531& 0.335& 0.248& 0.636& 0.162\\ {} 0.617& 0.622& 0.219& 0.676& 0.553& 0.074& 0.268& 0.155& 0.199& 0.095& 0.345& 0.235& 0.381& 0.664& 0.569& 0.373& 0.680\\ {} 0.510& 0.169& 0.319& 0.547& 0.414& 0.599& 0.539& 0.265& 0.678& 0.678& 0.112& 0.355& 0.544& 0.572& 0.355& 0.423& 0.331\\ {} 0.217& 0.199& 0.581& 0.300& 0.167& 0.482& 0.629& 0.530& 0.628& 0.484& 0.229& 0.426& 0.313& 0.134& 0.676& 0.661& 0.247\end{array}\right].\]
Step 3. Compute the average performance values of the providers on each criterion to form a virtual reference provider ${P_{0}}$, which can be identified as (0.410, 0.395, 0.442, 0.381, 0.354, 0.369, 0.426, 0.366, 0.394, 0.398, 0.316, 0.388, 0.417, 0.483, 0.382, 0.458, 0.340). Calculate the subordinate comprehensive values of the providers by Eqs. (5) and (6). Without loss of generality, we let the preference parameters $\delta =0.5$ and $\vartheta =0.5$. The results are displayed in Table 3.
Step 4. Calculate the final comprehensive values of providers by Eq. (7), and rank the providers according to the descending order of the final comprehensive values. The ranking results of the providers are listed in Table 3. We can determine that the optimal provider is ${P_{8}}$.
Table 3
The ranking results of the providers derived by the proposed method.
Providers ${\rho _{i}}$ ${Q_{i}}$ ${S_{1}}({P_{i}})$ ${S_{2}}({P_{i}})$ $S({P_{i}})$ Rank
${P_{1}}$ 0.0207 1.6741 0.3074 0.0017 0.2029 4
${P_{2}}$ −0.0729 0.8401 −0.1595 −0.0103 −0.3740 8
${P_{3}}$ 0.0114 0.4684 0.1071 0.0011 0.0836 6
${P_{4}}$ 0.0210 1.7719 0.3222 0.0018 0.2116 3
${P_{5}}$ −0.0141 0.6037 0.0297 0.0043 0.1382 5
${P_{6}}$ −0.0711 0.5186 −0.1968 −0.0096 −0.3708 7
${P_{7}}$ 0.0362 0.5762 0.2154 0.0082 0.3422 2
${P_{8}}$ 0.0688 2.3379 0.5797 0.0040 0.4047 1

5 Sensitivity Analyses and Comparative Analyses

In this section, based on the data in Section 4, sensitivity analyses of the parameters set in the proposed method are carried out to explore the impact of the changes of parameters and criterion weights on the final ranking results of the alternatives. Moreover, other MCDM methods are applied to derive the ranking results of the alternatives, and the advantages of the proposed method are highlighted by comparing these results with that of the proposed method.

5.1 Sensitivity Analyses

(1) Sensitivity analyses on the balance parameters λ and μ.
Table 4
The ranking results derived by different values of the parameters λ and μ.
λ μ $S({P_{i}})$, $i=1,2,3,4,5,6,7,8$ Ranks
Value 0 0 $(0.1241,-0.3347,0.1257,0.2271,0.1175,-0.2730,0.3393,0.4267)$ (5, 7, 4, 3, 6, 8, 2, 1)
0 0.5 $(0.2000,-0.3591,0.0942,0.2124,0.1422,-0.3632,0.3468,0.3988)$ (4, 7, 6, 3, 5, 8, 2, 1)
0 1 $(0.1880,-0.4004,0.0632,0.1578,0.1810,-0.3333,0.3630,0.4150)$ (3, 8, 6, 5, 4, 7, 2, 1)
0.2 0.2 $(0.1760,-0.3537,0.1064,0.2179,0.1284,-0.3728,0.3419,0.4188)$ (4, 7, 6, 3, 5, 8, 2, 1)
0.2 0.6 $(0.1584,-0.3608,0.0798,0.1608,0.1638,-0.3497,0.3579,0.4231)$ (5, 8, 6, 4, 3, 7, 2, 1)
0.5 0 $(0.1552,-0.3583,0.1048,0.2210,0.1162,-0.3839,0.3354,0.4356)$ (4, 7, 6, 3, 5, 8, 2, 1)
0.5 0.5 $(0.1805,-0.4138,0.0526,0.1475,0.1702,-0.3453,0.3544,0.4283)$ (3, 8, 6, 5, 4, 7, 2, 1)
0.6 0.2 $(0.2032,-0.3861,0.0739,0.2191,0.1358,-0.3749,0.3379,0.4008)$ (4, 8, 6, 3, 5, 7, 2, 1)
1 0 $(0.1687,-0.4387,0.0327,0.1214,0.1552,-0.3570,0.3309,0.4253)$ (3, 8, 6, 5, 4, 7, 2, 1)
In the process of integrating the three normalized matrices, the two balance parameters λ and μ are introduced. It can be seen from Table 4 that the rankings of providers derived with different parameter values differ, which shows that experts need to determine the parameter values according to actual conditions to ensure the accuracy of the results. Moreover, if only one of the three normalization techniques is used in the proposed method, i.e. $\lambda =1$ and $\mu =0$, or $\lambda =0$ and $\mu =1$, or $\lambda =0$ and $\mu =0$, we can find from Table 4 that the ranking result deduced by each of the first two normalization techniques (Eqs. (1) and (2)) is (3, 8, 6, 5, 4, 7, 2, 1), while the ranking result deduced by the third normalization technique (Eq. (3)) is (5, 7, 4, 3, 6, 8, 2, 1). Compared with these, the ranking result (4, 8, 6, 3, 5, 7, 2, 1) obtained in Section 4 with the comprehensive normalization technique ($\lambda =0.4$, $\mu =0.3$) shows that the comprehensive normalization technique effectively integrates the three normalization techniques and yields a compromise ranking result.
(2) Sensitivity analysis of the preference parameter δ.
In the first mixed aggregation operator of the proposed method (i.e. Eq. (5)), the preference parameter δ is set to reasonably aggregate the comprehensive performance and individual performance of alternatives. From Table 5, it can be found that the change of this preference parameter value has little effect on the final ranking result. With the increase of the parameter value, the rank of ${P_{2}}$ rises, while the rank of ${P_{6}}$ falls, which shows that the comprehensive performance of ${P_{2}}$ is better than that of ${P_{6}}$, and the individual performance of ${P_{6}}$ is better than that of ${P_{2}}$.
(3) Sensitivity analysis of the preference parameter ϑ.
Table 5
The ranking results derived by different values of the preference parameter δ.
δ $S({P_{i}}),i=1,2,3,4,5,6,7,8$ Ranks
Value 0 $(0.2787,-0.1790,0.0943,0.2935,0.2061,-0.2013,0.3135,0.4354)$ (4, 7, 6, 3, 5, 8, 2, 1)
0.1 $(0.2635,-0.2180,0.0922,0.2771,0.1925,-0.2352,0.3192,0.4292)$ (4, 7, 6, 3, 5, 8, 2, 1)
0.2 $(0.2484,-0.2570,0.0900,0.2607,0.1789,-0.2691,0.3250,0.4231)$ (4, 7, 6, 3, 5, 8, 2, 1)
0.3 $(0.2332,-0.2960,0.0879,0.2443,0.1653,-0.3030,0.3307,0.4170)$ (4, 7, 6, 3, 5, 8, 2, 1)
0.4 $(0.2180,-0.3350,0.0857,0.2280,0.1517,-0.3369,0.3364,0.4108)$ (4, 7, 6, 3, 5, 8, 2, 1)
0.5 $(0.2029,-0.3740,0.0836,0.2116,0.1382,-0.3708,0.3422,0.4047)$ (4, 8, 6, 3, 5, 7, 2, 1)
0.6 $(0.1877,-0.4129,0.0815,0.1952,0.1246,-0.4047,0.3479,0.3986)$ (4, 8, 6, 3, 5, 7, 2, 1)
0.7 $(0.1725,-0.4519,0.0793,0.1789,0.1110,-0.4386,0.3537,0.3924)$ (4, 8, 6, 3, 5, 7, 2, 1)
0.8 $(0.1574,-0.4909,0.0772,0.1625,0.0974,-0.4725,0.3594,0.3863)$ (4, 8, 6, 3, 5, 7, 2, 1)
0.9 $(0.1422,-0.5299,0.0751,0.1461,0.0838,-0.5064,0.3652,0.3801)$ (4, 8, 6, 3, 5, 7, 2, 1)
1 $(0.1271,-0.5689,0.0729,0.1298,0.0702,-0.5403,0.3709,0.3740)$ (4, 8, 6, 3, 5, 7, 2, 1)
In the second mixed aggregation operator of the proposed method, the preference parameter ϑ is set to reasonably aggregate the best performance and the worst performance of alternatives. From Table 6, we can find that the change of this preference parameter value has a significant influence on the final ranking result. With the increase of the parameter value, the ranks of ${P_{5}}$, ${P_{6}}$, ${P_{7}}$ rise, while the ranks of ${P_{2}}$, ${P_{3}}$, ${P_{4}}$ fall, which shows that the best performance of ${P_{5}}$, ${P_{6}}$, ${P_{7}}$ is better than that of ${P_{2}}$, ${P_{3}}$, ${P_{4}}$, and the worst performance of ${P_{2}}$, ${P_{3}}$, ${P_{4}}$ is better than that of ${P_{5}}$, ${P_{6}}$, ${P_{7}}$.
Table 6
The ranking results derived by different values of the preference parameter ϑ.
ϑ $S({P_{i}})$, $i=1,2,3,4,5,6,7,8$ Ranks
Value 0 $(-0.0123,-0.3437,-0.0232,0.0074,-0.1365,-0.3720,-0.0129,0.1852)$ (3, 7, 5, 2, 6, 8, 4, 1)
0.1 $(-0.0052,-0.3578,-0.0194,0.0144,-0.1247,-0.3844,0.0058,0.1954)$ (4, 7, 5, 2, 6, 8, 3, 1)
0.2 $(0.0072,-0.3774,-0.0129,0.0264,-0.1051,-0.4013,0.0358,0.2121)$ (4, 7, 5, 3, 6, 8, 2, 1)
0.3 $(0.1156,-0.4247,0.0416,0.1297,0.0401,-0.4312,0.2319,0.3299)$ (4, 7, 5, 3, 6, 8, 2, 1)
0.4 $(0.0887,-0.4266,0.0283,0.1042,0.0069,-0.4367,0.1905,0.3038)$ (4, 7, 5, 3, 6, 8, 2, 1)
0.5 $(0.2029,-0.3740,0.0836,0.2116,0.1382,-0.3708,0.3422,0.4047)$ (4, 8, 6, 3, 5, 7, 2, 1)
0.6 $(0.3020,-0.2167,0.1294,0.3034,0.2287,-0.2078,0.4145,0.4673)$ (4, 8, 6, 3, 5, 7, 2, 1)
0.7 $(0.3367,-0.1002,0.1442,0.3346,0.2472,-0.0923,0.4067,0.4752)$ (3, 8, 6, 4, 5, 7, 2, 1)
0.8 $(0.3462,-0.0374,0.1477,0.3428,0.2459,-0.0314,0.3883,0.4705)$ (3, 8, 6, 4, 5, 7, 2, 1)
0.9 $(0.3489,-0.0016,0.1483,0.3448,0.2417,0.0030,0.3734,0.4651)$ (3, 8, 6, 4, 5, 7, 2, 1)
1 $(0.3495,0.0209,0.1482,0.3451,0.2377,0.0243,0.3624,0.4606)$ (3, 8, 6, 4, 5, 7, 2, 1)

5.2 Comparative Analyses

In this subsection, we compare the proposed method with various MCDM methods, including the TOPSIS, VIKOR, WASPAS, ARAS, and MULTIMOORA. The reason for comparison with the TOPSIS method is that both methods use the idea of reference points. The reason for comparison with the VIKOR method is that both methods use the linear max-min normalization. The reason for comparison with the WASPAS method is that both methods use the linear ratio-based normalization technique and the combination of arithmetic weighted aggregation operator and geometric weighted aggregation operator. The reason for comparison with the ARAS method is that both methods use the sum-based normalization technique and arithmetic weighted aggregation operator. The reason for comparison with the MULTIMOORA method is that both methods take into account the compensation and non-compensation effects among criteria.

5.2.1 Comparative Analysis Between the Proposed Method and the TOPSIS Method

The TOPSIS method, introduced by Hwang and Yoon (1981), identifies the optimal alternative as the one with the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution (Opricovic and Tzeng, 2004). The procedure of the TOPSIS method is as follows. First, normalize the decision matrix by the vector normalization technique (Eq. (8)). Second, determine the two ideal solutions ${P^{+}}$ and ${P^{-}}$ by Eqs. (9) and (10), respectively, and calculate the separation degrees of the alternatives from the two ideal solutions, ${D_{i}^{+}}$ and ${D_{i}^{-}}$, by Eqs. (11) and (12), respectively. Finally, calculate the relative closeness degrees of the alternatives by Eq. (13) to obtain the ranking of alternatives. The results obtained by the TOPSIS method based on the data in Section 4 are shown in Table 7.
(8)
\[\begin{aligned}{}& {\tilde{x}_{ij}}=\frac{{x_{ij}}}{\sqrt{{\textstyle\textstyle\sum _{i=1}^{m}}{({x_{ij}})^{2}}}},\end{aligned}\]
(9)
\[\begin{aligned}{}& {P^{+}}=\Big\{\Big(\underset{i}{\max }({w_{j}}{\tilde{x}_{ij}})\big|j\in g\Big),\Big(\underset{i}{\min }({w_{j}}{\tilde{x}_{ij}})\big|j\in {g^{\prime }}\Big)\hspace{0.1667em}\Big|\hspace{0.1667em}i=1,2,\dots ,m\Big\}\\ {} & \phantom{{P^{+}}}=\big\{{\tilde{x}_{1}^{+}},{\tilde{x}_{2}^{+}},\dots ,{\tilde{x}_{n}^{+}}\big\},\end{aligned}\]
(10)
\[\begin{aligned}{}& {P^{-}}=\Big\{\Big(\underset{i}{\min }({w_{j}}{\tilde{x}_{ij}})\big|j\in g\Big),\Big(\underset{i}{\max }({w_{j}}{\tilde{x}_{ij}})\big|j\in {g^{\prime }}\Big)\hspace{0.1667em}\Big|\hspace{0.1667em}i=1,2,\dots ,m\Big\}\\ {} & \phantom{{P^{-}}}=\big\{{\tilde{x}_{1}^{-}},{\tilde{x}_{2}^{-}},\dots ,{\tilde{x}_{n}^{-}}\big\},\end{aligned}\]
(11)
\[\begin{aligned}{}& {D_{i}^{+}}=\sqrt{{\sum \limits_{j=1}^{n}}{\big({w_{j}}{\tilde{x}_{ij}}-{\tilde{x}_{j}^{+}}\big)^{2}}},\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(12)
\[\begin{aligned}{}& {D_{i}^{-}}=\sqrt{{\sum \limits_{j=1}^{n}}{\big({w_{j}}{\tilde{x}_{ij}}-{\tilde{x}_{j}^{-}}\big)^{2}}},\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(13)
\[\begin{aligned}{}& R{C_{i}}={D_{i}^{-}}\big/\big({D_{i}^{-}}+{D_{i}^{+}}\big),\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
where ${\tilde{x}_{ij}}$ represents the normalized performance value of the ith alternative under the jth criterion. In Eqs. (9) and (10), g is associated with the benefit criteria while ${g^{\prime }}$ is associated with the cost criteria.
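A compact sketch of this TOPSIS procedure (Eqs. (8)–(13)) is given below; NumPy and the argument names are illustrative assumptions:

```python
import numpy as np

def topsis(X, w, is_benefit):
    """TOPSIS with vector normalization, Eqs. (8)-(13); larger RC_i is better."""
    X_tilde = X / np.sqrt((X ** 2).sum(axis=0))                  # Eq. (8)
    V = w * X_tilde                                              # weighted normalized matrix
    ideal = np.where(is_benefit, V.max(axis=0), V.min(axis=0))   # Eq. (9), P+
    anti = np.where(is_benefit, V.min(axis=0), V.max(axis=0))    # Eq. (10), P-
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))             # Eq. (11)
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))             # Eq. (12)
    return d_minus / (d_minus + d_plus)                          # Eq. (13)
```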
Table 7
The results obtained by the TOPSIS method.
Providers ${D_{i}^{+}}$ ${D_{i}^{-}}$ $R{C_{i}}$ Ranks
${P_{1}}$ 0.0439 0.0645 0.5951 2
${P_{2}}$ 0.0686 0.0374 0.3526 8
${P_{3}}$ 0.0451 0.0503 0.5274 5
${P_{4}}$ 0.0476 0.0608 0.5609 4
${P_{5}}$ 0.0574 0.0483 0.4571 6
${P_{6}}$ 0.0704 0.0388 0.3550 7
${P_{7}}$ 0.0449 0.0641 0.5877 3
${P_{8}}$ 0.0376 0.0659 0.6368 1
Comparing the ranking result of the proposed MACONT method and that of the TOPSIS method, except for ${P_{2}}$, ${P_{6}}$ and ${P_{8}}$, the ranks of other providers are different. Both methods set up a reference alternative to measure the distance between each alternative and the reference alternative. The main reason for the different results may be that the two methods adopt different normalization techniques, and the TOPSIS method needs to set up the best and worst reference alternatives to measure the distances between alternatives and the two reference alternatives, while the MACONT method only needs to set up one reference alternative to measure the good and bad performance of alternatives.

5.2.2 Comparative Analysis Between the Proposed Method and the VIKOR Method

The VIKOR method, proposed by Opricovic in 1998, aims to find a compromise solution between the maximum “group utility” of the “majority” and the minimum “individual regret” of the “opponent” (Opricovic and Tzeng, 2007). The VIKOR method first normalizes each element in the decision matrix by Eq. (14), and then computes the group utility value ${K_{i}}$ and the individual regret value ${R_{i}}$ by Eqs. (15) and (16), respectively. Next, the method calculates the compromise value ${C_{i}}$ by Eq. (17), where a smaller value indicates a better alternative. Finally, according to the ranks on ${K_{i}}$, ${R_{i}}$ and ${C_{i}}$, three ranking lists are obtained. The results deduced by the VIKOR method based on the data in Section 4 are shown in Table 8.
(14)
\[\begin{aligned}{}& \left\{\begin{array}{l@{\hskip4.0pt}l}{\hat{x}_{ij}^{\ast }}=({\max _{i}}{x_{ij}}-{x_{ij}})/({\max _{i}}{x_{ij}}-{\min _{i}}{x_{ij}}),\hspace{1em}& \text{for benefit criteria},\\ {} {\hat{x}_{ij}^{\ast }}=({\min _{i}}{x_{ij}}-{x_{ij}})/({\min _{i}}{x_{ij}}-{\max _{i}}{x_{ij}}),\hspace{1em}& \text{for cost criteria},\end{array}\right.\end{aligned}\]
(15)
\[\begin{aligned}{}& {K_{i}}={\sum \limits_{j=1}^{n}}\big({w_{j}}{\hat{x}_{ij}^{\ast }}\big),\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(16)
\[\begin{aligned}{}& {R_{i}}=\underset{j}{\max }\big({w_{j}}{\hat{x}_{ij}^{\ast }}\big),\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(17)
\[\begin{aligned}{}& {C_{i}}=\alpha \frac{{K_{i}}-\min {K_{i}}}{\max {K_{i}}-\min {K_{i}}}+(1-\alpha )\frac{{R_{i}}-\min {R_{i}}}{\max {R_{i}}-\min {R_{i}}},\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
where ${\hat{x}_{ij}^{\ast }}$ represents the normalized performance value of the ith alternative under the jth criterion, and α is a parameter whose value is determined by experts according to their preferences. Without loss of generality, we set $\alpha =0.5$.
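A corresponding sketch of the VIKOR computations (Eqs. (14)–(17)) might look as follows (NumPy assumed; argument names are illustrative, and smaller returned values indicate better alternatives):

```python
import numpy as np

def vikor(X, w, is_benefit, alpha=0.5):
    """VIKOR values K_i, R_i and C_i, Eqs. (14)-(17); smaller values are better."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    X_star = np.where(is_benefit,
                      (hi - X) / (hi - lo),
                      (X - lo) / (hi - lo))      # Eq. (14)
    K = (w * X_star).sum(axis=1)                 # Eq. (15), group utility
    R = (w * X_star).max(axis=1)                 # Eq. (16), individual regret
    C = (alpha * (K - K.min()) / (K.max() - K.min())
         + (1 - alpha) * (R - R.min()) / (R.max() - R.min()))   # Eq. (17)
    return K, R, C
```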
Table 8
The results obtained by the VIKOR method.
Providers ${K_{i}}$ Rank ${R_{i}}$ Rank ${C_{i}}$ Ranks
${P_{1}}$ 0.4754 5 0.0800 6 0.4391 5
${P_{2}}$ 0.6256 7 0.0980 7 0.8680 7
${P_{3}}$ 0.4692 4 0.0590 2 0.2551 3
${P_{4}}$ 0.4491 3 0.0720 4 0.3240 4
${P_{5}}$ 0.5178 6 0.0753 5 0.4801 6
${P_{6}}$ 0.6304 8 0.1130 8 1.0000 8
${P_{7}}$ 0.4361 2 0.0637 3 0.2316 2
${P_{8}}$ 0.3632 1 0.0521 1 0.0000 1
Comparing the ranking result deduced by the proposed MACONT method with that obtained by the VIKOR method, except for ${P_{7}}$ and ${P_{8}}$, the ranks of the other providers are different. The reasons for this phenomenon may be as follows. In terms of normalization technique, the third normalization technique used in the MACONT method (Eq. (3)) is similar to the normalization technique used in the VIKOR method (Eq. (14)), but in the proposed method a larger normalized value corresponds to a better alternative, whereas in the VIKOR method a smaller normalized value corresponds to a better alternative. Furthermore, the VIKOR method uses only one normalization technique, while the proposed method synthesizes three normalization techniques. In terms of aggregation operators, the VIKOR method applies the arithmetic weighted aggregation operator and considers the worst performance of alternatives over all criteria, while the MACONT method combines the arithmetic weighted and geometric weighted aggregation operators with the weighted maximum and weighted minimum; that is to say, the MACONT method considers the good and the bad performance of alternatives on all criteria simultaneously.

5.2.3 Comparative Analysis Between the Proposed Method and the WASPAS Method

The WASPAS method, introduced by Zavadskas et al. (2012), first normalizes each element in the decision matrix by the linear ratio-based normalization technique (Eq. (2)), and then the normalized performance values of alternatives on all criteria are aggregated by the arithmetic weighted aggregation operator (Eq. (18)) and the geometric weighted aggregation operator (Eq. (19)). Afterwards, a parameter β (here $\beta =0.5$) is introduced to combine the values deduced by Eqs. (18) and (19). Finally, the comprehensive score of each alternative is obtained by Eq. (20) to determine the ranking of alternatives. The results deduced by the WASPAS method based on the data in Section 4 are shown in Table 9.
(18)
\[\begin{aligned}{}& {G_{i}^{1}}={\sum \limits_{j=1}^{n}}\big({w_{j}}{\hat{x}_{ij}^{2}}\big),\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(19)
\[\begin{aligned}{}& {G_{i}^{2}}={\prod \limits_{j=1}^{n}}{\big({\hat{x}_{ij}^{2}}\big)^{{w_{j}}}},\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(20)
\[\begin{aligned}{}& {G_{i}}=\beta {G_{i}^{1}}+(1-\beta ){G_{i}^{2}},\hspace{1em}i=1,2,\dots ,m.\end{aligned}\]
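The corresponding WASPAS computations (Eqs. (2) and (18)–(20)) can be sketched in the same way (NumPy assumed; argument names are illustrative):

```python
import numpy as np

def waspas(X, w, is_benefit, beta=0.5):
    """WASPAS joint generalized criterion value, Eqs. (18)-(20); larger is better."""
    X2 = np.where(is_benefit, X / X.max(axis=0), X.min(axis=0) / X)  # Eq. (2)
    G1 = (w * X2).sum(axis=1)              # Eq. (18), weighted sum part
    G2 = np.prod(X2 ** w, axis=1)          # Eq. (19), weighted product part
    return beta * G1 + (1 - beta) * G2     # Eq. (20)
```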
Table 9
The results obtained by the WASPAS method.
Providers ${G_{i}^{1}}$ ${G_{i}^{2}}$ ${G_{i}}$ Ranks
${P_{1}}$ 0.7028 0.6715 0.6871 3
${P_{2}}$ 0.5764 0.5245 0.5504 8
${P_{3}}$ 0.6750 0.6619 0.6685 4
${P_{4}}$ 0.6844 0.6412 0.6628 5
${P_{5}}$ 0.6477 0.6021 0.6249 6
${P_{6}}$ 0.5844 0.5306 0.5575 7
${P_{7}}$ 0.7155 0.6762 0.6958 2
${P_{8}}$ 0.7437 0.7088 0.7262 1
Comparing the ranking result of the proposed MACONT method and that of the WASPAS method, the ranks of ${P_{1}}$, ${P_{3}}$, ${P_{4}}$ and ${P_{5}}$ are different. Although both methods use the linear ratio-based normalization technique and the combination of arithmetic weighted aggregation operator and geometric weighted aggregation operator, the WASPAS method only considers one kind of normalization technique and the aggregation operator is aimed at aggregating the performance values of alternatives, while the MACONT method synthesizes three kinds of normalization techniques and the aggregation operator is aimed at aggregating the distances between each alternative and the virtual reference alternative.

5.2.4 Comparative Analysis Between the Proposed Method and the ARAS Method

The ARAS method, presented by Zavadskas and Turskis (2010), first sets the optimal alternative ${P^{\prime }_{0}}({x_{01}},{x_{02}},\dots ,{x_{0n}})$ as the reference alternative by Eq. (21), and then normalizes the decision matrix by the linear sum-based normalization technique (Eq. (1)). Next, the normalized performance values of alternatives on all criteria are aggregated by the arithmetic weighted aggregation operator (Eq. (22)). Afterwards, the utility degrees of alternatives are calculated by Eq. (23) to determine the ranking of alternatives in descending order. The results deduced by the ARAS method based on the data in Section 4 are shown in Table 10.
(21)
\[\begin{aligned}{}& \left\{\begin{array}{l@{\hskip4.0pt}l}{x_{0j}}={\max _{i}}{x_{ij}},\hspace{1em}& \text{for benefit criteria},\\ {} {x_{0j}}={\min _{i}}{x_{ij}},\hspace{1em}& \text{for cost criteria},\end{array}\right.\hspace{1em}j=1,2,\dots ,n,\end{aligned}\]
(22)
\[\begin{aligned}{}& {Z_{i}}={\sum \limits_{j=1}^{n}}\big({w_{j}}{\hat{x}_{ij}^{1}}\big),\hspace{1em}i=0,1,2,\dots ,m,\end{aligned}\]
(23)
\[\begin{aligned}{}& {\textit{UD}_{i}}={Z_{i}}/{Z_{0}},\hspace{1em}i=0,1,2,\dots ,m.\end{aligned}\]
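Similarly, a minimal sketch of the ARAS computations (Eqs. (21)–(23), with the sum-based normalization of Eq. (1) applied to the extended matrix that includes the ideal row; NumPy assumed, argument names illustrative):

```python
import numpy as np

def aras(X, w, is_benefit):
    """ARAS utility degrees UD_i, Eqs. (21)-(23); larger is better."""
    x0 = np.where(is_benefit, X.max(axis=0), X.min(axis=0))   # Eq. (21), optimal row
    E = np.vstack([x0, X])                 # extended matrix; row 0 is the ideal P'_0
    Y = np.where(is_benefit, E, 1.0 / E)
    X1 = Y / Y.sum(axis=0)                 # Eq. (1), linear sum-based normalization
    Z = (w * X1).sum(axis=1)               # Eq. (22), optimality function values
    return Z[1:] / Z[0]                    # Eq. (23)
```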
Table 10
The results obtained by the ARAS method.
Providers ${Z_{i}}$ $U{D_{i}}$ Ranks
${P^{\prime }_{0}}$ 0.1593 1.0000 –
${P_{1}}$ 0.1123 0.7054 3
${P_{2}}$ 0.0904 0.5678 8
${P_{3}}$ 0.1066 0.6694 5
${P_{4}}$ 0.1083 0.6798 4
${P_{5}}$ 0.1012 0.6356 6
${P_{6}}$ 0.0921 0.5781 7
${P_{7}}$ 0.1126 0.7070 2
${P_{8}}$ 0.1172 0.7358 1
Comparing the ranking result of the proposed MACONT method and that of the ARAS method, the ranks of ${P_{1}}$, ${P_{3}}$, ${P_{4}}$ and ${P_{5}}$ are different. Both methods use the linear sum-based normalization technique, but the MACONT method also integrates the other two normalization techniques. In terms of the aggregation methods, only the arithmetic weighted aggregation operator is used in the ARAS method, while the geometric weighted average operator is also used in the MACONT method. Furthermore, in the setting of the reference alternative, the ARAS method sets the best performance of alternatives on all criteria as the reference alternative and determines the alternative ranking according to the ratio of utility degrees of alternatives and the reference alternative, while the MACONT method sets the average performance of alternatives on all criteria as the reference alternative and determines the alternative ranking based on the distance between each alternative and the reference alternative.

5.2.5 Comparative Analysis Between the Proposed Method and the MULTIMOORA Method

The MULTIMOORA method, proposed by Brauers and Zavadskas (2010), exploits three subordinate ranking methods to obtain three ranking lists based on the decision matrix normalized by the vector normalization technique (Eq. (8)). The first subordinate ranking method is the Ratio System, whose utility values are calculated by Eq. (24). The second is the Reference Point Approach, whose utility values are calculated by Eq. (25). The third is the Full Multiplicative Form, whose utility values are calculated by Eq. (26). Afterwards, the method aggregates the three subordinate ranking results based on the dominance theory (Brauers and Zavadskas, 2011) to determine the final ranking of alternatives. The results derived by the MULTIMOORA method based on the data in Section 4 are shown in Table 11.
(24)
\[\begin{aligned}{}& {Y_{i}^{1}}={\sum \limits_{j=1}^{g}}({w_{j}}{\tilde{x}_{ij}})-{\sum \limits_{j=1}^{{g^{\prime }}}}({w_{j}}{\tilde{x}_{ij}}),\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(25)
\[\begin{aligned}{}& \left\{\begin{array}{l@{\hskip4.0pt}l}{Y_{i}^{2}}={\max _{j}}[{w_{j}}({\max _{i}}{\tilde{x}_{ij}}-{\tilde{x}_{ij}})],\hspace{1em}& \text{for benefit criteria},\\ {} {Y_{i}^{2}}={\max _{j}}[{w_{j}}({\tilde{x}_{ij}}-{\min _{i}}{\tilde{x}_{ij}})],\hspace{1em}& \text{for cost criteria},\end{array}\right.\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(26)
\[\begin{aligned}{}& {Y_{i}^{3}}={\prod \limits_{j=1}^{g}}{({\tilde{x}_{ij}})^{{w_{j}}}}\Big/{\prod \limits_{j=1}^{{g^{\prime }}}}{({\tilde{x}_{ij}})^{{w_{j}}}},\hspace{1em}i=1,2,\dots ,m.\end{aligned}\]
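As a rough illustration, the three subordinate utilities in Eqs. (24)–(26) can be computed as in the Python sketch below. The vector normalization of Eq. (8) is assumed to be the standard $x/\sqrt{{\textstyle\sum} x^{2}}$ form, the dominance-theory aggregation of the three rankings is not reproduced here, and the function name and toy data are illustrative only.

import numpy as np

def multimoora_subordinate(X, weights, benefit):
    # Hedged sketch of the three subordinate MULTIMOORA utilities (Eqs. (24)-(26)).
    X = np.asarray(X, dtype=float)
    V = X / np.sqrt((X ** 2).sum(axis=0))    # vector normalization (assumed form of Eq. (8))
    # Eq. (24): Ratio System -- weighted benefits minus weighted costs.
    Y1 = (weights * V)[:, benefit].sum(axis=1) - (weights * V)[:, ~benefit].sum(axis=1)
    # Eq. (25): Reference Point Approach -- largest weighted deviation (smaller is better).
    dev = np.where(benefit, weights * (V.max(axis=0) - V), weights * (V - V.min(axis=0)))
    Y2 = dev.max(axis=1)
    # Eq. (26): Full Multiplicative Form -- weighted product of benefits over costs.
    Y3 = np.prod(V[:, benefit] ** weights[benefit], axis=1) / \
         np.prod(V[:, ~benefit] ** weights[~benefit], axis=1)
    return Y1, Y2, Y3

# Illustrative data only: 3 alternatives, 2 criteria (benefit, cost).
Y1, Y2, Y3 = multimoora_subordinate([[8, 4], [6, 5], [9, 7]],
                                    np.array([0.6, 0.4]),
                                    np.array([True, False]))
print(Y1, Y2, Y3)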
Table 11
The results obtained by the MULTIMOORA method.
Providers ${Y_{i}^{1}}$ Ranks ${Y_{i}^{2}}$ Ranks ${Y_{i}^{3}}$ Ranks Final ranks
${P_{1}}$ 0.1973 3 0.0276 5 0.5883 3 4
${P_{2}}$ 0.1309 7 0.0369 7 0.4595 8 7
${P_{3}}$ 0.1853 5 0.0219 1 0.5799 4 3
${P_{4}}$ 0.1909 4 0.0232 3 0.5617 5 5
${P_{5}}$ 0.1524 6 0.0292 6 0.5275 6 6
${P_{6}}$ 0.1303 8 0.0439 8 0.4648 7 8
${P_{7}}$ 0.1999 2 0.0251 4 0.5924 2 2
${P_{8}}$ 0.2150 1 0.0229 2 0.6210 1 1
Comparing the ranking result of the proposed MACONT method with that of the MULTIMOORA method, we find that the ranks of all providers except ${P_{1}}$, ${P_{7}}$ and ${P_{8}}$ are different. Although the two methods are similar in the form of aggregation, and both take into account the compensation and non-compensation effects among criteria, they differ considerably. On the one hand, the MULTIMOORA method uses only the vector normalization technique, while the MACONT method comprehensively uses three linear normalization techniques. On the other hand, the MULTIMOORA method divides the criteria into benefit and cost types during aggregation, so it can only be applied to MCDM problems containing both cost and benefit criteria, whereas the MACONT method distinguishes the criterion types during normalization, which reduces the amount of calculation to a certain extent and gives it a wider scope of application than the MULTIMOORA method.
The ranks of the providers obtained by the proposed MACONT method and the aforementioned methods are displayed in Fig. 1. From this figure, we can see that the ranking results derived by the different MCDM methods are not identical, and that the ranking of the providers derived by the proposed MACONT method represents a comprehensive solution.
Fig. 1. Comparison of the MACONT method and the other MCDM methods.

6 Conclusion

This study proposed the MACONT method, which involves a comprehensive normalization technique based on criterion types and two mixed aggregation operators that aggregate the distance values between each alternative and the reference alternative on different criteria from the perspectives of compensation and non-compensation. To demonstrate the applicability of the proposed method, an illustrative example regarding the selection of sustainable third-party reverse logistics providers was given. Through the sensitivity analyses and comparative analyses, we highlight that the proposed MACONT method has the following advantages:
  • 1) It integrates three linear normalization techniques with respect to criterion types so that the normalized values reflect the original values synthetically, which helps to reduce the deviations produced by any single normalization technique;
  • 2) It measures the good and bad performance of each alternative relative to the other alternatives through a single reference alternative, which is easy to operate and makes the results convincing;
  • 3) It applies two mixed aggregation operators to obtain a multi-aspect and reliable result from the perspectives of compensation and non-compensation among criteria;
  • 4) It introduces several parameters that broaden the application scope of the method and allow experts to assign values according to the actual situation of a decision-making problem, which makes the results reasonable and reliable.
A limitation of this study is that we did not analyse the impact of changes in the criterion weights on the final result derived by the proposed method, because the number of criteria in the illustrative example is large and it is difficult to isolate the influence of weight changes on the ranking results; we will analyse this issue in future work. In addition, we will consider combining the proposed method with fuzzy set theory, extending it to the intuitionistic fuzzy, hesitant fuzzy linguistic and probabilistic linguistic environments to solve complex decision-making problems in various fields.

A Appendix

Table A.1
Full names of abbreviations about MCDM methods.
Abbreviation Explanation
TOPSIS Technique for Order Preference by Similarity to Ideal Solution
ELECTRE ELimination Et Choix Traduisant la REalité (in French): ELimination and Choice Expressing the Reality
GRA Grey relational analysis
VIKOR VlseKriterijumska Optimizacija I Kompromisno Resenje
PROMETHEE Preference Ranking Organization METHod for Enrichment of Evaluations
DEA Data Envelopment Analysis
BWM Best Worst Method
ARAS Additive Ratio ASsessment
WSM Weighted Sum Method
AHP Analytic Hierarchy Process
ANP Analytic Network Process
TODIM an acronym in Portuguese for interactive and multi-criteria decision making
EXPROM EXtension of the PROMethee
MULTIMOORA Multi-Objective Optimization on the basis of a Ratio Analysis plus the full MULTIplicative form
MOORA Multi-Objective Optimization on the basis of Ratio Analysis
COPRAS COmplex PRoportional ASsessment
IDOCRIW Integrated Determination of Objective CRIteria Weights
EDAS Evaluation based on Distance from Average Solution
MABAC Multi-Attributive Border Approximation area Comparison
MACBETH Measuring Attractiveness by a Categorical Based Evaluation TecHnique
MAUT Multi-Attribute Utility Theory
CRITIC CRiteria Importance Through Intercriteria Correlation
KEMIRA KEmeny Median Indicator Ranks Accordance
CoCoSo Combined Compromise Solution
DNMA Double Normalization-based Multiple Aggregation
WASPAS Weighted Aggregated Sum Product ASsessment
GLDS Gained and Lost Dominance Score

References

 
Aggarwal, M. (2017). Discriminative aggregation operators for multi criteria decision making. Applied Soft Computing, 52, 1058–1069. https://doi.org/10.1016/j.asoc.2016.09.025.
 
Alinezhad, A., Khalili, J. (2019). New Methods and Applications in Multiple Attribute Decision Making (MADM). International Series in Operations Research & Management Science, Vol. 277. Springer, Switzerland. https://doi.org/10.1007/978-3-030-15009-9.
 
Bai, C., Sarkis, J. (2019). Integrating and extending data and decision tools for sustainable third-party reverse logistics provider selection. Computers & Operations Research, 110, 188–207. https://doi.org/10.1016/j.cor.2018.06.005.
 
Bana e Costa, C.A., Chagas, M.P. (2004). A career choice problem: an example of how to use MACBETH to build a quantitative value model based on qualitative value judgments. European Journal of Operational Research, 153(2), 323–331. https://doi.org/10.1016/s0377-2217(03)00155-3.
 
Blanco-Mesa, F., León-Castro, E., Merigó, J.M. (2019). A bibliometric analysis of aggregation operators. Applied Soft Computing, 81, 105488. https://doi.org/10.1016/j.asoc.2019.105488.
 
Brauers, W.K.M., Zavadskas, E.K. (2009). Robustness of the multi-objective MOORA method with a test for the facilities sector. Technological and Economic Development of Economy, 15(2), 352–375. https://doi.org/10.3846/1392-8619.2009.15.352-375.
 
Brauers, W.K.M., Zavadskas, E.K. (2010). Project management by MULTIMOORA as an instrument for transition economies. Technological and Economic Development of Economy, 16(1), 5–24. https://doi.org/10.3846/tede.2010.01.
 
Brauers, W.K.M., Zavadskas, E.K. (2011). MULTIMOORA optimization used to decide on a bank loan to buy property. Technological and Economic Development of Economy, 17(1), 174–188. https://doi.org/10.3846/13928619.2011.560632.
 
Chen, P. (2019). Effects of normalization on the entropy-based TOPSIS method. Expert Systems with Applications, 136, 33–41. https://doi.org/10.1016/j.eswa.2019.06.035.
 
Diakoulaki, D., Mavrotas, G., Papayannakis, L. (1995). Determining objective weights in multiple criteria problems: the CRITIC method. Computers & Operations Research, 22(7), 763–770. https://doi.org/10.1016/0305-0548(94)00059-h.
 
Emovon, I., Norman, R.A., Murphy, A.J. (2016). Methodology of using an integrated averaging technique and MAUT method for failure mode and effects analysis. Journal of Engineering and Technology, 7(1), 140–155. http://journal.utem.edu.my/index.php/jet/article/view/777.
 
Gomes, L.F.A.M. (2009). An application of the TODIM method to the multicriteria rental evaluation of residential properties. European Journal of Operational Research, 193(1), 204–211. https://doi.org/10.1016/j.ejor.2007.10.046.
 
Govindan, K., Jepsen, M.B. (2016). ELECTRE: a comprehensive literature review on methodologies and applications. European Journal of Operational Research, 250(1), 1–29. https://doi.org/10.1016/j.ejor.2015.07.019.
 
Govindan, K., Kadziński, M., Ehling, R., Miebs, G. (2018). Selection of a sustainable third-party reverse logistics provider based on the robustness analysis of an outranking graph kernel conducted with ELECTRE I and SMAA. Omega, 85, 1–15. https://doi.org/10.1016/j.omega.2018.05.007.
 
Hwang, C.L., Yoon, K. (1981). Multiple Attribute Decision Making-Methods and Applications: A State-of-the-Art Survey. Lecture Notes in Economics and Mathematical Systems, Vol. 186. Springer-Verlag, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-48318-9.
 
Jahan, A., Edwards, K.L. (2015). A state-of-the-art survey on the influence of normalization techniques in ranking: improving the materials selection process in engineering design. Materials & Design (1980–2015), 65, 335–342. https://doi.org/10.1016/j.matdes.2014.09.022.
 
Jharkharia, S., Shankar, R. (2007). Selection of logistics service provider: an analytic network process (ANP) approach. Omega, 35(3), 274–289. https://doi.org/10.1016/j.omega.2005.06.005.
 
Keshavarz Ghorabaee, M., Zavadskas, E.K., Olfat, L., Turskis, Z. (2015). Multi-criteria inventory classification using a new method of evaluation based on distance from average solution (EDAS). Informatica, 26, 435–451. https://doi.org/10.15388/Informatica.2015.57.
 
Kou, G., Lu, Y., Peng, Y., Shi, Y. (2012). Evaluation of classification algorithms using MCDM and rank correlation. International Journal of Information Technology & Decision Making, 11(01), 197–225. https://doi.org/10.1142/s0219622012500095.
 
Kou, G., Yang, P., Peng, Y., Xiao, F., Chen, Y., Alsaadi, F.E. (2020). Evaluation of feature selection methods for text classification with small datasets using multiple criteria decision-making methods. Applied Soft Computing, 86, 105836. https://doi.org/10.1016/j.asoc.2019.105836.
 
Krylovas, A., Zavadskas, E.K., Kosareva, N., Dadelo, S. (2014). New KEMIRA method for determining criteria priority and weights in solving MCDM problem. International Journal of Information Technology & Decision Making, 13(06), 1119–1133. https://doi.org/10.1142/s0219622014500825.
 
Lahtinen, T.J., Hämäläinen, R.P., Jenytin, C. (2020). On preference elicitation processes which mitigate the accumulation of biases in multi-criteria decision analysis. European Journal of Operational Research, 282(1), 201–210. https://doi.org/10.1016/j.ejor.2019.09.004.
 
Liao, H.C., Wu, X.L. (2020). DNMA: a double normalization-based multiple aggregation method for multi-expert multi-criteria decision making. Omega, 94, 102058. https://doi.org/10.1016/j.omega.2019.04.001.
 
Liao, H.C., Xu, Z.S., Herrera-Viedma, E., Herrera, F. (2018). Hesitant fuzzy linguistic term set and its application in decision making: a state-of-the art survey. International Journal of Fuzzy Systems, 20(7), 2084–2110. https://doi.org/10.1007/s40815-017-0432-9.
 
Liao, H.C., Wen, Z., Liu, L.L. (2019). Integrating BWM and ARAS under hesitant linguistic environment for digital supply chain finance supplier selection. Technological and Economic Development of Economy, 25(6), 1188–1212. https://doi.org/10.3846/tede.2019.10716.
 
Liao, H.C., Mi, X.M., Xu, Z.S. (2020). A survey of decision-making methods with probabilistic linguistic information: bibliometrics, preliminaries, methodologies, applications and future directions. Fuzzy Optimization and Decision Making, 19(1), 81–134. https://doi.org/10.1007/s10700-019-09309-5.
 
Mi, X.M., Liao, H.C., Wu, X.L., Xu, Z.S. (2020). Probabilistic linguistic information fusion: a survey on aggregation operators in terms of principles, definitions, classifications, applications and challenges. International Journal of Intelligent Systems, 35(3), 529–556. https://doi.org/10.1002/int.22216.
 
Opricovic, S., Tzeng, G.H. (2004). Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS. European Journal of Operational Research, 156(2), 445–455. https://doi.org/10.1016/S0377-2217(03)00020-1.
 
Opricovic, S., Tzeng, G.H. (2007). Extended VIKOR method in comparison with outranking methods. European Journal of Operational Research, 178(2), 514–529. https://doi.org/10.1016/j.ejor.2006.01.020.
 
Pamucar, D., Cirovic, G. (2015). The selection of transport and handling resources in logistics centers using multi-attributive border approximation area comparison (MABAC). Expert Systems with Applications, 42(6), 3016–3028. https://doi.org/10.1016/j.eswa.2014.11.057.
 
Roy, B. (1991). The outranking approach and the foundations of ELECTRE methods. Theory and Decision, 31(1), 49–73. https://doi.org/10.1007/bf00134132.
 
Wu, X.L., Liao, H.C. (2019). A consensus-based probabilistic linguistic gained and lost dominance score method. European Journal of Operational Research, 272(3), 1017–1027. https://doi.org/10.1016/j.ejor.2018.07.044.
 
Yazdani, M., Zarate, P., Zavadskas, E.K., Turskis, Z. (2019). A combined compromise solution (CoCoSo) method for multi-criteria decision-making problems. Management Decision, 57(9), 2501–2519. https://doi.org/10.1108/MD-05-2017-0458.
 
Zarbakhshnia, N., Soleimani, H., Ghaderi, H. (2018). Sustainable third-party reverse logistics provider evaluation and selection using fuzzy SWARA and developed fuzzy COPRAS in the presence of risk criteria. Applied Soft Computing, 65, 307–319. https://doi.org/10.1016/j.asoc.2018.01.023.
 
Zarbakhshnia, N., Wu, Y., Govindan, K., Soleimani, H. (2019). A novel hybrid multiple attribute decision-making approach for outsourcing sustainable reverse logistics. Journal of Cleaner Production, 242, 118461. https://doi.org/10.1016/j.jclepro.2019.118461.
 
Zavadskas, E.K., Turskis, Z. (2010). A new additive ratio assessment (ARAS) method in multicriteria decision-making. Technological and Economic Development of Economy, 16, 159–172. https://doi.org/10.3846/tede.2010.10.
 
Zavadskas, E.K., Podvezko, V. (2016). Integrated determination of objective criteria weights in MCDM. International Journal of Information Technology & Decision Making, 15(02), 267–283. https://doi.org/10.1142/s0219622016500036.
 
Zavadskas, E.K., Turskis, Z., Antucheviciene, J., Zakarevicius, A. (2012). Optimization of weighted aggregated sum product assessment. Elektronika ir elektrotechnika, 122(6), 3–6. https://doi.org/10.5755/j01.eee.122.6.1810.
 
Zavadskas, E.K., Turskis, Z., Kildienė, S. (2014). State of art surveys of overviews on MCDM/MADM methods. Technological and Economic Development of Economy, 20(1), 165–179. https://doi.org/10.3846/20294913.2014.892037.
 
Zhang, H., Kou, G., Peng, Y. (2019). Soft consensus cost models for group decision making and economic interpretations. European Journal of Operational Research, 277, 964–980. https://doi.org/10.1016/j.ejor.2019.03.009.
 
Zolfani, S.H., Bahrami, M. (2014). Investment prioritizing in high tech industries based on SWARA-COPRAS approach. Technological and Economic Development of Economy, 20(3), 534–533. https://doi.org/10.3846/20294913.2014.881435.

Biographies

Wen Zhi
wenzhi_456789@163.com

Z. Wen is a postgraduate student majoring in logistics engineering at the Business School, Sichuan University, Chengdu, China. She has published several papers in high-quality international journals such as Technological and Economic Development of Economy, Journal of Civil Engineering and Management, and Economic Research-Ekonomska Istrazivanja. Her current research interests are multi-criteria decision-making methods under uncertainty and logistics engineering.

Liao Huchang
liaohuchang@163.com

H. Liao is a research fellow at the Business School, Sichuan University, Chengdu, China. He received his PhD degree in management science and engineering from Shanghai Jiao Tong University, Shanghai, China, in 2015. He has published 3 monographs, 1 chapter, and more than 200 peer-reviewed papers, many in high-quality international journals including European Journal of Operational Research, Omega, IEEE Transactions on Fuzzy Systems, IEEE Transactions on Cybernetics, Information Sciences, Information Fusion, Knowledge-Based Systems, Fuzzy Sets and Systems, Expert Systems with Applications, International Journal of Production Economics, etc. He has been a highly cited researcher since 2019. His current research interests include multiple criteria decision analysis under uncertainty, business intelligence and data science, cognitive computing, fuzzy sets and systems, healthcare management, and evidential reasoning theory with applications in big data analytics. Prof. Liao has been a senior member of IEEE since 2017. He is the editor-in-chief, associate editor, guest editor or editorial board member for 30 international journals, including Information Fusion (SCI), Applied Soft Computing (SCI), Technological and Economic Development of Economy (SSCI), International Journal of Strategic Property Management (SSCI), Computers and Industrial Engineering (SCI), International Journal of Fuzzy Systems (SCI), Journal of Intelligent and Fuzzy Systems (SCI) and Mathematical Problems in Engineering (SCI). Prof. Liao has received numerous honours and awards, including the thousand talents plan for young professionals in Sichuan Province, the candidate of academic and technical leaders in Sichuan Province, the outstanding scientific research achievement award in higher institutions (first class in Natural Science in 2017; second class in Natural Science in 2019), the outstanding scientific research achievement award in Sichuan Province (second class in Social Science in 2019), and the 2015 Endeavour research fellowship award granted by the Australian Government.

Zavadskas Edmundas Kazimieras
edmundas.zavadskas@vgtu.lt

E.K. Zavadskas, PhD, DSc, D.h.c. multi. prof., is a professor at the Department of Construction Management and Real Estate and director of the Institute of Sustainable Construction, Faculty of Civil Engineering, Vilnius Gediminas Technical University, Lithuania, and chief research fellow at the Laboratory of Operational Research. He received his PhD in building structures (1973) and his Dr Sc. (1987) in building technology and management. He is a member of Lithuanian and several foreign Academies of Sciences, Doctor Honoris Causa of Poznan, Saint-Petersburg and Kiev universities, and an honorary international chair professor at the National Taipei University of Technology. He is a member of international organizations, of steering and programme committees at many international conferences, and of the editorial boards of several research journals, and the author and co-author of more than 400 papers and a number of monographs in Lithuanian, English, German and Russian. He is the founding editor of the journals Technological and Economic Development of Economy and Journal of Civil Engineering and Management. Research interests: multi-criteria decision making, civil engineering, energy, sustainable development, fuzzy set theory, fuzzy multi-criteria decision making, sustainability.


Copyright
© 2020 Vilnius University
Open access article under the CC BY license.

Keywords
multiple criteria analysis; comprehensive normalization; mixed aggregation; virtual reference alternative; MACONT

Funding
The work was supported by the National Natural Science Foundation of China under Grant nos. 71771156, 71971145.
