1 Introduction
Decision making is a frequent activity in management. It is a process of analysis and judgment in which an optimal alternative is selected from several alternatives to achieve a certain target. For a decision-making problem, the alternatives and the criteria used to evaluate their performance are two essential elements. However, in many practical decision-making problems, it is difficult or unrealistic for decision-makers to establish a single criterion that covers all aspects of the problem and to identify the best alternative by evaluating alternatives under that criterion alone. In complex environments, it is common to portray the performance of alternatives by multiple criteria, possibly with different dimensions and potentially conflicting with one another, so as to rank the alternatives and then select the optimal one. This has driven the development of various multi-criteria decision-making (MCDM) methods for solving complicated decision-making problems (Alinezhad and Khalili,
2019; Liao
et al.,
2020,
2018; Zavadskas
et al.,
2014). For example, Kou
et al. (
2012) employed the TOPSIS, ELECTRE, GRA, VIKOR, and PROMETHEE methods (the explanations of all abbreviations used in this paper can be found in Table
A.1 in Appendix
A) for classification algorithm selection; Liao
et al. (
2019) integrated the BWM and ARAS methods for digital supply chain finance supplier selection; Kou
et al. (
2020) applied the TOPSIS, VIKOR, GRA, WSM, and PROMETHEE methods to evaluate feature selection methods for text classification with small datasets.
From the perspective of obtaining the final ranking of alternatives, the existing MCDM methods can be divided into two categories: one is based on the pairwise comparisons between alternatives, such as the AHP, ANP, TODIM, PROMETHEE, EXPROM, ELECTRE, and GLDS methods (Wu and Liao,
2019); the other is based on the utility values of alternatives, such as the TOPSIS, VIKOR, ARAS, WASPAS, and MULTIMOORA methods (Wu and Liao,
2019). The latter category of MCDM methods involves the following stages: 1) establishing a decision matrix, 2) normalizing the decision matrix, 3) aggregating the performance of alternatives under all criteria, and 4) determining the ranking of alternatives and the optimal alternative. In this sense, the main reason why different methods may produce different decision-making results lies in the differences in the normalization techniques and aggregation functions used by these methods.
Generally, the performance of alternatives under different criteria is measured in different units, and all elements in a decision matrix must be made dimensionless to enable an effective comparison. Linear normalization, a normalization technique widely used in MCDM methods, has three main forms, i.e. the linear sum-based normalization, linear ratio-based normalization, and linear max-min normalization (Jahan and Edwards,
2015). Each of these normalization techniques has its own emphasis: the linear sum-based normalization technique emphasizes the proportion of the performance of an alternative in the sum of the performance of all alternatives under a criterion; the linear ratio-based normalization technique emphasizes the ratio between the performance of an alternative and the best one under a criterion; the linear max-min normalization technique emphasizes the ratio of the difference between an alternative's performance and the worst performance to the difference between the best and worst performance under a criterion. As we can see, most MCDM methods use only a single normalization technique, which can easily produce faulty results because a single technique cannot fully reflect the original information. In this regard, this study presents a comprehensive normalization technique which combines the aforementioned three normalization techniques to make the normalized data reflect the original data synthetically. It is worth noting that the hybrid/mixed normalization approaches used in many MCDM methods apply a single normalization technique to each type of criterion, while the comprehensive normalization technique proposed in this study emphasizes the integration of multiple normalization techniques for the same type of criterion. To some extent, the comprehensive normalization technique can reduce the error that a single normalization technique introduces into the collective results (as illustrated by the example in Section
3). In addition, to fuse the normalized data derived by the three normalization techniques, we introduce two parameters that represent the weights of the different normalized data according to the preferences of experts.
Almost all MCDM methods rely on aggregation functions to aggregate the performance of alternatives under different criteria, and the choice of aggregation function may directly affect the decision-making results (Aggarwal,
2017). The arithmetic weighted aggregation operator and the geometric weighted aggregation operator have been universally applied in many MCDM methods, such as the VIKOR, WASPAS, ARAS, and MULTIMOORA. The arithmetic weighted aggregation operator has also been used to aggregate group opinions in decision-making problems (Zhang
et al.,
2019). However, these two aggregation operators lead to compensation effects among criteria. An alternative that performs well under a few criteria with high weights but poorly under most criteria may still be selected as the optimal alternative because of the compensation effect among these criteria; yet, given its poor performance under most criteria, it is not the optimal alternative that is expected. In response to this problem, this study fuses the performance of alternatives under different criteria by two mixed aggregation operators from the perspectives of compensation and non-compensation among criteria.
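The compensation effect described above can be made concrete with a small numerical sketch. All weights and performance values below are hypothetical and assumed to be already normalized; the sketch only shows how the arithmetic weighted aggregation operator lets one high-weight criterion compensate for poor performance elsewhere, while the geometric weighted aggregation operator penalizes it:

```python
# Hypothetical example with 2 alternatives and 3 criteria (values already
# normalized to [0, 1]); c1 carries most of the weight.
weights = [0.6, 0.2, 0.2]

a1 = [0.9, 0.1, 0.1]  # excels only under the high-weight criterion c1
a2 = [0.5, 0.6, 0.6]  # moderately good everywhere

def arithmetic(scores, w):
    """Arithmetic weighted aggregation: sum of w_j * x_j (fully compensatory)."""
    return sum(wj * xj for wj, xj in zip(w, scores))

def geometric(scores, w):
    """Geometric weighted aggregation: product of x_j ** w_j (less compensatory)."""
    out = 1.0
    for wj, xj in zip(w, scores):
        out *= xj ** wj
    return out

# The arithmetic operator lets c1 compensate for a1's weak criteria
# (a1 wins, roughly 0.58 vs 0.54) ...
print(arithmetic(a1, weights), arithmetic(a2, weights))
# ... while the geometric operator penalizes the poor performance under c2, c3.
print(geometric(a1, weights), geometric(a2, weights))
```

Here the alternative that is strong only under the single high-weight criterion wins under the arithmetic operator but loses under the geometric one, which is the behaviour the mixed aggregation operators in this study are designed to balance.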
In addition, setting a reference alternative in the decision-making process can reduce the impact of the loss-aversion bias (Lahtinen
et al.,
2020). The reference alternative in many methods, such as the TOPSIS, VIKOR and ARAS, consists of the best performance of alternatives under each criterion, and the optimal alternative is determined according to the principle of the shortest distance from the reference alternative (the TOPSIS method sets not only this reference alternative but also a worst reference alternative, which consists of the worst performance of alternatives under each criterion, and the optimal alternative is additionally determined according to the principle of the farthest distance from that reference alternative). However, few methods use the average performance of alternatives under each criterion as the reference alternative, determining the optimal alternative according to the principle of the longest positive distance and the shortest negative distance from the reference alternative. Inspired by this idea, before the aggregation process, we set a virtual reference alternative which consists of the average performance of alternatives under each criterion. Such a reference alternative can comprehensively consider the good performance and bad performance of an alternative compared with other alternatives.
To sum up, this study is devoted to the following innovations:
1. Present a comprehensive normalization method which combines three linear normalization techniques based on the criterion types to reduce the deviations produced in the normalization process.
2. Set a virtual reference alternative which consists of the average performance of alternatives on each criterion to simultaneously consider the good performance and bad performance of an alternative compared with other alternatives.
3. Introduce two mixed aggregation operators from the perspectives of compensation and non-compensation among criteria to aggregate the distance value between each alternative and the reference alternative under each criterion, which can obtain multi-aspect and reliable ranking results of alternatives.
4. Propose the detailed operational procedure of the MACONT method, and apply this method to solve a selection problem of sustainable third-party reverse logistics providers.
The framework of this study is divided into the following parts: Section
2 reviews the normalization techniques and aggregation functions used in various MCDM methods. Section
3 proposes the mixed aggregation by comprehensive normalization technique (MACONT) method. Section
4 gives an illustrative example to demonstrate the applicability of the proposed method. Section
5 provides some sensitivity analyses and comparative analyses to highlight the advantages of the proposed method. The conclusion is drawn in Section
6.
3 The Mixed Aggregation by Comprehensive Normalization Technique (MACONT) Method
In this section, a new MCDM method called the Mixed Aggregation by COmprehensive Normalization Technique (MACONT) is presented. The main idea of this method is as follows: 1) normalize the performance values of alternatives over criteria by three normalization techniques; 2) synthesize the three normalized performance values; 3) set a virtual reference alternative; 4) combining the weights of criteria, use two mixed aggregation operators to integrate the distances between each alternative and the reference alternative; 5) based on integration of the subordinate comprehensive scores derived by two mixed aggregation operators, calculate the final comprehensive scores of alternatives, and then rank the alternatives according to the final comprehensive scores.
The specific implementation steps of this method for solving MCDM problems are as follows:
Firstly, for an MCDM problem, it is essential to establish a series of alternatives (
${a_{1}},{a_{2}},\dots ,{a_{i}},\dots ,{a_{m}}$) and criteria (
${c_{1}},{c_{2}},\dots ,{c_{j}},\dots ,{c_{n}}$) in advance. One or more experts are invited to provide the evaluation information for the performance of the alternatives over the criteria. According to the evaluation information, a decision matrix can be formed (if multiple experts are invited, the evaluation information provided by each expert can be integrated into a decision matrix by combining the weights of experts) as follows:
where
${x_{ij}}$ represents the performance value of the
ith alternative under the
jth criterion, and
$i=1,2,\dots ,m$,
$j=1,2,\dots ,n$.
Then, normalize the decision matrix respectively by three normalization techniques. The first normalization technique is the linear sum-based normalization technique, as shown in Eq. (
1), and the normalized value is represented by
${\hat{x}_{ij}^{1}}$. The second normalization technique is the linear ratio-based normalization technique, as shown in Eq. (
2), and the normalized value is represented by
${\hat{x}_{ij}^{2}}$. The third normalization technique is the linear max-min normalization technique, as shown in Eq. (
3), and the normalized value is represented by
${\hat{x}_{ij}^{3}}$. From the first normalization technique to the third, the gap among the normalized performance values of alternatives under the criteria increases.
After the three kinds of normalized performance values of alternatives over criteria are obtained, to make the decision-making process flexible, two balance parameters,
λ and
μ, are introduced to integrate these normalized performance values, and the integration equation is as follows:
where
$0\leqslant \lambda $,
$\mu \leqslant 1$, and the values of these two balance parameters are determined by experts. If the experts pay more attention to the proportion of an alternative's performance among all alternatives, then
λ is assigned a larger value; if the experts want to highlight the best performance of alternatives, then
μ is assigned a larger value; if the experts emphasize a large gap between alternatives, that is, they highlight the best performance of alternatives but do not ignore the worst performance of alternatives, then
λ and
μ are assigned smaller values.
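The three normalization techniques and their combination can be sketched as follows. Since the bodies of Eqs. (1)–(4) are not reproduced in this excerpt, the sketch assumes the standard linear forms for a benefit criterion (cost criteria would use the mirrored forms) and assumes that Eq. (4) is the convex combination $\lambda {\hat{x}_{ij}^{1}}+\mu {\hat{x}_{ij}^{2}}+(1-\lambda -\mu ){\hat{x}_{ij}^{3}}$; the input column is hypothetical:

```python
# Sketch of the three linear normalizations and their combination for one
# benefit-criterion column. Eqs. (1)-(4) are not reproduced in the text
# above, so the standard linear forms are assumed, together with
# Eq. (4) ~ lam * x1 + mu * x2 + (1 - lam - mu) * x3.

def normalize_column(col, lam=1/3, mu=1/3):
    """Return the comprehensive normalized values of a benefit-criterion column."""
    x1 = [x / sum(col) for x in col]              # linear sum-based (Eq. (1))
    x2 = [x / max(col) for x in col]              # linear ratio-based (Eq. (2))
    rng = max(col) - min(col)
    x3 = [(x - min(col)) / rng for x in col]      # linear max-min (Eq. (3))
    return [lam * a + mu * b + (1 - lam - mu) * c # comprehensive (Eq. (4), assumed form)
            for a, b, c in zip(x1, x2, x3)]

# Hypothetical performance values of three alternatives under one criterion:
print(normalize_column([4.0, 6.0, 10.0], lam=0.4, mu=0.3))
```

Note how the max-min form stretches the values over [0, 1] while the sum-based form keeps them as proportions, which is exactly the growing-gap behaviour described above.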
To illustrate the function of the comprehensive normalization technique in reducing deviations, we give an example here.
Example 1.
Suppose that there are three alternatives (
${a_{1}}$,
${a_{2}}$,
${a_{3}}$) and three criteria (
${c_{1}}$,
${c_{2}}$,
${c_{3}}$).
${c_{1}}$ and
${c_{2}}$ are benefit criteria and
${c_{3}}$ is a cost criterion. The decision matrix is given as:
By Eqs. (
1)–(
3), we can get three normalized matrices as:
If the weights of all criteria are the same, then, based on the arithmetic weighted aggregation operator, we can obtain the ranking results of the alternatives derived from the above three decision matrices as
${a_{1}}>{a_{3}}>{a_{2}}$,
${a_{1}}>{a_{2}}>{a_{3}}$ and
${a_{2}}>{a_{1}}>{a_{3}}$, respectively. The three ranking results are different, which implies that using a single normalization technique can easily deviate from the original data and lead to unreliable results. Comparatively, by Eq. (
4), we can obtain a comprehensive normalized matrix as:
Let $\lambda =\mu =1/3$; then the ranking result can be obtained as ${a_{1}}>{a_{2}}>{a_{3}}$, which reduces the deviation from the original data and synthesizes the ranking results of the alternatives derived from the above three decision matrices, making the result reliable.
After obtaining a normalized decision matrix, we calculate the average performance values
${\bar{x}_{j}}$ (
$j=1,2,\dots ,n$) of alternatives on each criterion to form a virtual reference alternative. Then, based on the distance between each alternative and the reference alternative, two subordinate comprehensive scores of each alternative,
${S_{1}}({a_{i}})$ and
${S_{2}}({a_{i}})$, are derived by the following two mixed aggregation operators:
where
${\rho _{i}}={\textstyle\sum _{j=1}^{n}}{w_{j}}({\hat{x}_{ij}}-{\bar{x}_{j}})$,
${Q_{i}}={\textstyle\prod _{\gamma =1}^{n}}{({\bar{x}_{j}}-{\hat{x}_{ij}})^{{w_{j}}}}/{\textstyle\prod _{\eta =1}^{n}}{({\hat{x}_{ij}}-{\bar{x}_{j}})^{{w_{j}}}}$, for
$i=1,2,\dots ,m$.
${w_{j}}$ (
$j=1,2,\dots ,n$) represent the weights of criteria determined by experts, and
${\textstyle\sum _{j=1}^{n}}{w_{j}}=1$.
γ ($\gamma =1,2,\dots ,n$) indexes the criteria that satisfy ${\hat{x}_{ij}}<{\bar{x}_{j}}$, and η ($\eta =1,2,\dots ,n$) indexes the criteria that satisfy ${\hat{x}_{ij}}\geqslant {\bar{x}_{j}}$. In addition,
δ and
ϑ (
$0\leqslant \delta $,
$\vartheta \leqslant 1$) are preference parameters. If the experts pay more attention to the comprehensive performance of alternatives, a high value of
δ is given; if the experts pay more attention to the individual performance of alternatives, a small value of
δ is given. If the experts pay more attention to the best performance of alternatives, a high value of
ϑ is given; if the experts pay more attention to the worst performance of alternatives, a small value of
ϑ is given.
In Eq. (
5),
${\rho _{i}}$ and
${Q_{i}}$, respectively, employ the idea of arithmetic weighted aggregation operator and geometric weighted aggregation operator to aggregate the distances between each alternative and the virtual reference alternative under all criteria from the perspective of compensation effect among criteria. Moreover, inspired by the MULTIMOORA method, Eq. (
6) is a combination of the best performance and the worst performance of alternatives under all criteria, which considers the non-compensation effect among criteria.
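The building blocks above can be sketched as follows. Eqs. (5) and (6) themselves are not reproduced in this excerpt, so the code only computes the virtual reference alternative, ${\rho _{i}}$, and ${Q_{i}}$ as they are defined in the text; the normalized matrix and weights are hypothetical, and skipping zero distances in ${Q_{i}}$ to avoid zero factors is an implementation choice, not part of the method:

```python
# Building blocks of the two mixed aggregation operators, following the
# definitions printed in the text (Eqs. (5)-(6) are not reproduced here).

def reference_alternative(matrix):
    """Virtual reference alternative: the average performance under each criterion."""
    m, n = len(matrix), len(matrix[0])
    return [sum(row[j] for row in matrix) / m for j in range(n)]

def rho(row, ref, w):
    """rho_i = sum_j w_j * (x_ij - xbar_j): arithmetic-style aggregation."""
    return sum(wj * (x - r) for wj, x, r in zip(w, row, ref))

def q_ratio(row, ref, w, eps=1e-12):
    """Q_i: geometric-style ratio of below-average to above-average weighted
    distances, as defined in the text; zero distances are skipped."""
    num = den = 1.0
    for wj, x, r in zip(w, row, ref):
        if x < r - eps:          # criteria where the alternative is below average
            num *= (r - x) ** wj
        elif x > r + eps:        # criteria where it is above average
            den *= (x - r) ** wj
    return num / den

# Hypothetical comprehensive normalized matrix (3 alternatives x 3 criteria):
X = [[0.2, 0.5, 0.9],
     [0.6, 0.4, 0.3],
     [0.7, 0.6, 0.6]]
w = [0.5, 0.3, 0.2]
ref = reference_alternative(X)
print([rho(row, ref, w) for row in X])   # the rho_i sum to zero by construction
print([q_ratio(row, ref, w) for row in X])
```

Because the reference alternative is the column-wise mean, the weighted distances ${\rho _{i}}$ always sum to zero over the alternatives, so positive values flag above-average alternatives.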
Afterwards, the final comprehensive score
$S({a_{i}})$ of each alternative is computed by Eq. (
7), and the final ranking of alternatives can be obtained by sorting the comprehensive scores in descending order. The alternative with the highest final comprehensive score is determined as the optimal alternative.
It is noted that, for the accuracy and reliability of results, we need to use a normalization technique to ensure that the dimensions of the values of
${S_{1}}({a_{i}})$ and
${S_{2}}({a_{i}})$ are the same. But because the values of
${S_{1}}({a_{i}})$ and
${S_{2}}({a_{i}})$ may be negative, we adopt the vector normalization technique in Eq. (
7).
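Since the exact combination in Eq. (7) is not reproduced in this excerpt, the sketch below shows only the vector normalization step it relies on, which keeps the sign of negative subordinate scores while bringing both score vectors onto one dimensionless scale; the input scores are hypothetical:

```python
import math

def vector_normalize(scores):
    """Vector normalization: divide each score by the Euclidean norm of the
    whole vector, so negative values keep their sign and all values share
    one dimensionless scale."""
    norm = math.sqrt(sum(s * s for s in scores))
    return [s / norm for s in scores]

# Hypothetical subordinate scores with mixed signs:
s1 = [0.31, -0.16, 0.11, -0.20]
print(vector_normalize(s1))  # unit-norm vector, signs preserved
```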
In summary, the procedure of the proposed MACONT method can be summarized as below:
Step 1. Give the evaluation information of alternatives and the criteria weights, and form a decision matrix based on the evaluation information.
Step 2. Normalize the decision matrix by Eqs. (
1)–(
3), and use Eq. (
4) to integrate the three normalized decision matrices.
Step 3. Set a virtual reference alternative by the average performance values of alternatives on each criterion, and calculate the subordinate comprehensive scores of alternatives by Eqs. (
5) and (
6).
Step 4. Obtain the final comprehensive scores of alternatives by Eq. (
7), and determine the ranking of alternatives and the optimal alternative.
4 An Illustrative Example: Sustainable Third-Party Reverse Logistics Provider Selection
Recently, the selection of sustainable third-party reverse logistics providers has become a hot research topic (Govindan
et al.,
2018; Bai and Sarkis,
2019; Zarbakhshnia
et al.,
2018,
2019). Company R is a multi-national professional paint manufacturing enterprise. To reduce the cost of recycling logistics and enhance its sustainable development, company R needs to choose a suitable provider. First of all, company R selected 8 providers
$({P_{1}},{P_{2}},{P_{3}},{P_{4}},{P_{5}},{P_{6}},{P_{7}},{P_{8}})$ from 26 related providers as candidates, and invited 6 experts with rich professional knowledge and experience to participate in the decision-making process. A series of evaluation criteria were established from the three dimensions of sustainability, including:
• Economic dimension, such as quality, lead time, cost, delivery and services, relationship, and innovativeness;
• Environment dimension, such as pollution controls, resource consumption, remanufacture and reuse, green technology capability, and environmental management system;
• Social dimension, such as health and safety, employment stability, customer satisfaction, reputation, respect for the policy, and contractual stakeholders influence.
The details of the evaluation criteria are shown in Table
2. The weights of these criteria are determined by the experts as (0.048, 0.067, 0.085, 0.026, 0.017, 0.034, 0.098, 0.087, 0.065, 0.113, 0.046, 0.079, 0.047, 0.025, 0.072, 0.080, 0.011).
Table 2
The evaluation criteria of sustainable third-party reverse logistics providers.
Dimensions | Criteria | Type | References
Economic | ${c_{1}}$: Quality | Benefit | Govindan et al. (2018), Bai and Sarkis (2019), Zarbakhshnia et al. (2018, 2019)
 | ${c_{2}}$: Lead time | Cost | Bai and Sarkis (2019), Zarbakhshnia et al. (2018, 2019)
 | ${c_{3}}$: Cost | Cost | Govindan et al. (2018), Bai and Sarkis (2019), Zarbakhshnia et al. (2018, 2019)
 | ${c_{4}}$: Delivery and services | Benefit | Zarbakhshnia et al. (2018, 2019)
 | ${c_{5}}$: Relationship | Benefit | Govindan et al. (2018)
 | ${c_{6}}$: Innovativeness | Benefit | Bai and Sarkis (2019)
Environment | ${c_{7}}$: Pollution controls | Benefit | Bai and Sarkis (2019)
 | ${c_{8}}$: Resource consumption | Cost | Bai and Sarkis (2019)
 | ${c_{9}}$: Remanufacture and reuse | Benefit | Zarbakhshnia et al. (2018, 2019)
 | ${c_{10}}$: Green technology capability | Benefit | Zarbakhshnia et al. (2018)
 | ${c_{11}}$: Environmental management system | Benefit | Govindan et al. (2018)
Social | ${c_{12}}$: Health and safety | Benefit | Bai and Sarkis (2019), Zarbakhshnia et al. (2018, 2019)
 | ${c_{13}}$: Employment stability | Benefit | Zarbakhshnia et al. (2018)
 | ${c_{14}}$: Customer satisfaction | Benefit | Govindan et al. (2018), Zarbakhshnia et al. (2018, 2019)
 | ${c_{15}}$: Reputation | Benefit | Zarbakhshnia et al. (2019)
 | ${c_{16}}$: Respect for the policy | Benefit | Zarbakhshnia et al. (2019)
 | ${c_{17}}$: Contractual stakeholders influence | Benefit | Bai and Sarkis (2019)
Below we use the proposed MACONT method to solve this problem.
Step 1. The experts evaluated the providers’ performance under each criterion and established a decision matrix:
Step 2. We utilize Eqs. (
1)–(
3) to calculate three normalized decision matrices:
Integrate the above three normalized decision matrices by Eq. (
4) to obtain a comprehensive decision matrix (here the two balance parameters are set as
$\lambda =0.4$ and
$\mu =0.3$):
Step 3. Compute the average performance values of the providers on each criterion to form a virtual reference provider
${P_{0}}$, which can be identified as (0.410, 0.395, 0.442, 0.381, 0.354, 0.369, 0.426, 0.366, 0.394, 0.398, 0.316, 0.388, 0.417, 0.483, 0.382, 0.458, 0.340). Calculate the subordinate comprehensive values of the providers by Eqs. (
5) and (
6). Without loss of generality, we let the preference parameters
$\delta =0.5$ and
$\vartheta =0.5$. The results are displayed in Table
3.
Step 4. Calculate the final comprehensive values of providers by Eq. (
7), and rank the providers according to the descending order of the final comprehensive values. The ranking results of the providers are listed in Table
3. We can determine that the optimal provider is
${P_{8}}$.
Table 3
The ranking results of the providers derived by the proposed method.
Providers | ${\rho _{i}}$ | ${Q_{i}}$ | ${S_{1}}({P_{i}})$ | ${S_{2}}({P_{i}})$ | $S({P_{i}})$ | Rank
${P_{1}}$ | 0.0207 | 1.6741 | 0.3074 | 0.0017 | 0.2029 | 4
${P_{2}}$ | −0.0729 | 0.8401 | −0.1595 | 0.0103 | −0.3740 | 8
${P_{3}}$ | 0.0114 | 0.4684 | 0.1071 | 0.0011 | 0.0836 | 6
${P_{4}}$ | 0.0210 | 1.7719 | 0.3222 | 0.0018 | 0.2116 | 3
${P_{5}}$ | −0.0141 | 0.6037 | 0.0297 | 0.0043 | 0.1382 | 5
${P_{6}}$ | −0.0711 | 0.5186 | −0.1968 | 0.0096 | −0.3708 | 7
${P_{7}}$ | 0.0362 | 0.5762 | 0.2154 | 0.0082 | 0.3422 | 2
${P_{8}}$ | 0.0688 | 2.3379 | 0.5797 | 0.0040 | 0.4047 | 1
5 Sensitivity Analyses and Comparative Analyses
In this section, based on the data in Section
4, sensitivity analyses of the parameters set in the proposed method are carried out to explore the impact of changes in the parameter values and criterion weights on the final ranking results of the alternatives. Moreover, other MCDM methods are applied to derive ranking results of the alternatives, and the advantages of the proposed method are highlighted by comparing their results with those of the proposed method.
5.1 Sensitivity Analyses
(1) Sensitivity analyses on the balance parameters λ and μ.
Table 4
The ranking results derived by different values of the parameters λ and μ.
λ | μ | $S({P_{i}})$, $i=1,2,3,4,5,6,7,8$ | Ranks
0 | 0 | $(0.1241,-0.3347,0.1257,0.2271,0.1175,-0.2730,0.3393,0.4267)$ | (5, 7, 4, 3, 6, 8, 2, 1)
0 | 0.5 | $(0.2000,-0.3591,0.0942,0.2124,0.1422,-0.3632,0.3468,0.3988)$ | (4, 7, 6, 3, 5, 8, 2, 1)
0 | 1 | $(0.1880,-0.4004,0.0632,0.1578,0.1810,-0.3333,0.3630,0.4150)$ | (3, 8, 6, 5, 4, 7, 2, 1)
0.2 | 0.2 | $(0.1760,-0.3537,0.1064,0.2179,0.1284,-0.3728,0.3419,0.4188)$ | (4, 7, 6, 3, 5, 8, 2, 1)
0.2 | 0.6 | $(0.1584,-0.3608,0.0798,0.1608,0.1638,-0.3497,0.3579,0.4231)$ | (5, 8, 6, 4, 3, 7, 2, 1)
0.5 | 0 | $(0.1552,-0.3583,0.1048,0.2210,0.1162,-0.3839,0.3354,0.4356)$ | (4, 7, 6, 3, 5, 8, 2, 1)
0.5 | 0.5 | $(0.1805,-0.4138,0.0526,0.1475,0.1702,-0.3453,0.3544,0.4283)$ | (3, 8, 6, 5, 4, 7, 2, 1)
0.6 | 0.2 | $(0.2032,-0.3861,0.0739,0.2191,0.1358,-0.3749,0.3379,0.4008)$ | (4, 8, 6, 3, 5, 7, 2, 1)
1 | 0 | $(0.1687,-0.4387,0.0327,0.1214,0.1552,-0.3570,0.3309,0.4253)$ | (3, 8, 6, 5, 4, 7, 2, 1)
In the process of integrating three normalized matrices, the two balance parameters
λ and
μ are introduced. It can be seen from Table
4 that the rankings of providers derived by different parameter values are different, which shows that experts need to determine parameter values according to actual conditions to ensure the accuracy of the results. Moreover, in the proposed method, if only one of the three normalization techniques is used, i.e.
$\lambda =1$ and
$\mu =0$ or
$\lambda =0$ and
$\mu =1$ or
$\lambda =0$ and
$\mu =0$, we can find from Table
4 that the ranking result deduced by the first two normalization techniques (Eqs. (
1) and (
2)) is (3, 8, 6, 5, 4, 7, 2, 1), while the ranking result deduced by the third normalization technique (Eq. (
3)) is (5, 7, 4, 3, 6, 8, 2, 1). Compared with these single-technique results, the ranking result (3, 8, 6, 5, 4, 7, 2, 1) deduced by the comprehensive normalization technique in the proposed method shows that this technique effectively integrates the three kinds of normalization techniques and obtains a compromise ranking result.
(2) Sensitivity analysis of the preference parameter δ.
In the first mixed aggregation operator of the proposed method (i.e. Eq. (
5)), the preference parameter
δ is set to reasonably aggregate the comprehensive performance and individual performance of alternatives. From Table
5, it can be found that the change of this preference parameter value has little effect on the final ranking result. With the increase of the parameter value, the rank of
${P_{2}}$ rises, while the rank of
${P_{6}}$ falls, which shows that the comprehensive performance of
${P_{2}}$ is better than that of
${P_{6}}$, and the individual performance of
${P_{6}}$ is better than that of
${P_{2}}$.
(3) Sensitivity analysis of the preference parameter ϑ.
Table 5
The ranking results derived by different values of the preference parameter δ.
δ | $S({P_{i}})$, $i=1,2,3,4,5,6,7,8$ | Ranks
0 | $(0.2787,-0.1790,0.0943,0.2935,0.2061,-0.2013,0.3135,0.4354)$ | (4, 7, 6, 3, 5, 8, 2, 1)
0.1 | $(0.2635,-0.2180,0.0922,0.2771,0.1925,-0.2352,0.3192,0.4292)$ | (4, 7, 6, 3, 5, 8, 2, 1)
0.2 | $(0.2484,-0.2570,0.0900,0.2607,0.1789,-0.2691,0.3250,0.4231)$ | (4, 7, 6, 3, 5, 8, 2, 1)
0.3 | $(0.2332,-0.2960,0.0879,0.2443,0.1653,-0.3030,0.3307,0.4170)$ | (4, 7, 6, 3, 5, 8, 2, 1)
0.4 | $(0.2180,-0.3350,0.0857,0.2280,0.1517,-0.3369,0.3364,0.4108)$ | (4, 7, 6, 3, 5, 8, 2, 1)
0.5 | $(0.2029,-0.3740,0.0836,0.2116,0.1382,-0.3708,0.3422,0.4047)$ | (4, 8, 6, 3, 5, 7, 2, 1)
0.6 | $(0.1877,-0.4129,0.0815,0.1952,0.1246,-0.4047,0.3479,0.3986)$ | (4, 8, 6, 3, 5, 7, 2, 1)
0.7 | $(0.1725,-0.4519,0.0793,0.1789,0.1110,-0.4386,0.3537,0.3924)$ | (4, 8, 6, 3, 5, 7, 2, 1)
0.8 | $(0.1574,-0.4909,0.0772,0.1625,0.0974,-0.4725,0.3594,0.3863)$ | (4, 8, 6, 3, 5, 7, 2, 1)
0.9 | $(0.1422,-0.5299,0.0751,0.1461,0.0838,-0.5064,0.3652,0.3801)$ | (4, 8, 6, 3, 5, 7, 2, 1)
1 | $(0.1271,-0.5689,0.0729,0.1298,0.0702,-0.5403,0.3709,0.3740)$ | (4, 8, 6, 3, 5, 7, 2, 1)
In the second mixed aggregation operator of the proposed method, the preference parameter
ϑ is set to reasonably aggregate the best performance and the worst performance of alternatives. From Table
6, we can find that the change of this preference parameter value has a significant influence on the final ranking result. With the increase of the parameter value, the ranks of
${P_{5}}$,
${P_{6}}$,
${P_{7}}$ rise, while the ranks of
${P_{2}}$,
${P_{3}}$,
${P_{4}}$ fall, which shows that the best performance of
${P_{5}}$,
${P_{6}}$,
${P_{7}}$ is better than that of
${P_{2}}$,
${P_{3}}$,
${P_{4}}$, and the worst performance of
${P_{2}}$,
${P_{3}}$,
${P_{4}}$ is better than that of
${P_{5}}$,
${P_{6}}$,
${P_{7}}$.
Table 6
The ranking results derived by different values of the preference parameter ϑ.
ϑ | $S({P_{i}})$, $i=1,2,3,4,5,6,7,8$ | Ranks
0 | $(-0.0123,-0.3437,-0.0232,0.0074,-0.1365,-0.3720,-0.0129,0.1852)$ | (3, 7, 5, 2, 6, 8, 4, 1)
0.1 | $(-0.0052,-0.3578,-0.0194,0.0144,-0.1247,-0.3844,0.0058,0.1954)$ | (4, 7, 5, 2, 6, 8, 3, 1)
0.2 | $(0.0072,-0.3774,-0.0129,0.0264,-0.1051,-0.4013,0.0358,0.2121)$ | (4, 7, 5, 3, 6, 8, 2, 1)
0.3 | $(0.1156,-0.4247,0.0416,0.1297,0.0401,-0.4312,0.2319,0.3299)$ | (4, 7, 5, 3, 6, 8, 2, 1)
0.4 | $(0.0887,-0.4266,0.0283,0.1042,0.0069,-0.4367,0.1905,0.3038)$ | (4, 7, 5, 3, 6, 8, 2, 1)
0.5 | $(0.2029,-0.3740,0.0836,0.2116,0.1382,-0.3708,0.3422,0.4047)$ | (4, 8, 6, 3, 5, 7, 2, 1)
0.6 | $(0.3020,-0.2167,0.1294,0.3034,0.2287,-0.2078,0.4145,0.4673)$ | (4, 8, 6, 3, 5, 7, 2, 1)
0.7 | $(0.3367,-0.1002,0.1442,0.3346,0.2472,-0.0923,0.4067,0.4752)$ | (3, 8, 6, 4, 5, 7, 2, 1)
0.8 | $(0.3462,-0.0374,0.1477,0.3428,0.2459,-0.0314,0.3883,0.4705)$ | (3, 8, 6, 4, 5, 7, 2, 1)
0.9 | $(0.3489,-0.0016,0.1483,0.3448,0.2417,0.0030,0.3734,0.4651)$ | (3, 8, 6, 4, 5, 7, 2, 1)
1 | $(0.3495,0.0209,0.1482,0.3451,0.2377,0.0243,0.3624,0.4606)$ | (3, 8, 6, 4, 5, 7, 2, 1)
5.2 Comparative Analyses
In this subsection, we compare the proposed method with various MCDM methods, including the TOPSIS, VIKOR, WASPAS, ARAS, and MULTIMOORA. The reason for comparison with the TOPSIS method is that both methods use the idea of reference points. The reason for comparison with the VIKOR method is that both methods use the linear max-min normalization. The reason for comparison with the WASPAS method is that both methods use the linear ratio-based normalization technique and the combination of arithmetic weighted aggregation operator and geometric weighted aggregation operator. The reason for comparison with the ARAS method is that both methods use the sum-based normalization technique and arithmetic weighted aggregation operator. The reason for comparison with the MULTIMOORA method is that both methods take into account the compensation and non-compensation effects among criteria.
5.2.1 Comparative Analysis Between the Proposed Method and the TOPSIS Method
The TOPSIS method, introduced by Hwang and Yoon in 1981, identifies the optimal alternative as the one with the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution (Opricovic and Tzeng,
2004). The procedure of the TOPSIS method is as follows. First, normalize the decision matrix by the vector normalization technique (Eq. (
8)). Second, determine two ideal solutions
${P^{+}}$ and
${P^{-}}$ by Eqs. (
9) and (
10), respectively, and calculate the separation degrees of alternatives from two ideal solutions,
${D_{i}^{+}}$ and
${D_{i}^{-}}$, by Eqs. (
11) and (
12), respectively. Finally, calculate the relative closeness degrees of alternatives by Eq. (
13) to attain the ranking of alternatives. The results obtained by the TOPSIS method based on the data in Section
4 are shown in Table
7.
where
${\tilde{x}_{ij}}$ represents the normalized performance value of the
ith alternative under the
jth criterion. In Eqs. (
9) and (
10),
g is associated with the benefit criteria while
${g^{\prime }}$ is associated with the cost criteria.
Table 7
The results obtained by the TOPSIS method.
Providers | ${D_{i}^{+}}$ | ${D_{i}^{-}}$ | $R{C_{i}}$ | Rank
${P_{1}}$ | 0.0439 | 0.0645 | 0.5951 | 2
${P_{2}}$ | 0.0686 | 0.0374 | 0.3526 | 8
${P_{3}}$ | 0.0451 | 0.0503 | 0.5274 | 5
${P_{4}}$ | 0.0476 | 0.0608 | 0.5609 | 4
${P_{5}}$ | 0.0574 | 0.0483 | 0.4571 | 6
${P_{6}}$ | 0.0704 | 0.0388 | 0.3550 | 7
${P_{7}}$ | 0.0449 | 0.0641 | 0.5877 | 3
${P_{8}}$ | 0.0376 | 0.0659 | 0.6368 | 1
Comparing the ranking result of the proposed MACONT method with that of the TOPSIS method, the ranks of all providers except ${P_{2}}$, ${P_{6}}$ and ${P_{8}}$ are different. Both methods set up reference alternatives and measure the distance between each alternative and a reference alternative. The main reason for the different results may be that the two methods adopt different normalization techniques, and that the TOPSIS method needs to set up both the best and the worst reference alternatives and measure the distances of alternatives from each of them, while the MACONT method only needs to set up one reference alternative to measure the good and bad performance of alternatives.
5.2.2 Comparative Analysis Between the Proposed Method and the VIKOR Method
The VIKOR method, proposed by Opricovic in 1998, aims to find a compromise solution balancing the maximum “group utility” of the “majority” and the minimum “individual regret” of the “opponent” (Opricovic and Tzeng,
2007). The VIKOR method firstly normalizes each element in the decision matrix by Eq. (
14), and then computes the group utility value
${K_{i}}$ and the individual regret value
${R_{i}}$ by Eqs. (
15) and (
16), respectively. Next, the method calculates the compromise value
${C_{i}}$ by Eq. (
17). Finally, according to the ranks on
${K_{i}}$,
${R_{i}}$ and
${C_{i}}$, three ranking lists are obtained. The results deduced by the VIKOR method based on the data in Section
4 are shown in Table
8.
where
${\hat{x}_{ij}^{\ast }}$ represents the normalized performance value of the
ith alternative under the
jth criterion, and
α is a parameter whose value is determined by experts according to their preferences. Without loss of generality, we set
$\alpha =0.5$.
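The VIKOR computations just described can be sketched as follows. This follows the common formulation of the method (the group utility, individual regret and compromise value are often denoted S, R and Q in the literature; they are named K, R and C here to match the text). Eqs. (14)-(17) are not reproduced in this section, so the toy inputs in the test call are hypothetical.

```python
import numpy as np

def vikor(X, w, benefit, alpha=0.5):
    """VIKOR sketch: group utility K_i, individual regret R_i and the
    compromise value C_i (for all three, smaller is better)."""
    X = np.asarray(X, dtype=float)
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    # Normalized regret of each entry w.r.t. the per-criterion best value
    # (0 at the best value, 1 at the worst; works for benefit and cost criteria).
    D = (best - X) / (best - worst)
    W = D * np.asarray(w, dtype=float)
    K = W.sum(axis=1)                        # group utility
    R = W.max(axis=1)                        # individual regret
    # Compromise value: alpha trades off group utility against regret
    # (assumes K and R are not constant across alternatives).
    C = (alpha * (K - K.min()) / (K.max() - K.min())
         + (1 - alpha) * (R - R.min()) / (R.max() - R.min()))
    return K, R, C
```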
Table 8
The results obtained by the VIKOR method.

| Providers | ${K_{i}}$ | Rank | ${R_{i}}$ | Rank | ${C_{i}}$ | Rank |
| --- | --- | --- | --- | --- | --- | --- |
| ${P_{1}}$ | 0.4754 | 5 | 0.0800 | 6 | 0.4391 | 5 |
| ${P_{2}}$ | 0.6256 | 7 | 0.0980 | 7 | 0.8680 | 7 |
| ${P_{3}}$ | 0.4692 | 4 | 0.0590 | 2 | 0.2551 | 3 |
| ${P_{4}}$ | 0.4491 | 3 | 0.0720 | 4 | 0.3240 | 4 |
| ${P_{5}}$ | 0.5178 | 6 | 0.0753 | 5 | 0.4801 | 6 |
| ${P_{6}}$ | 0.6304 | 8 | 0.1130 | 8 | 1.0000 | 8 |
| ${P_{7}}$ | 0.4361 | 2 | 0.0637 | 3 | 0.2316 | 2 |
| ${P_{8}}$ | 0.3632 | 1 | 0.0521 | 1 | 0.0000 | 1 |
Comparing the ranking result deduced by the proposed MACONT method with that obtained by the VIKOR method, the ranks of all providers except ${P_{7}}$ and ${P_{8}}$ are different. The reasons for this phenomenon may be as follows. In terms of the normalization technique, the third normalization technique used in the MACONT method (Eq. (3)) is similar to that used in the VIKOR method (Eq. (14)), but in the proposed method a larger normalized value leads to a better final rank, whereas in the VIKOR method a smaller normalized value does. Furthermore, the VIKOR method uses only one normalization technique, while the proposed method synthesizes three normalization techniques. In terms of the aggregation operator, the VIKOR method applies the arithmetic weighted aggregation operator and considers the worst performance of alternatives over all criteria, while the MACONT method applies a combination of the arithmetic and geometric weighted aggregation operators; that is to say, the MACONT method considers the good and bad performance of alternatives on all criteria simultaneously.
5.2.3 Comparative Analysis Between the Proposed Method and the WASPAS Method
The WASPAS method, introduced by Zavadskas et al. (2012), first normalizes each element in the decision matrix by the linear ratio-based normalization technique (Eq. (2)), and then the normalized performance values of alternatives on all criteria are aggregated by the arithmetic weighted aggregation operator (Eq. (18)) and the geometric weighted aggregation operator (Eq. (19)). Afterwards, a parameter β (here $\beta =0.5$) is introduced to combine the values deduced by Eqs. (18) and (19). Finally, the comprehensive score of each alternative can be obtained by Eq. (20) to determine the ranking of alternatives. The results deduced by the WASPAS method based on the data in Section 4 are shown in Table 9.
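A compact sketch of these WASPAS steps, under the standard formulation (linear ratio normalization, then a convex combination of the weighted-sum score ${G_{i}^{1}}$ and the weighted-product score ${G_{i}^{2}}$); the sample inputs in the test call are hypothetical, not the Section 4 data.

```python
import numpy as np

def waspas(X, w, benefit, beta=0.5):
    """WASPAS sketch: G_i = beta*G1_i + (1-beta)*G2_i, where G1 is the
    arithmetic and G2 the geometric weighted aggregation."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    # Linear ratio-based normalization: x/max for benefit, min/x for cost.
    N = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    G1 = (N * w).sum(axis=1)                 # weighted sum model
    G2 = np.prod(N ** w, axis=1)             # weighted product model
    return beta * G1 + (1 - beta) * G2       # comprehensive score, higher = better
```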
Table 9
The results obtained by the WASPAS method.

| Providers | ${G_{i}^{1}}$ | ${G_{i}^{2}}$ | ${G_{i}}$ | Rank |
| --- | --- | --- | --- | --- |
| ${P_{1}}$ | 0.7028 | 0.6715 | 0.6871 | 3 |
| ${P_{2}}$ | 0.5764 | 0.5245 | 0.5504 | 8 |
| ${P_{3}}$ | 0.6750 | 0.6619 | 0.6685 | 4 |
| ${P_{4}}$ | 0.6844 | 0.6412 | 0.6628 | 5 |
| ${P_{5}}$ | 0.6477 | 0.6021 | 0.6249 | 6 |
| ${P_{6}}$ | 0.5844 | 0.5306 | 0.5575 | 7 |
| ${P_{7}}$ | 0.7155 | 0.6762 | 0.6958 | 2 |
| ${P_{8}}$ | 0.7437 | 0.7088 | 0.7262 | 1 |
Comparing the ranking result of the proposed MACONT method with that of the WASPAS method, the ranks of ${P_{1}}$, ${P_{3}}$, ${P_{4}}$ and ${P_{5}}$ are different. Although both methods use the linear ratio-based normalization technique and the combination of the arithmetic and geometric weighted aggregation operators, the WASPAS method considers only one normalization technique and its aggregation operators act on the performance values of alternatives, while the MACONT method synthesizes three normalization techniques and its aggregation operators act on the distances between each alternative and the virtual reference alternative.
5.2.4 Comparative Analysis Between the Proposed Method and the ARAS Method
The ARAS method, presented by Zavadskas and Turskis (2010), first sets the optimal alternative ${P^{\prime }_{0}}({x_{01}},{x_{02}},\dots ,{x_{0n}})$ as the reference alternative by Eq. (21), and then normalizes the decision matrix by the linear sum-based normalization technique (Eq. (1)). Next, the normalized performance values of alternatives on all criteria are aggregated by the arithmetic weighted aggregation operator (Eq. (22)). Afterwards, the utility degrees of alternatives can be calculated by Eq. (23) to determine the ranking of alternatives in descending order. The results deduced by the ARAS method based on the data in Section 4 are shown in Table 10.
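These ARAS steps can be sketched as follows; the handling of cost criteria by reciprocals before the sum-based normalization is the usual convention of the method, and the inputs in the test call are hypothetical.

```python
import numpy as np

def aras(X, w, benefit):
    """ARAS sketch: utility degree UD_i = Z_i / Z_0, where row 0 of the
    extended matrix is the virtual optimal alternative P'_0."""
    X = np.asarray(X, dtype=float)
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    A = np.vstack([best, X])                 # prepend the optimal alternative
    A = np.where(benefit, A, 1.0 / A)        # cost criteria: use reciprocals
    N = A / A.sum(axis=0)                    # linear sum-based normalization
    Z = (N * np.asarray(w, dtype=float)).sum(axis=1)   # arithmetic aggregation
    return Z[1:] / Z[0]                      # utility degrees, higher = better
```

By construction, each normalized entry of the optimal row dominates the corresponding entries of the real alternatives, so every utility degree lies in (0, 1].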
Table 10
The results obtained by the ARAS method.

| Providers | ${Z_{i}}$ | $U{D_{i}}$ | Rank |
| --- | --- | --- | --- |
| ${P^{\prime }_{0}}$ | 0.1593 | 1.0000 | – |
| ${P_{1}}$ | 0.1123 | 0.7054 | 3 |
| ${P_{2}}$ | 0.0904 | 0.5678 | 8 |
| ${P_{3}}$ | 0.1066 | 0.6694 | 5 |
| ${P_{4}}$ | 0.1083 | 0.6798 | 4 |
| ${P_{5}}$ | 0.1012 | 0.6356 | 6 |
| ${P_{6}}$ | 0.0921 | 0.5781 | 7 |
| ${P_{7}}$ | 0.1126 | 0.7070 | 2 |
| ${P_{8}}$ | 0.1172 | 0.7358 | 1 |
Comparing the ranking result of the proposed MACONT method with that of the ARAS method, the ranks of ${P_{1}}$, ${P_{3}}$, ${P_{4}}$ and ${P_{5}}$ are different. Both methods use the linear sum-based normalization technique, but the MACONT method also integrates the other two normalization techniques. In terms of aggregation, only the arithmetic weighted aggregation operator is used in the ARAS method, while the geometric weighted aggregation operator is also used in the MACONT method. Furthermore, in setting the reference alternative, the ARAS method takes the best performance of alternatives on all criteria as the reference alternative and determines the ranking according to the ratio of the utility degree of each alternative to that of the reference alternative, while the MACONT method takes the average performance of alternatives on all criteria as the reference alternative and determines the ranking based on the distance between each alternative and the reference alternative.
5.2.5 Comparative Analysis Between the Proposed Method and the MULTIMOORA Method
The MULTIMOORA method, proposed by Brauers and Zavadskas (2010), exploits three subordinate ranking methods to obtain three ranking lists based on the decision matrix normalized by the vector normalization technique (Eq. (8)). The first subordinate ranking method is the Ratio System, in which the utility values of alternatives are calculated by Eq. (24); the second is the Reference Point Approach, with utility values calculated by Eq. (25); the third is the Full Multiplicative Form, with utility values calculated by Eq. (26). Afterwards, the method aggregates the three subordinate ranking results based on the dominance theory (Brauers and Zavadskas, 2011) to determine the final ranking of alternatives. The results derived by the MULTIMOORA method based on the data in Section 4 are shown in Table 11.
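The three subordinate utilities can be sketched as below (the final dominance-theory aggregation of the three rank lists is omitted). The weighted variants used here are one common reading of the Ratio System, Reference Point Approach and Full Multiplicative Form; since Eqs. (24)-(26) are not reproduced in this section, treat the details as assumptions, and the test inputs as hypothetical.

```python
import numpy as np

def multimoora_scores(X, w, benefit):
    """MULTIMOORA subordinate utilities: Ratio System Y1 (higher = better),
    Reference Point Y2 (lower = better) and Full Multiplicative Form Y3
    (higher = better), all on the vector-normalized matrix."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    b = np.asarray(benefit, dtype=bool)
    N = X / np.linalg.norm(X, axis=0)                   # vector normalization
    Y1 = (N * w * np.where(b, 1.0, -1.0)).sum(axis=1)   # benefit sum minus cost sum
    ref = np.where(b, N.max(axis=0), N.min(axis=0))     # per-criterion reference point
    Y2 = (w * np.abs(ref - N)).max(axis=1)              # maximal weighted deviation
    # Product over benefit criteria divided by product over cost criteria.
    Y3 = np.prod(N ** (w * b), axis=1) / np.prod(N ** (w * ~b), axis=1)
    return Y1, Y2, Y3
```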
Table 11
The results obtained by the MULTIMOORA method.

| Providers | ${Y_{i}^{1}}$ | Rank | ${Y_{i}^{2}}$ | Rank | ${Y_{i}^{3}}$ | Rank | Final rank |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ${P_{1}}$ | 0.1973 | 3 | 0.0276 | 5 | 0.5883 | 3 | 4 |
| ${P_{2}}$ | 0.1309 | 7 | 0.0369 | 7 | 0.4595 | 8 | 7 |
| ${P_{3}}$ | 0.1853 | 5 | 0.0219 | 1 | 0.5799 | 4 | 3 |
| ${P_{4}}$ | 0.1909 | 4 | 0.0232 | 3 | 0.5617 | 5 | 5 |
| ${P_{5}}$ | 0.1524 | 6 | 0.0292 | 6 | 0.5275 | 6 | 6 |
| ${P_{6}}$ | 0.1303 | 8 | 0.0439 | 8 | 0.4648 | 7 | 8 |
| ${P_{7}}$ | 0.1999 | 2 | 0.0251 | 4 | 0.5924 | 2 | 2 |
| ${P_{8}}$ | 0.2150 | 1 | 0.0229 | 2 | 0.6210 | 1 | 1 |
Comparing the ranking result of the proposed MACONT method with that of the MULTIMOORA method, we can find that the ranks of all providers except ${P_{1}}$, ${P_{7}}$ and ${P_{8}}$ are different. Although the two methods are similar in the form of aggregation, and both take into account the compensation and non-compensation effects among criteria, they differ considerably. On the one hand, the MULTIMOORA method uses only the vector normalization technique, while the MACONT method comprehensively uses three linear normalization techniques. On the other hand, the MULTIMOORA method separates the criteria into different types in the process of aggregation, so it can only be applied to MCDM problems with both cost and benefit criteria, while the MACONT method separates the criteria types in the process of normalization, which reduces the amount of calculation to a certain extent and gives it a wider scope of application than the MULTIMOORA method.
The ranks of providers obtained by the proposed MACONT method and the aforementioned methods are displayed in Fig. 1. From this figure, we can find that the ranking results derived by the MCDM methods differ from one another, and that the ranking of providers derived by the proposed MACONT method is a comprehensive solution.
Fig. 1
Comparison of the MACONT method and the other MCDM methods.
6 Conclusion
This study proposed the MACONT method, which involves a comprehensive normalization technique based on criterion types and two mixed aggregation operators that aggregate the distance values between each alternative and the reference alternative on different criteria from the perspectives of compensation and non-compensation. To verify the applicability of the proposed method, an illustrative example regarding the selection of sustainable third-party reverse logistics providers was given. Through the sensitivity analyses and comparative analyses, we highlight that the proposed MACONT method has the following advantages:
-
1) It integrates three linear normalization techniques with respect to criterion types so that the normalized values reflect the original values synthetically, which helps to reduce the deviations produced by any single normalization technique;
-
2) It measures the good and bad performance of one alternative relative to the other alternatives through only one reference alternative, which is easy to operate and makes the results convincing;
-
3) It applies two mixed aggregation operators to obtain a multi-aspect and reliable result from the perspectives of compensation and non-compensation among criteria;
-
4) It introduces several parameters, which enhances the application scope of the method and enables experts to assign parameter values according to the actual situations of decision-making problems, so that the results are reasonable and reliable.
A deficiency of this study is that we did not analyse the impact of changes in criterion weights on the final result derived by the proposed method, because the number of criteria in the illustrative example is large and it is not easy to grasp the influence of weight changes on the ranking results; we will analyse this problem in the future. In addition, we will consider combining the proposed method with fuzzy set theory, extending it to the intuitionistic fuzzy, hesitant fuzzy linguistic and probabilistic linguistic environments to solve complex decision-making problems in various fields.