6.1 Illustrative Example
A college plans to select the most appropriate faculty member for tenure and promotion (Martinez, 2007). Three evaluation criteria are considered: teaching, research, and service. An expert team is invited to assess three faculty candidates. A linguistic term set
S = {
${s_{0}}$: extremely poor,
${s_{1}}$: very poor,
${s_{2}}$: poor,
${s_{3}}$: slightly poor,
${s_{4}}$: fair,
${s_{5}}$: slightly good,
${s_{6}}$: good,
${s_{7}}$: very good,
${s_{8}}$: extremely good} is given. The expert team assesses the three faculty candidates (alternatives)
$A=\{{A_{1}},{A_{2}},{A_{3}}\}$ using the above linguistic term set
S with respect to three evaluation criteria:
${C_{1}}$: teaching,
${C_{2}}$: research and
${C_{3}}$: service.
Members of the expert team may assess the same alternative differently. For example, when evaluating candidate ${A_{1}}$ with respect to ${C_{1}}$ (teaching), one expert gives the value 0.2 for slightly good and the value 0.8 for very good, while another gives the value 0.1 for slightly good and the value 0.7 for good. In this case, ${\mathit{LH}_{11}}$ can be denoted by the 2-TLHFS $\{(({s_{5}},0),0.1,0.2),(({s_{6}},0),0.7),(({s_{7}},0),0.8)\}$. Therefore, in order to reflect the experts' inconsistency and hesitancy, the experts' assessment values for every attribute are expressed by 2-TLHFSs, as shown in Table 1.
The attribute weight vector is unknown, but the experts can provide partial information: ${w_{1}}\in [0.2,0.35]$, ${w_{2}}\in [0.3,0.45]$, ${w_{3}}\in [0.3,0.4]$.
Table 1
2-TLHFSs decision matrix.
| | ${C_{1}}$ | ${C_{2}}$ | ${C_{3}}$ |
| ${A_{1}}$ | $\{(({s_{5}},0),0.1,0.2),(({s_{6}},0),0.7),(({s_{7}},0),0.8)\}$ | $\{(({s_{6}},0),0.9)\}$ | $\{(({s_{6}},0),0.5,0.4),(({s_{7}},0),0.7)\}$ |
| ${A_{2}}$ | $\{(({s_{5}},0),0.5,0.6),(({s_{6}},0),0.6,0.5)\}$ | $\{(({s_{7}},0),0.6),(({s_{8}},0),0.7)\}$ | $\{(({s_{6}},0),0.3,0.5,0.8)\}$ |
| ${A_{3}}$ | $\{(({s_{5}},0),0.2),(({s_{6}},0),0.7,0.8)\}$ | $\{(({s_{6}},0),0.8,0.9)\}$ | $\{(({s_{7}},0),0.4,0.5),(({s_{8}},0),0.1)\}$ |
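To make the 2-TLHFS structure concrete, an entry such as ${\mathit{LH}_{11}}$ can be modeled as a list of pairs, each holding a 2-tuple linguistic term $(s_i,\alpha)$ and its possible membership degrees. The data layout and helper names below are our own illustration, not prescribed by the paper; `delta` and `delta_inv` follow the standard 2-tuple translation functions $\Delta$ and $\Delta^{-1}$.

```python
# A 2-tuple linguistic hesitant fuzzy element is modeled here as a list of
# entries, each pairing a 2-tuple linguistic term (index, alpha) with the set
# of possible membership degrees attached to that term.
# Illustrative layout only; the paper does not prescribe a data structure.

LH_11 = [
    ((5, 0.0), [0.1, 0.2]),  # (s5, 0): slightly good, degrees 0.1 and 0.2
    ((6, 0.0), [0.7]),       # (s6, 0): good, degree 0.7
    ((7, 0.0), [0.8]),       # (s7, 0): very good, degree 0.8
]

def delta_inv(term, g=8):
    """Map a 2-tuple (s_i, alpha) to its numeric value i + alpha in [0, g]."""
    i, alpha = term
    return i + alpha

def delta(beta):
    """Map a numeric value beta back to the closest 2-tuple (s_i, alpha)."""
    i = round(beta)
    return (i, beta - i)

print(delta_inv(LH_11[0][0]))  # -> 5.0
```

The round-trip $\Delta(\Delta^{-1}(s_i,\alpha))=(s_i,\alpha)$ is what keeps aggregation over the continuous scale loss-free.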
Using the proposed MADM approach based on the 2-TLHFEWA operator, the decision procedure is as follows.
Step 1: The decision makers give their evaluation values of the alternative
${A_{i}}$ with respect to the attribute
${C_{j}}$. Then we can obtain a 2-tuple linguistic hesitant fuzzy decision matrix
$H={({\mathit{LH}_{ij}})_{m\times n}}$ (
$i=1,2,\dots ,m$;
$j=1,2,\dots ,n$), as shown in Table 1. Since each attribute ${C_{j}}$ ($j=1,2,\dots ,n$) is a benefit attribute, there is no need to normalize the decision matrix.
Step 2: The attribute weights are unknown, so we must first determine the weight vector. The procedure for obtaining the optimal weight vector is as follows. Using Eqs. (8) and (9), we obtain the 2-tuple linguistic hesitant fuzzy PIS and NIS:
${{A_{1}}^{+}}=3.517$, ${{A_{1}}^{-}}=2.75$;
${{A_{2}}^{+}}=5.4$, ${{A_{2}}^{-}}=4.9$;
${{A_{3}}^{+}}=3.8$, ${{A_{3}}^{-}}=1.975$.
Using Eqs. (10) and (11), we obtain the distances:
${{d_{1}}^{+}}=0.911$, ${{d_{1}}^{-}}=0.815$;
${{d_{2}}^{+}}=0.583$, ${{d_{2}}^{-}}=0.5385$;
${{d_{3}}^{+}}=1.921$, ${{d_{3}}^{-}}=2.198$.
Finally, we obtain a linear programming model by minimizing the distance from every alternative to the positive ideal solution (PIS) and maximizing the distance to the negative ideal solution (NIS) with respect to each attribute.
By solving the above model, we obtain the optimal weight vector $w=(0.2,0.4,0.4)$.
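The linear programming model itself is not reproduced above, but one plausible reading (an assumption on our part, not taken verbatim from the paper) minimizes $\sum_{j} w_j (d_j^{+}-d_j^{-})$ subject to the interval constraints and $\sum_j w_j = 1$. The small vertex enumeration below recovers the stated optimum $w=(0.2,0.4,0.4)$ under that reading.

```python
# Sketch of a weight-determination LP consistent with the stated idea:
# minimize sum_j w_j * (d_j^+ - d_j^-) subject to the partial weight
# information and sum(w) = 1.  This formulation is our assumption.
from itertools import product

d_plus  = [0.911, 0.583, 1.921]   # distances to the PIS per attribute
d_minus = [0.815, 0.5385, 2.198]  # distances to the NIS per attribute
c = [dp - dm for dp, dm in zip(d_plus, d_minus)]  # objective coefficients
bounds = [(0.2, 0.35), (0.3, 0.45), (0.3, 0.4)]   # partial weight info

best = None
# At an optimum of a 3-variable LP with one equality constraint, at least
# two variables sit at their bounds; enumerate those candidate vertices.
for i, j, k in ((0, 1, 2), (0, 2, 1), (1, 2, 0)):
    for bi, bj in product(bounds[i], bounds[j]):
        w = [0.0, 0.0, 0.0]
        w[i], w[j] = bi, bj
        w[k] = 1.0 - bi - bj                      # free variable from sum = 1
        lo, hi = bounds[k]
        if lo - 1e-9 <= w[k] <= hi + 1e-9:        # keep only feasible vertices
            val = sum(ci * wi for ci, wi in zip(c, w))
            if best is None or val < best[0]:
                best = (val, w)

print([round(x, 4) for x in best[1]])  # -> [0.2, 0.4, 0.4]
```

For a full-size problem a general LP solver would replace the enumeration; the tiny instance here makes the vertex structure easy to inspect.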
Step 3: Based on the 2-TLHFEWA operator, we have (where $g=8$):
${\mathit{LH}_{1}}=\text{2-TLHFEWA}({\mathit{LH}_{11}},{\mathit{LH}_{12}},{\mathit{LH}_{13}})=\{(({s_{6}},-0.174),0.679,0.652,0.691,0.663),(({s_{6}},0.338),0.742,0.751),(({s_{6}},0),0.754,0.731),(({s_{6}},0.475),0.804),(({s_{6}},0.252),0.773,0.752),(({s_{7}},-0.327),0.820)\}$.
${\mathit{LH}_{2}}=\text{2-TLHFEWA}({\mathit{LH}_{21}},{\mathit{LH}_{22}},{\mathit{LH}_{23}})=\{(({s_{6}},0.338),0.471,0.542,0.679,0.493,0.562,0.694),(({s_{8}},0),0.523,0.589,0.714,0.544,0.608,0.728),(({s_{6}},0.475),0.493,0.562,0.694,0.471,0.542,0.679),(({s_{8}},0),0.544,0.608,0.728,0.523,0.589,0.714)\}$.
${\mathit{LH}_{3}}=\text{2-TLHFEWA}({\mathit{LH}_{31}},{\mathit{LH}_{32}},{\mathit{LH}_{33}})=\{(({s_{6}},0.338),0.571,0.604,0.663,0.691),(({s_{8}},0),0.478,0.585),(({s_{6}},0.475),0.654,0.682,0.731,0.754,0.678,0.706,0.752,0.773),(({s_{8}},0),0.574,0.665,0.604,0.690)\}$.
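Assuming the standard Einstein weighted-averaging form (the paper's defining equations are not reproduced in this section), the aggregation in Step 3 can be sketched as follows. The helper names are ours; the sketch reproduces the leading term $(({s_{6}},-0.174))$ of ${\mathit{LH}_{1}}$ and its first membership degree (which appears truncated to 0.679 in the text).

```python
from math import prod

def einstein_wa(values, weights):
    """Einstein weighted average of values in [0, 1]."""
    p = prod((1 + a) ** w for a, w in zip(values, weights))
    q = prod((1 - a) ** w for a, w in zip(values, weights))
    return (p - q) / (p + q)

def aggregate_term(betas, weights, g=8):
    """Aggregate 2-tuple values Delta^{-1}(s_i, alpha) via Einstein WA,
    normalizing by the scale granularity g, then translating back."""
    beta = g * einstein_wa([b / g for b in betas], weights)
    i = round(beta)
    return (i, round(beta - i, 3))

w = [0.2, 0.4, 0.4]
# Linguistic parts (s5, s6, s6) of the first combination for A1:
print(aggregate_term([5, 6, 6], w))               # -> (6, -0.174)
# Corresponding membership degrees 0.1, 0.9, 0.5:
print(round(einstein_wa([0.1, 0.9, 0.5], w), 3))  # -> 0.68 (0.679 in the text, truncated)
```

The remaining elements of ${\mathit{LH}_{1}}$, ${\mathit{LH}_{2}}$ and ${\mathit{LH}_{3}}$ arise from the same computation over every combination of terms and membership degrees.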
Step 4: By Definition 9, the expectation values $E({\mathit{LH}_{i}})$ $(i=1,2,3)$ are computed as follows:
Step 5: According to Definition
11, we have the following ranking:
${A_{1}}>{A_{3}}>{A_{2}}$.
Using the proposed MADM approach based on the 2-TLHFEWG operator, the decision procedure is as follows.
Step 1: We can obtain a 2-tuple linguistic hesitant fuzzy decision matrix
$H={({\mathit{LH}_{ij}})_{m\times n}}$ (
$i=1,2,\dots ,m$;
$j=1,2,\dots ,n$), as in Table 1. Because every attribute ${C_{j}}$ ($j=1,2,\dots ,n$) is a benefit attribute, there is no need to normalize the decision matrix.
Step 2: The attribute weights are unknown, so we first determine the weight vector. By applying the model based on similarity to the ideal solution, we obtain the optimal weight vector $w=(0.2,0.4,0.4)$.
Step 3: Based on the 2-TLHFEWG operator, we have (where $g=8$):
${\mathit{LH}_{1}}=\text{2-TLHFEWG}({\mathit{LH}_{11}},{\mathit{LH}_{12}},{\mathit{LH}_{13}})=\{(({s_{6}},-0.207),0.496,0.455,0.554,0.509),(({s_{6}},0.182),0.571,0.634),(({s_{6}},0),0.689,0.638),(({s_{6}},0.394),0.778),(({s_{6}},0.196),0.708,0.657),(({s_{7}},-0.406),0.798)\}$.
${\mathit{LH}_{2}}=\text{2-TLHFEWG}({\mathit{LH}_{21}},{\mathit{LH}_{22}},{\mathit{LH}_{23}})=\{(({s_{6}},0.182),0.445,0.539,0.654,0.462,0.559,0.677),(({s_{7}},-0.432),0.477,0.575,0.695,0.495,0.596,0.718),(({s_{6}},0.394),0.462,0.559,0.677,0.445,0.539,0.654),(({s_{7}},-0.216),0.495,0.596,0.718,0.477,0.575,0.695)\}$.
${\mathit{LH}_{3}}=\text{2-TLHFEWG}({\mathit{LH}_{31}},{\mathit{LH}_{32}},{\mathit{LH}_{33}})=\{(({s_{6}},0.182),0.479,0.522,0.509,0.554),(({s_{7}},-0.432),0.289,0.310),(({s_{6}},0.394),0.603,0.652,0.638,0.689,0.621,0.671,0.657,0.708),(({s_{7}},-0.216),0.376,0.401,0.389,0.415)\}$.
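The geometric counterpart replaces the Einstein average with the Einstein weighted geometric mean. Assuming the standard form (again, the defining equations are not restated here), the sketch below reproduces the leading membership degree 0.496 and linguistic part $(({s_{6}},-0.207))$ of ${\mathit{LH}_{1}}$ above.

```python
from math import prod

def einstein_wg(values, weights):
    """Einstein weighted geometric mean of values in [0, 1]."""
    p = prod(a ** w for a, w in zip(values, weights))
    q = prod((2 - a) ** w for a, w in zip(values, weights))
    return 2 * p / (q + p)

w = [0.2, 0.4, 0.4]
# Membership degrees 0.1, 0.9, 0.5 from the first combination for A1:
print(round(einstein_wg([0.1, 0.9, 0.5], w), 3))      # -> 0.496
# Linguistic parts (s5, s6, s6), normalized by g = 8:
print(round(8 * einstein_wg([5/8, 6/8, 6/8], w), 3))  # -> 5.793, i.e. (s6, -0.207)
```

Because the geometric mean penalizes small arguments more heavily, the EWG memberships sit below their EWA counterparts, which is what drives the different ranking of ${A_{2}}$ and ${A_{3}}$ here.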
Step 4: By Definition 9, the expectation values $E({\mathit{LH}_{i}})$ $(i=1,2,3)$ are computed as follows:
Step 5: According to Definition
11, we have the following ranking:
${A_{1}}>{A_{2}}>{A_{3}}$.
From the above analysis, the best choice in both cases is ${A_{1}}$. So we should choose college faculty ${A_{1}}$ for tenure and promotion.
6.2 Comparison Analysis
(1) A comparison with the approach based on the LHFWA operator (Lin et al., 2014).
For the above numerical example, if the 2-TLHFEWA operator is replaced by the LHFWA operator, the decision-making result is as follows. The attribute weight vector is known: $w=(0.2,0.4,0.4)$.
Based on the LHFWA operator, we have:
${\mathit{LH}_{1}}=\{({s_{5.8}},0.705,0.682,0.711,0.690),({s_{6.2}},0.759,0.765),({s_{6}},0.763,0.745),({s_{6.4}},0.807),({s_{6.2}},0.781,0.765),({s_{6.6}},0.822)\}$.
${\mathit{LH}_{2}}=\{({s_{6.2}},0.477,0.543,0.683,0.500,0.563,0.700),({s_{6.6}},0.534,0.592,0.717,0.554,0.610,0.730),({s_{6.4}},0.500,0.563,0.700,0.477,0.543,0.683),({s_{6.8}},0.554,0.610,0.730,0.534,0.592,0.717)\}$.
${\mathit{LH}_{3}}=\{({s_{6.2}},0.590,0.619,0.690,0.711),({s_{6.6}},0.518,0.635),({s_{6.4}},0.663,0.687,0.745,0.763,0.690,0.711,0.765,0.781),({s_{6.8}},0.604,0.700,0.635,0.723)\}$.
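For reference, the LHFWA-style aggregation uses the algebraic t-conorm: linguistic indices are averaged arithmetically (producing virtual terms such as $s_{5.8}$) and membership degrees via $1-\prod_j(1-a_j)^{w_j}$. The sketch below assumes that standard form and reproduces the leading element $({s_{5.8}},0.705,\dots)$ of ${\mathit{LH}_{1}}$.

```python
from math import prod

def lhfwa_term(indices, weights):
    """Weighted arithmetic mean of linguistic indices (yields virtual terms)."""
    return sum(i * w for i, w in zip(indices, weights))

def lhfwa_membership(values, weights):
    """Algebraic weighted average of membership degrees: 1 - prod((1-a)^w)."""
    return 1 - prod((1 - a) ** w for a, w in zip(values, weights))

w = [0.2, 0.4, 0.4]
# Linguistic parts (s5, s6, s6) of the first combination for A1:
print(round(lhfwa_term([5, 6, 6], w), 3))              # -> 5.8 (s_{5.8})
# Membership degrees 0.1, 0.9, 0.5:
print(round(lhfwa_membership([0.1, 0.9, 0.5], w), 3))  # -> 0.705
```

Contrasting this with the Einstein-based aggregation above makes the comparison concrete: both operators act on the same combinations, differing only in the t-norm/t-conorm pair.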
The expectation values $E({\mathit{LH}_{i}})$ $(i=1,2,3)$ are computed as follows:
So we have the ranking: ${A_{1}}>{A_{3}}>{A_{2}}$.
According to the results, the LHFWA and 2-TLHFEWA operators produce the same ranking and the same most desirable alternative, ${A_{1}}$.
(2) A comparison with the approach based on the GLHFHSWA operator (Meng et al., 2014).
We apply the proposed approach based on the 2-TLHFEWA operator to Example 1 (Lin et al., 2014), where the partially known weight information is: ${w_{1}}\in [0.2,0.35]$, ${w_{2}}\in [0.3,0.45]$, ${w_{3}}\in [0.3,0.4]$. The decision procedure is as follows.
Step 1: The LHFS decision matrix $\mathit{HF}$ is given as in Table 1 of Lin et al. (2014).
Step 2: Translate the LHFSs into 2-TLHFSs to obtain the 2-TLHFS decision matrix H shown in Table 2. Because every attribute ${C_{j}}$ ($j=1,2,\dots ,n$) is a benefit attribute, the decision matrix H does not need to be normalized.
Table 2
2-TLHFSs of alternatives.
| | ${C_{1}}$ | ${C_{2}}$ | ${C_{3}}$ |
| ${A_{1}}$ | $\{(({s_{5}},0),0.1,0.2),(({s_{6}},0),0.4),(({s_{7}},0),0.3)\}$ | $\{(({s_{6}},0),0.4),(({s_{7}},0),0.2,0.3)\}$ | $\{(({s_{6}},0),0.2,0.4),(({s_{7}},0),0.3)\}$ |
| ${A_{2}}$ | $\{(({s_{5}},0),0.2,0.4),(({s_{6}},0),0.3,0.5)\}$ | $\{(({s_{7}},0),0.3,0.6),(({s_{8}},0),0.2)\}$ | $\{(({s_{6}},0),0.3,0.5,0.8)\}$ |
| ${A_{3}}$ | $\{(({s_{5}},0),0.2),(({s_{6}},0),0.3,0.5)\}$ | $\{(({s_{5}},0),0.3,0.5),(({s_{6}},0),0.2,0.3),(({s_{7}},0),0.1)\}$ | $\{(({s_{7}},0),0.3,0.5),(({s_{8}},0),0.1,0.3)\}$ |
Step 3: The weight vector is unknown, so we first determine it. Using the proposed approach, we obtain the following linear programming model for the optimal weight vector.
By solving the above model, we get the optimal weight vector $w=(0.35,0.3,0.35)$.
Step 4: Based on 2-TLHFEWA operator, we have (where $g=8$):
${\mathit{LH}_{1}}=\text{2-TLHFEWA}({\mathit{LH}_{11}},{\mathit{LH}_{12}},{\mathit{LH}_{13}})=\{(({s_{6}},-0.313),0.229,0.301,0.263,0.333),(({s_{6}},0.167),0.264,0.297),(({s_{6}},0.104),0.165,0.239,0.196,0.269,0.2,0.273,0.231,0.302),(({s_{7}},-0.494),0.201,0.232,0.236,0.266),(({s_{6}},0),0.333,0.4),(({s_{6}},0.422),0.366),(({s_{6}},0.367),0.273,0.343,0.302,0.371),(({s_{7}},-0.281),0.307,0.336),(({s_{6}},0.422),0.297,0.366),(({s_{7}},-0.237),0.331),(({s_{7}},-0.281),0.236,0.307,0.266,0.336),(({s_{7}},0),0.271,0.3)\}$.
${\mathit{LH}_{2}}=\text{2-TLHFEWA}({\mathit{LH}_{21}},{\mathit{LH}_{22}},{\mathit{LH}_{23}})=\{(({s_{6}},0.104),0.266,0.342,0.499,0.369,0.439,0.581,0.336,0.408,0.555,0.434,0.499,0.630),(({s_{8}},0),0.236,0.313,0.745,0.307,0.381,0.532),(({s_{6}},0.367),0.3,0.374,0.527,0.401,0.469,0.605,0.374,0.444,0.585,0.469,0.532,0.655),(({s_{8}},0),0.271,0.346,0.503,0.346,0.418,0.563)\}$.
${\mathit{LH}_{3}}=\text{2-TLHFEWA}({\mathit{LH}_{31}},{\mathit{LH}_{32}},{\mathit{LH}_{33}})=\{(({s_{6}},-0.08),0.266,0.342,0.331,0.404),(({s_{8}},0),0.196,0.266,0.264,0.331),(({s_{6}},0.167),0.236,0.313,0.266,0.342),(({s_{8}},0),0.165,0.236,0.196,0.266),(({s_{7}},-0.494),0.206,0.285),(({s_{8}},0),0.135,0.206),(({s_{6}},0.205),0.3,0.375,0.364,0.434,0.374,0.444,0.434,0.5),(({s_{8}},0),0.232,0.3,0.299,0.364,0.310,0.374,0.373,0.434),(({s_{6}},0.422),0.271,0.346,0.3,0.374,0.346,0.418,0.374,0.444),(({s_{8}},0),0.201,0.271,0.232,0.3,0.280,0.346,0.374,0.310),(({s_{7}},-0.281),0.242,0.319,0.319,0.392),(({s_{8}},0),0.172,0.242,0.252,0.319)\}$.
Step 5: By Definition 9, the expectation values $E({\mathit{LH}_{i}})$ $(i=1,2,3)$ are computed as follows:
Step 6: The ranking is ${A_{2}}>{A_{3}}>{A_{1}}$.
The two approaches produce different orderings, but the best choice in both cases is ${A_{2}}$.
The two comparisons above verify the validity of the newly proposed aggregation operators. The proposed MADM approach differs from existing approaches in four main respects.
Firstly, we proposed a new uncertain linguistic representation, 2-tuple linguistic hesitant fuzzy sets (2-TLHFSs), which reflect decision makers' uncertainty and hesitancy by providing several possible linguistic terms for a linguistic variable and several possible membership degrees for each linguistic term. 2-TLHFSs have a wider range of application and can express rather complex multi-attribute decision-making problems that existing linguistic variables cannot address.
Secondly, the 2-TLHFEWA and 2-TLHFEWG operators are based on the 2-tuple linguistic representation model, which makes the linguistic variables continuous and prevents information loss during aggregation. The operators are therefore efficient and avoid both information loss and lack of precision.
Thirdly, based on the Einstein t-norm and t-conorm, we propose new operational laws and the 2-TLHFEWA and 2-TLHFEWG operators. The new operational laws are closed and overcome granularity and logic problems. Compared with the many aggregation operators based on the algebraic t-norm and t-conorm, those based on the Einstein t-norm and t-conorm offer decision makers an alternative choice.
Finally, we propose a model for the situation where the weight information is unknown. The proposed model for the optimal weight vector is effective and takes both subjective and objective weight information into consideration.