1 Introduction
Decision making is a popular field of study in areas such as Operations Research, Management Science, Medical Science, and Data Mining. Multi-attribute decision making (MADM) refers to choosing an alternative from a finite set of alternatives. For solving MADM problems, there exist many well-known methods such as TOPSIS (Hwang and Yoon, 1981), VIKOR (Opricovic and Tzeng, 2004), PROMETHEE (Brans et al., 1986), ELECTRE (Roy, 1990), AHP (Saaty, 1980), DEMATEL (Gabus and Fontela, 1972), MULTIMOORA (Brauers and Zavadskas, 2006, 2010), TODIM (Gomes and Lima, 1992a, 1992b), WASPAS (Zavadskas et al., 2014), COPRAS (Zavadskas et al., 1994), EDAS (Keshavarz Ghorabaee et al., 2015), MAMVA (Kanapeckiene et al., 2011), and DNMA (Liao and Wu, 2019). Wu and Liao (2019) developed a consensus-based probabilistic linguistic gained and lost dominance score method for multi-criteria group decision making. Hafezalkotob et al. (2019) presented an overview of MULTIMOORA for multi-criteria decision making, covering theory, developments, applications, and challenges. Mi et al. (2019) surveyed the integrations and applications of the best worst method in decision making. Among these methods, the TOPSIS method has gained a lot of attention in the past decade, and many researchers have applied it to MADM problems in different environments (Zavadskas et al., 2016). In an uncertain environment, the weight information and attribute values of an MADM problem generally carry imprecise values, which are effectively dealt with by fuzzy sets (Zadeh, 1965), intuitionistic fuzzy sets (Atanassov, 1986), hesitant fuzzy sets (Torra, 2010), and neutrosophic sets (Smarandache, 1998). Chen (2000) introduced the TOPSIS method in the fuzzy environment and considered the rating values of alternatives and attribute weights in terms of triangular fuzzy numbers. Boran et al. (2009) extended the TOPSIS method for multi-criteria group decision making under intuitionistic fuzzy sets to solve a supplier selection problem. Ye (2010) extended the TOPSIS method with interval-valued intuitionistic fuzzy numbers. Xu and Zhang (2013) proposed a TOPSIS method for MADM under hesitant fuzzy sets with incomplete weight information. Fu and Liao (2019) developed a TOPSIS method for multi-expert qualitative decision making involving green mine selection under an unbalanced double hierarchy linguistic term set.
The neutrosophic set is a generalization of the fuzzy set, hesitant fuzzy set and intuitionistic fuzzy set. It has three membership functions: truth, falsity and indeterminacy membership functions. This set has been successfully applied to various decision making problems (Peng et al., 2014; Ye, 2014; Kahraman and Otay, 2019; Stanujkic et al., 2017). Biswas et al. (2016a) proposed a TOPSIS method for multi-attribute group decision making under the single valued neutrosophic environment. Biswas et al. (2019a) further extended the TOPSIS method using a non-linear programming approach to solve multi-attribute group decision making problems. Chi and Liu (2013) developed a TOPSIS method based on interval neutrosophic sets. Ye (2015a) extended the TOPSIS method to single valued linguistic neutrosophic numbers. Biswas et al. (2018) developed a TOPSIS method for single valued trapezoidal neutrosophic numbers, and Giri et al. (2018) proposed a TOPSIS method for interval trapezoidal neutrosophic numbers by considering unknown attribute weights.
In decision making problems, decision makers may sometimes hesitate to assign a single value for rating the alternatives due to doubt or incomplete information. Instead, they prefer to assign a set of possible values to represent the membership degree of an element to the set. To deal with this issue, Torra (2010) coined the idea of the hesitant fuzzy set, which is a generalization of the fuzzy set and the intuitionistic fuzzy set. Since then, the hesitant fuzzy set has been successfully applied to decision making problems (Xia and Xu, 2011; Rodriguez et al., 2012; Zhang and Wei, 2013). Xu and Xia (2011a, 2011b) proposed a variety of distance measures for hesitant fuzzy sets. Wei (2012) introduced hesitant fuzzy prioritized operators for solving MADM problems. Beg and Rashid (2013) proposed a TOPSIS method for MADM with hesitant fuzzy linguistic term sets. Liao and Xu (2015) developed approaches to manage hesitant fuzzy linguistic information based on cosine distance and similarity measures for HFLTSs and applied them in qualitative decision making. Joshi and Kumar (2016) introduced a Choquet integral based TOPSIS method for multi-criteria group decision making with interval valued intuitionistic hesitant fuzzy sets.
However, the hesitant fuzzy set cannot represent inconsistent, imprecise, inappropriate and incomplete information because it has only a truth hesitant membership degree to express the belongingness of an element to the set. To handle this problem, Ye (2015b) introduced single valued neutrosophic hesitant fuzzy sets (SVNHFS), which have three hesitant membership functions: truth, indeterminacy and falsity membership functions. Interval neutrosophic hesitant fuzzy sets (INHFS) (Liu and Shi, 2015), a generalization of SVNHFS, are also powerful in resolving difficulties in decision making problems. Ye (2016) developed correlation coefficients of interval neutrosophic hesitant fuzzy sets and applied them to an MADM method. SVNHFS and INHFS thus make it possible to handle uncertain, incomplete and inconsistent information in real-world decision making problems. Sahin and Liu (2017) defined a correlation coefficient of SVNHFS and applied it to decision making problems. Biswas et al. (2016b) proposed a GRA method for MADM with SVNHFS for known attribute weights. Ji et al. (2018) proposed a projection-based TODIM approach under multi-valued neutrosophic environments for a personnel selection problem. Biswas et al. (2019b) further extended the GRA method for solving MADM with SVNHFS and INHFS for partially known or unknown attribute weights.
Until now, little research has been done on the TOPSIS method for solving MADM under the SVNHFS and INHFS environments. In particular, the TOPSIS method has not yet been studied in the SVNHFS and INHFS environments for MADM problems in which the attribute weight information is incompletely known or completely unknown. Therefore, we have an opportunity to extend the TOPSIS method to deal with MADM problems with partially known or unknown weight information under the SVNHFS and INHFS environments, which can play an effective role in handling uncertain and indeterminate information in MADM problems.
In view of the above context, we have the following objectives in this study:
• To formulate an SVNHFS based MADM problem where the weight information is incompletely known or completely unknown.
• To determine the weights of attributes given in incompletely known or completely unknown form using the deviation method.
• To extend the TOPSIS method for solving an SVNHFS based MADM problem using the proposed optimization model.
• To further extend the proposed approach to the INHFS environment.
• To validate the proposed approach with two numerical examples.
• To compare the proposed method with some existing methods.
The remainder of this article is organized as follows. Section 2 gives preliminaries on the neutrosophic set, single valued neutrosophic set, interval neutrosophic set, hesitant fuzzy set, SVNHFS and INHFS. Section 2 also presents the score, accuracy and distance functions of SVNHFS and INHFS. Sections 3 and 4 develop the TOPSIS method for MADM under SVNHFS and INHFS, respectively. Section 5 presents two numerical examples to validate the proposed method and provides a comparative study between the proposed method and existing methods. Finally, conclusions and future research directions are given in Section 6.
3 TOPSIS Method for MADM with SVNHFS Information
In this section, we propose the TOPSIS method to find the best alternative in MADM with SVNHFSs. Let $A=\{{A_{1}},{A_{2}},\dots ,{A_{m}}\}$ be the discrete set of m alternatives and $C=\{{C_{1}},{C_{2}},\dots ,{C_{n}}\}$ be the set of n attributes of an SVNHFS based multi-attribute decision making problem. Also, assume that the rating value of the i-th alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ over the attribute ${C_{j}}$ $(j=1,2,\dots ,n)$ is given by the SVNHFS ${x_{ij}}=({t_{ij}},{i_{ij}},{f_{ij}})$, where ${t_{ij}}=\{{\gamma _{ij}}\mid {\gamma _{ij}}\in {t_{ij}},0\leqslant {\gamma _{ij}}\leqslant 1\}$, ${i_{ij}}=\{{\delta _{ij}}\mid {\delta _{ij}}\in {i_{ij}},0\leqslant {\delta _{ij}}\leqslant 1\}$ and ${f_{ij}}=\{{\eta _{ij}}\mid {\eta _{ij}}\in {f_{ij}},0\leqslant {\eta _{ij}}\leqslant 1\}$ indicate the possible truth, indeterminacy and falsity membership degrees of the i-th alternative ${A_{i}}$ over the j-th attribute ${C_{j}}$ for $i=1,2,\dots ,m$ and $j=1,2,\dots ,n$. Then we can construct an SVNHFS based decision matrix $X={({x_{ij}})_{m\times n}}$, whose entries are SVNHFSs, written as
Now, we extend the TOPSIS method for MADM in the single-valued neutrosophic hesitant fuzzy environment. Before discussing the details, we briefly outline the main steps of the proposed model. First, we consider the weights of attributes, which may be known, incompletely known or completely unknown. When the weights are known, we can directly employ them in the TOPSIS method with SVNHFSs. The problem arises in the latter two cases, because incomplete or unknown weights cannot be used directly in the TOPSIS method under the neutrosophic hesitant fuzzy environment. To deal with this issue, we develop optimization models that determine the exact weights of attributes using the maximum deviation method (Yingming, 1997). Following the TOPSIS method, we then determine the Hamming distance measure of each alternative from the positive and negative ideal solutions. Finally, we obtain the relative closeness coefficient of each alternative to determine the most preferred alternative.
We elaborate the following steps used in the proposed model.
Step 1: Determine the weights of attributes.
If the information about the attribute weights is completely known and is given as $w={({w_{1}},{w_{2}},\dots ,{w_{n}})^{T}}$, with ${w_{j}}\in [0,1]$ and ${\textstyle\sum _{j=1}^{n}}{w_{j}}=1$, then go to Step 2.
However, in real decision making, the information about the attribute weights is often incompletely known or completely unknown due to time pressure, lack of knowledge, or the decision makers' limited expertise in the problem domain. In this situation, when the attribute weights are partially known or completely unknown, we use the maximizing deviation method proposed by Yingming (1997) to deal with the MADM problem. For an MADM problem, Yingming suggested that an attribute with a larger deviation among the alternatives should be assigned a larger weight, an attribute with a smaller deviation should be assigned a smaller weight, and an attribute with no deviation should be assigned zero weight.
Now, we develop an optimization model based on the maximizing deviation method to determine the optimal relative weights of attributes under the SVNHF environment. For the attribute ${C_{j}}\in C$, the deviation of the alternative ${A_{i}}$ from all the other alternatives can be defined as
In Eq. (6), the Hamming distance $D({x_{ij}},{x_{kj}})$ is defined as
where
and ${l_{{t_{ij}}}}$, ${l_{{i_{ij}}}}$ and ${l_{{f_{ij}}}}$ denote the numbers of possible truth, indeterminacy and falsity membership values in ${x_{lj}}$ for $l=i,k$.
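For readers who want to experiment with the method numerically, the following is a minimal Python sketch of a Hamming-type distance between two SVNHF elements. It assumes the common conventions of sorting each value set, extending the shorter set by repeating its last value, and averaging the absolute differences over the truth, indeterminacy and falsity parts; whether this matches Eq. (7) exactly should be checked against the paper, so treat it as an illustration rather than the paper's definition.

```python
from typing import List, Tuple

# An SVNHF element: (truth, indeterminacy, falsity) value sets.
SVNHFE = Tuple[List[float], List[float], List[float]]

def _extend(a: List[float], b: List[float]) -> Tuple[List[float], List[float]]:
    # Sort both value sets and repeat the last value of the shorter one
    # so the two sets have equal length (a common convention; assumed here).
    a, b = sorted(a), sorted(b)
    length = max(len(a), len(b))
    return a + [a[-1]] * (length - len(a)), b + [b[-1]] * (length - len(b))

def hamming_distance(x: SVNHFE, y: SVNHFE) -> float:
    parts = []
    for xs, ys in zip(x, y):  # truth, indeterminacy and falsity parts in turn
        xs, ys = _extend(xs, ys)
        parts.append(sum(abs(u - v) for u, v in zip(xs, ys)) / len(xs))
    return sum(parts) / 3.0

# Example: distance between two entries of an SVNHF decision matrix.
d = hamming_distance(([0.3, 0.4, 0.5], [0.1], [0.3, 0.4]),
                     ([0.6, 0.7], [0.1, 0.2], [0.2, 0.3]))
```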
We now consider the deviation values of all the alternatives to the other alternatives for the attribute ${C_{j}}\in C$ $(j=1,2,\dots ,n)$:
The information about the attribute weights is incomplete.
In this case, we develop a model to determine the attribute weights. Suppose that the incomplete attribute weight information H is given in one of the following forms:
1. A weak ranking: $\{{w_{i}}\geqslant {w_{j}}\}$, $i\ne j$;
2. A strict ranking: $\{{w_{i}}-{w_{j}}\geqslant {\epsilon _{i}}(>0)\}$, $i\ne j$;
3. A ranking of difference: $\{{w_{i}}-{w_{j}}\geqslant {w_{k}}-{w_{p}}\}$, $i\ne j\ne k\ne p$;
4. A ranking with multiples: $\{{w_{i}}\geqslant {\alpha _{i}}{w_{j}}\}$, $0\leqslant {\alpha _{i}}\leqslant 1$, $i\ne j$;
5. An interval form: $\{{\beta _{i}}\leqslant {w_{i}}\leqslant {\beta _{i}}+{\epsilon _{i}}(>0)\}$, $0\leqslant {\beta _{i}}\leqslant {\beta _{i}}+{\epsilon _{i}}\leqslant 1$.
For these cases, we construct the following constrained optimization model based on the set of known weight information H:
By solving Model-1, we obtain the optimal solution $w={({w_{1}},{w_{2}},\dots ,{w_{n}})^{T}}$, which can be used as the weight vector of the attributes to proceed to Step 2.
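As an illustration of how Model-1 might be solved, the sketch below assumes the usual maximizing deviation form: maximize the weighted sum of attribute deviations subject to the weight set H, $\sum _{j}{w_{j}}=1$ and ${w_{j}}\geqslant 0$. Because this objective and the constraint forms (1)-(5) are linear in $w$, a linear programming solver suffices; the deviation values and the constraints shown here are hypothetical, not the paper's data.

```python
import numpy as np
from scipy.optimize import linprog

# d_j: total deviation of attribute j among the alternatives (assumed values).
d = np.array([0.90, 0.55, 1.10])

# linprog minimizes, so negate the objective to maximize sum_j d_j * w_j.
c = -d

# Hypothetical incomplete weight information H, written as A_ub @ w <= b_ub:
#   w_1 >= w_2  and  0.2 <= w_3 <= 0.5.
A_ub = np.array([
    [-1.0,  1.0,  0.0],   # w_2 - w_1 <= 0
    [ 0.0,  0.0, -1.0],   # -w_3 <= -0.2
    [ 0.0,  0.0,  1.0],   #  w_3 <=  0.5
])
b_ub = np.array([0.0, -0.2, 0.5])

A_eq = np.ones((1, 3))    # sum_j w_j = 1
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, 1.0)] * 3)
print("optimal weights:", res.x)
```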
The information about the attribute weights is completely unknown.
In this case, we develop the following non-linear programming model to select the weight vector $w$ that maximizes all the deviation values over all the attributes:
The Lagrange function corresponding to the above constrained optimization problem is given by
where λ is a real number denoting the Lagrange multiplier. The partial derivatives of L with respect to ${w_{j}}$ and λ are given by
It follows from Eq. (15) that
for $j=1,2,\dots ,n$.
Putting this value of ${w_{j}}$ in Eq. (16), we get
or
where $\lambda <0$ and
represents the sum of deviations of all the alternatives with respect to the j-th attribute, and
represents the sum of deviations of all the alternatives with respect to all the attributes.
Then, by combining Eqs. (17) and (19), we obtain the weight ${w_{j}}$ for $j=1,2,\dots ,n$ as
We normalize ${w_{j}}$ $(j=1,2,\dots ,n)$ so that the weights sum to unity, which gives the normalized weight of the j-th attribute:
and consequently, we obtain the weight vector of the attributes as
for proceeding to Step 2.
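For this completely unknown case, the normalized weights reduce to taking each attribute weight proportional to the total deviation that the attribute produces among the alternatives. A minimal sketch follows, with the SVNHF distance passed in as a callback so that the exact form of Eq. (7) is not assumed.

```python
from typing import Any, Callable, List

def deviation_weights(X: List[List[Any]],
                      distance: Callable[[Any, Any], float]) -> List[float]:
    """Weights of attributes by the maximizing deviation method (unknown weights)."""
    m, n = len(X), len(X[0])
    # Total deviation of each attribute j over all ordered pairs of alternatives.
    dev = [sum(distance(X[i][j], X[k][j]) for i in range(m) for k in range(m))
           for j in range(n)]
    total = sum(dev)
    # Normalization so that the weights sum to one (intended to mirror Eq. (21)).
    return [d / total for d in dev]
```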
Step 2: Determine the positive ideal alternative and the negative ideal alternative.
From the decision matrix $X={({x_{ij}})_{m\times n}}$, we can determine the single valued neutrosophic hesitant fuzzy positive ideal solution (SVNHFPIS) ${A^{+}}$ and the single valued neutrosophic hesitant fuzzy negative ideal solution (SVNHFNIS) ${A^{-}}$ of the alternatives as follows:
Here, we compare the attribute values ${x_{ij}}$ by using the score, accuracy and certainty values of SVNHFEs defined in Definition 7.
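A possible sketch of this comparison is given below. It assumes that the score function of Definition 7 takes the commonly used form $s(x)=(2+\bar{t}-\bar{i}-\bar{f})/3$, where $\bar{t}$, $\bar{i}$ and $\bar{f}$ are the average truth, indeterminacy and falsity values; the accuracy and certainty values, which would only be needed to break ties, are omitted for brevity.

```python
from typing import List, Tuple

SVNHFE = Tuple[List[float], List[float], List[float]]

def _mean(values: List[float]) -> float:
    return sum(values) / len(values)

def score(x: SVNHFE) -> float:
    # Assumed score form; check against Definition 7.
    t, i, f = x
    return (2.0 + _mean(t) - _mean(i) - _mean(f)) / 3.0

def ideal_solutions(X: List[List[SVNHFE]]) -> Tuple[List[SVNHFE], List[SVNHFE]]:
    n = len(X[0])
    positive = [max((row[j] for row in X), key=score) for j in range(n)]  # A+ per attribute
    negative = [min((row[j] for row in X), key=score) for j in range(n)]  # A- per attribute
    return positive, negative
```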
Step 3: Determine the distance measure from the ideal alternatives to each alternative.
In order to determine the distance measure between the positive ideal alternative ${A^{+}}$ and the alternative ${A_{i}}$, we use the following equation:
for $i=1,2,\dots ,m$. Similarly, we can determine the distance measure between the negative ideal alternative ${A^{-}}$ and the alternative ${A_{i}}$ by the following equation:
for $i=1,2,\dots ,m$.
Step 4: Determine the relative closeness coefficient.
We determine the relative closeness coefficient ${C_{i}}$ for each alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ with respect to the SVNHFPIS ${A^{+}}$ by using the following equation:
where $0\leqslant {C_{i}}\leqslant 1$ $(i=1,2,\dots ,m)$. We observe that an alternative ${A_{i}}$ is closer to the SVNHFPIS ${A^{+}}$ and farther from the SVNHFNIS ${A^{-}}$ as ${C_{i}}$ approaches unity.
Finally, we rank the alternatives in descending order of their relative closeness coefficient values to determine the best alternative from the set of feasible alternatives.
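The last two steps can be summarized in the following sketch, which assumes that Eqs. (24)-(26) use attribute-weighted Hamming distances and the closeness formula ${C_{i}}={D_{i}^{-}}/({D_{i}^{+}}+{D_{i}^{-}})$; all names here are illustrative rather than quoted from the paper.

```python
from typing import Any, Callable, List

def closeness(X: List[List[Any]], w: List[float],
              pos: List[Any], neg: List[Any],
              distance: Callable[[Any, Any], float]) -> List[float]:
    cc = []
    for row in X:
        d_pos = sum(wj * distance(xij, pj) for xij, pj, wj in zip(row, pos, w))  # D_i^+
        d_neg = sum(wj * distance(xij, nj) for xij, nj, wj in zip(row, neg, w))  # D_i^-
        cc.append(d_neg / (d_pos + d_neg))                                       # C_i
    return cc

def rank(cc: List[float]) -> List[int]:
    # Indices of the alternatives, best (largest closeness coefficient) first.
    return sorted(range(len(cc)), key=lambda i: cc[i], reverse=True)
```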
4 TOPSIS Method for MADM with INHFS Information
In this section, we further extend the proposed model into interval neutrosophic hesitant fuzzy environment.
For an MADM problem, let $A=({A_{1}},{A_{2}},\dots ,{A_{m}})$ be a set of alternatives, $C=({C_{1}},{C_{2}},\dots ,{C_{n}})$ be a set of attributes, and $\tilde{W}={({\tilde{w}_{1}},{\tilde{w}_{2}},\dots ,{\tilde{w}_{n}})^{T}}$ be the weight vector of the attributes such that ${\tilde{w}_{j}}\in [0,1]$ and ${\textstyle\sum _{j=1}^{n}}{\tilde{w}_{j}}=1$.
Suppose that $\tilde{X}={({\tilde{x}_{ij}})_{m\times n}}$ is the decision matrix, where ${\tilde{x}_{ij}}=({\tilde{t}_{ij}},{\tilde{i}_{ij}},{\tilde{f}_{ij}})$ is the INHFS rating of the alternative ${A_{i}}$ with respect to the attribute ${C_{j}}$, and ${\tilde{t}_{ij}}$, ${\tilde{i}_{ij}}$ and ${\tilde{f}_{ij}}$ are the truth, indeterminacy and falsity membership degrees, respectively. The decision matrix is given by
Now, we develop the TOPSIS method based on INHFS when the attribute weights are completely known, partially known or completely unknown.
Step 1: Determine the weights of the attributes.
We suppose that the attribute weights may be completely known, partially known or completely unknown. We use the maximum deviation method when the attribute weights are partially known or completely unknown.
The information of attribute weights is completely known
Assume that the attribute weights are given as $\tilde{w}={({\tilde{w}_{1}},{\tilde{w}_{2}},\dots ,{\tilde{w}_{n}})^{T}}$ with ${\tilde{w}_{j}}\in [0,1]$ and ${\textstyle\sum _{j=1}^{n}}{\tilde{w}_{j}}=1$, and then go to Step 2.
For partially known or completely unknown attribute weights, we calculate the deviation values of the alternative ${A_{i}}$ from the other alternatives under the attribute ${C_{j}}$ as follows:
Using Eq. (7), the Hamming distance $\tilde{D}({\tilde{x}_{ij}},{\tilde{x}_{kj}})$ is obtained as
where
and ${l_{{\tilde{t}_{ij}}}}$, ${l_{{\tilde{i}_{ij}}}}$ and ${l_{{\tilde{f}_{ij}}}}$ denote the numbers of possible truth, indeterminacy and falsity membership values in ${\tilde{x}_{lj}}$ for $l=i,k$.
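An interval-valued analogue of the earlier distance sketch can be written in the same way: each membership value is now an interval, and the absolute differences of the lower and upper endpoints are averaged. This follows the usual interval extension and is an assumption about the exact form of the distance used in this section.

```python
from typing import List, Tuple

Interval = Tuple[float, float]
# An INHF element: (truth, indeterminacy, falsity) sets of intervals.
INHFE = Tuple[List[Interval], List[Interval], List[Interval]]

def _extend(a: List[Interval], b: List[Interval]) -> Tuple[List[Interval], List[Interval]]:
    # Sort and pad the shorter set with its last interval (assumed convention).
    a, b = sorted(a), sorted(b)
    length = max(len(a), len(b))
    return a + [a[-1]] * (length - len(a)), b + [b[-1]] * (length - len(b))

def interval_hamming(x: INHFE, y: INHFE) -> float:
    parts = []
    for xs, ys in zip(x, y):  # truth, indeterminacy and falsity parts
        xs, ys = _extend(xs, ys)
        parts.append(sum(abs(u[0] - v[0]) + abs(u[1] - v[1])
                         for u, v in zip(xs, ys)) / (2 * len(xs)))
    return sum(parts) / 3.0
```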
The deviation values of all the alternatives to the other alternatives for the attribute ${C_{j}}$ $(j=1,2,\dots ,n)$ can be obtained as follows:
The information of attribute weights is partially known
In this case, we construct the following non-linear programming model (Model-3) to calculate the attribute weights:
where $\tilde{H}$ is the set of partially known weight information.
Solving Model-3, we can get the optimal attribute weight vector.
The information of attribute weights is completely unknown
In this case, we consider the following model:
The Lagrangian function corresponding to the above non-linear programming problem is given by
where $\tilde{\lambda }$ is the Lagrange multiplier. Then the partial derivatives of $\tilde{L}$ are computed as
It follows from Eq. (34) that the weight ${\tilde{w}_{j}}$ for $j=1,2,\dots ,n$ is
Putting ${\tilde{w}_{j}}$ in Eq. (35), we get
or
where $\tilde{\lambda }<0$, ${\textstyle\sum _{i=1}^{m}}{\textstyle\sum _{k=1}^{m}}(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}}))$ represents the sum of deviations of all the alternatives with respect to the j-th attribute, and ${\textstyle\sum _{j=1}^{n}}{({\textstyle\sum _{i=1}^{m}}{\textstyle\sum _{k=1}^{m}}(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})))^{2}}$ represents the sum of the squared deviations of all the alternatives over all the attributes.
Then, combining Eqs. (36) and (38), we obtain the weight ${\tilde{w}_{j}}$ $(j=1,2,\dots ,n)$ as
We normalize ${\tilde{w}_{j}}$ $(j=1,2,\dots ,n)$ so that the weights sum to unity, which gives the normalized weight of the j-th attribute:
and consequently, we obtain the weight vector of the attributes as
for proceeding to Step 2.
Step 2: Determine the positive ideal alternative and the negative ideal alternative.
From the decision matrix $\tilde{X}={({\tilde{x}_{ij}})_{m\times n}}$, we determine the interval neutrosophic hesitant fuzzy positive ideal solution (INHFPIS) ${\tilde{A}^{+}}$ and the interval neutrosophic hesitant fuzzy negative ideal solution (INHFNIS) ${\tilde{A}^{-}}$ of the alternatives as follows:
Here, we compare the attribute values ${\tilde{x}_{ij}}$ by using the score, accuracy and certainty values of INHFSs defined in Definition 9.
Step 3: Determine the distance measure from the ideal alternatives to each alternative.
We determine the distance measure between the positive ideal alternative ${\tilde{A}^{+}}$ and the alternative ${A_{i}}$ as follows:
for $i=1,2,\dots ,m$. Similarly, we determine the distance measure between the negative ideal alternative ${\tilde{A}^{-}}$ and the alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ as follows:
Step 4: Determine the relative closeness coefficient.
In this step, we calculate the relative closeness coefficient ${\tilde{C}_{i}}$ for each alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ with respect to the INHFPIS ${\tilde{A}^{+}}$ as given below:
where $0\leqslant {\tilde{C}_{i}}\leqslant 1$ $(i=1,2,\dots ,m)$. We observe that the alternative ${A_{i}}$ is closer to the INHFPIS ${\tilde{A}^{+}}$ and farther from the INHFNIS ${\tilde{A}^{-}}$ as ${\tilde{C}_{i}}$ approaches unity.
Finally, we rank the alternatives in descending order of their relative closeness coefficient values to choose the best alternative from the set of feasible alternatives.
We briefly present the steps of the proposed strategies in Fig. 1.
Fig. 1.
The schematic diagram of the proposed method.
5 Numerical Examples
In this section, we consider two examples to illustrate the utility of the proposed method for the single valued neutrosophic hesitant fuzzy set (SVNHFS) and the interval neutrosophic hesitant fuzzy set (INHFS).
5.1 Example for SVNHFS
Suppose that an investment company wants to invest a sum of money in the following four alternatives:
• car company $({A_{1}})$;
• food company $({A_{2}})$;
• computer company $({A_{3}})$;
• arms company $({A_{4}})$.
The company considers the following three attributes to make the decision:
• risk analysis $({C_{1}})$;
• growth analysis $({C_{2}})$;
• environment impact analysis $({C_{3}})$.
We assume the rating values of the alternatives ${A_{i}}$ $(i=1,2,3,4)$ with respect to the attributes ${C_{j}}$ $(j=1,2,3)$ and obtain the SVNHFS decision matrix presented in Table 1. The steps to find the best alternative are as follows:
Table 1
Single valued neutrosophic hesitant fuzzy decision matrix.
| | ${C_{1}}$ | ${C_{2}}$ | ${C_{3}}$ |
| ${A_{1}}$ | $\langle \{0.3,0.4,0.5\},\{0.1\},\{0.3,0.4\}\rangle $ | $\langle \{0.5,0.6\},\{0.2,0.3\},\{0.3,0.4\}\rangle $ | $\langle \{0.2,0.3\},\{0.1,0.2\},\{0.5,0.6\}\rangle $ |
| ${A_{2}}$ | $\langle \{0.6,0.7\},\{0.1,0.2\},\{0.2,0.3\}\rangle $ | $\langle \{0.6,0.7\},\{0.1\},\{0.3\}\rangle $ | $\langle \{0.6,0.7\},\{0.1,0.2\},\{0.1,0.2\}\rangle $ |
| ${A_{3}}$ | $\langle \{0.5,0.6\},\{0.4\},\{0.2,0.3\}\rangle $ | $\langle \{0.6\},\{0.3\},\{0.4\}\rangle $ | $\langle \{0.5,0.6\},\{0.1\},\{0.3\}\rangle $ |
| ${A_{4}}$ | $\langle \{0.7,0.8\},\{0.1\},\{0.1,0.2\}\rangle $ | $\langle \{0.6,0.7\},\{0.1\},\{0.2\}\rangle $ | $\langle \{0.3,0.5\},\{0.2\},\{0.1,0.2,0.3\}\rangle $ |
Step 1: Determine the weights of attributes.
There are three cases for attribute weights:
Case 1: When the attribute weights are completely known, let the weight vector be ${w^{N}}=(0.35,0.25,0.40)$.
Case 2: When the attribute weights are partially known, the weight information is as follows:
Using Model-1, we get the single objective programming problem as
Solving this problem with the optimization software LINGO 11, we get the optimal weight vector as ${w^{N}}=(0.35,0.20,0.45)$.
Case 3: When the attribute weights are completely unknown, using Model-2 and Eqs. (20) and (21), we obtain the following weight vector:
Step 2: Determine the positive ideal alternative and the negative ideal alternative.
In this step, we calculate the positive and the negative ideal solutions from Eqs. (22) and (23), respectively.
Step 3: Determine the distance measure from the ideal alternatives to each alternative.
In this step, we determine the distance measures from the positive and the negative ideal solutions using Eqs. (24) and (25), as given in Tables 2 and 3.
Table 2
Distance measure from the positive ideal solution.
| ${D^{+}}({A_{i}})$ | Case 1 | Case 2 | Case 3 |
| ${D_{1}^{+}}$ | 0.210 | 0.210 | 0.201 |
| ${D_{2}^{+}}$ | 0.037 | 0.035 | 0.037 |
| ${D_{3}^{+}}$ | 0.140 | 0.145 | 0.148 |
| ${D_{4}^{+}}$ | 0.046 | 0.052 | 0.044 |
Table 3
Distance measure from the negative ideal solution.
| ${D^{-}}({A_{i}})$ | Case 1 | Case 2 | Case 3 |
| ${D_{1}^{-}}$ | 0.180 | 0.164 | 0.198 |
| ${D_{2}^{-}}$ | 0.176 | 0.183 | 0.173 |
| ${D_{3}^{-}}$ | 0.120 | 0.115 | 0.102 |
| ${D_{4}^{-}}$ | 0.181 | 0.182 | 0.180 |
Step 4: Determine the relative closeness coefficient.
We now calculate the relative closeness coefficients from Eq. (26); the results are shown in Table 4.
Table 4
Relative closeness coefficient.
| $RC({A_{i}})$ | Case 1 | Case 2 | Case 3 |
| $RC({A_{1}})$ | 0.461 | 0.438 | 0.496 |
| $RC({A_{2}})$ | 0.826 | 0.839 | 0.823 |
| $RC({A_{3}})$ | 0.462 | 0.451 | 0.408 |
| $RC({A_{4}})$ | 0.796 | 0.778 | 0.800 |
Step 5: Rank the alternatives.
From Table 4, the ranks of the alternatives are determined as follows:
The above shows that ${A_{2}}$ is the best alternative for all cases.
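As a small usage illustration, the ranking can be reproduced directly from the relative closeness coefficients of Table 4; sorting the Case 1 values in descending order gives ${A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}}$.

```python
# Relative closeness coefficients from Table 4, Case 1.
rc = {"A1": 0.461, "A2": 0.826, "A3": 0.462, "A4": 0.796}

# Rank the alternatives in descending order of closeness coefficient.
ranking = sorted(rc, key=rc.get, reverse=True)
print(" > ".join(ranking))  # A2 > A4 > A3 > A1
```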
Step 6: End.
5.2 Example for INHFS
In order to demonstrate the proposed method for INHFS, we consider the same numerical example as for SVNHFS, but the rating values of the attributes are INHFSs. The INHFS based decision matrix is presented in Table 5.
Step 1: Determine the weights of attributes.
Table 5
Interval neutrosophic hesitant fuzzy decision matrix.
| | ${C_{1}}$ | ${C_{2}}$ | ${C_{3}}$ |
| ${A_{1}}$ | $\left\{\begin{array}{c}\{[0.3,0.4],[0.4,0.5]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.3,0.4]\}\end{array}\right\}$ | $\left\{\begin{array}{c}\{[0.4,0.5],[0.5,0.6]\}\\ {} \{[0.2,0.3]\}\\ {} \{[0.3,0.3],[0.3,0.4]\}\end{array}\right\}$ | $\left\{\begin{array}{c}\{[0.3,0.5]\}\\ {} \{[0.2,0.3]\}\\ {} \{[0.1,0.2],[0.3,0.3]\}\end{array}\right\}$ |
| ${A_{2}}$ | $\left\{\begin{array}{c}\{[0.6,0.7]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.1,0.2],[0.2,0.3]\}\end{array}\right\}$ | $\left\{\begin{array}{c}\{[0.6,0.7]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.2,0.3]\}\end{array}\right\}$ | $\left\{\begin{array}{c}\{[0.6,0.7]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.1,0.2]\}\end{array}\right\}$ |
| ${A_{3}}$ | $\left\{\begin{array}{c}\{[0.3,0.4],[0.5,0.6]\}\\ {} \{[0.2,0.4]\}\\ {} \{[0.2,0.3]\}\end{array}\right\}$ | $\left\{\begin{array}{c}\{[0.6,0.7]\}\\ {} \{[0.0,0.1]\}\\ {} \{[0.2,0.3]\}\end{array}\right\}$ | $\left\{\begin{array}{c}\{[0.5,0.6]\}\\ {} \{[0.1,0.2],[0.2,0.3]\}\\ {} \{[0.2,0.3]\}\end{array}\right\}$ |
| ${A_{4}}$ | $\left\{\begin{array}{c}\{[0.7,0.8]\}\\ {} \{[0.0,0.1]\}\\ {} \{[0.1,0.2]\}\end{array}\right\}$ | $\left\{\begin{array}{c}\{[0.5,0.6]\}\\ {} \{[0.2,0.3]\}\\ {} \{[0.3,0.4]\}\end{array}\right\}$ | $\left\{\begin{array}{c}\{[0.2,0.3]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.4,0.5],[0.5,0.6]\}\end{array}\right\}$ |
Here, we consider completely known, partially known and completely unknown attribute weights in three cases.
Case 1: When the attribute weights are known in advance, let the weight vector be
Case 2: When the attribute weights are partially known, the weight information is as follows:
Now, with the help of Model-3, we consider the following optimization problem:
Solving this problem with optimization software LINGO 11, we get the optimal weight vector as
Case 3: When the attribute weights are completely unknown, using Model-2 and Eqs. (39) and (40), we obtain the following weight vector:
Step 2: Determine the positive ideal alternative and the negative ideal alternative.
In this step, we calculate the positive and the negative ideal solutions, where the positive ideal solution is the best solution and the negative ideal solution is the worst solution. From Eqs. (22) and (23), we get
Step 3: Determine the distance measure from the ideal alternatives to each alternative.
In this step, using Eqs. (43) and (44), we determine the distance measures from the positive ideal solution and the negative ideal solution, as given in Tables 6 and 7, respectively.
Table 6
Distance measure from the positive ideal solution.
| ${\tilde{D}^{+}}({A_{i}})$ | Case 1 | Case 2 | Case 3 |
| ${\tilde{D}_{1}^{+}}$ | 0.164 | 0.168 | 0.167 |
| ${\tilde{D}_{2}^{+}}$ | 0.032 | 0.035 | 0.037 |
| ${\tilde{D}_{3}^{+}}$ | 0.102 | 0.113 | 0.104 |
| ${\tilde{D}_{4}^{+}}$ | 0.146 | 0.139 | 0.129 |
Step 4: Determine the relative closeness coefficient.
Table 7
Distance measure from the negative ideal solution.
| ${\tilde{D}^{-}}({A_{i}})$ | Case 1 | Case 2 | Case 3 |
| ${\tilde{D}_{1}^{-}}$ | 0.078 | 0.079 | 0.063 |
| ${\tilde{D}_{2}^{-}}$ | 0.179 | 0.180 | 0.168 |
| ${\tilde{D}_{3}^{-}}$ | 0.155 | 0.153 | 0.148 |
| ${\tilde{D}_{4}^{-}}$ | 0.071 | 0.080 | 0.082 |
We now calculate the relative closeness coefficients from Eq. (45). The results are shown in Table 8.
Step 5: Rank the alternatives.
Table 8
Relative closeness coefficient.
| $\tilde{RC}({A_{i}})$ | Case 1 | Case 2 | Case 3 |
| $\tilde{RC}({A_{1}})$ | 0.322 | 0.312 | 0.273 |
| $\tilde{RC}({A_{2}})$ | 0.848 | 0.837 | 0.819 |
| $\tilde{RC}({A_{3}})$ | 0.603 | 0.576 | 0.587 |
| $\tilde{RC}({A_{4}})$ | 0.327 | 0.365 | 0.389 |
From Table 8, we obtain the ranks of the alternatives as follows:
The above shows that ${A_{2}}$ is the best alternative for all cases.
Step 6: End.
5.3 Comparative Analysis and Discussion
We divide this section into two parts. Firstly, we compare our proposed method with the existing methods for multi-attribute decision making under SVNHFS and then for INHFS.
5.3.1 SVNHFS
Ye (2015b) developed a method to find the best alternative in the single valued neutrosophic hesitant fuzzy environment, and Sahin and Liu (2017) proposed a correlation coefficient of single valued neutrosophic hesitant fuzzy sets for MADM. The rankings of the alternatives obtained by these existing methods and by our proposed method are shown in Table 9. When the attribute weights are known in advance, the three methods produce the same ranking. However, when the attribute weights are partially known or completely unknown, the two existing methods are not applicable.
Table 9
A comparison of the results under SVNHFS.
| Methods | Type of weight information | Ranking result |
| Ye's (2015b) method | Completely known | ${A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}}$ |
| Sahin and Liu's (2017) method | Completely known | ${A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}}$ |
| Proposed method | Completely known | ${A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}}$ |
| Ye's (2015b) method | Partially known | Not applicable |
| Sahin and Liu's (2017) method | Partially known | Not applicable |
| Proposed method | Partially known | ${A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}}$ |
| Ye's (2015b) method | Completely unknown | Not applicable |
| Sahin and Liu's (2017) method | Completely unknown | Not applicable |
| Proposed method | Completely unknown | ${A_{2}}\succ {A_{4}}\succ {A_{1}}\succ {A_{3}}$ |
5.3.2 INHFS
Liu and Shi (2015) proposed an MADM method to find the best alternative in the interval neutrosophic hesitant fuzzy environment. Table 10 shows a comparison between Liu and Shi's (2015) method and our proposed method.
Table 10
A comparison of the results under INHFS.
| Methods | Type of weight information | Ranking result |
| Liu and Shi's (2015) method | Completely known | ${A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$ |
| Proposed method | Completely known | ${A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$ |
| Liu and Shi's (2015) method | Partially known | Not applicable |
| Proposed method | Partially known | ${A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$ |
| Liu and Shi's (2015) method | Completely unknown | Not applicable |
| Proposed method | Completely unknown | ${A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$ |
The advantages of the proposed method for SVNHFS and INHFS are as follows:
• The existing methods are developed based on aggregation operators, correlation coefficients and hybrid weighted operators, whereas our proposed method is developed on the basis of the maximum deviation method.
• The proposed method offers a more flexible choice of weight information because it is also applicable to partially known and completely unknown weight information.