Informatica

NH-MADM Strategy in Neutrosophic Hesitant Fuzzy Set Environment Based on Extended GRA
Volume 30, Issue 2 (2019), pp. 213–242
Pranab Biswas   Surapati Pramanik   Bibhas C. Giri  

https://doi.org/10.15388/Informatica.2019.204
Pub. online: 1 January 2019      Type: Research Article      Open Access

Received
1 August 2018
Accepted
1 January 2019
Published
1 January 2019

Abstract

The neutrosophic hesitant fuzzy set (NHFS) is a convincing tool for dealing with uncertain information. In this paper, we propose an NH-MADM strategy, based on extended GRA, for solving MADM problems with NHFSs. We assume that the weight information of the attributes is partially known or completely unknown, and we develop two models to determine the attribute weights. We then rank the alternatives with the proposed strategy. Further, we extend the strategy to MADM in the interval neutrosophic hesitant fuzzy set environment, which we call the INH-MADM strategy. Finally, we provide two illustrative examples to show the validity and effectiveness of the proposed strategies.

1 Introduction

Multiple attribute decision making (MADM) is the process of finding, from a finite set of alternatives characterized by multiple attributes, the best alternative with the highest degree of satisfaction. Because of the uncertainty and vagueness of human thinking, decision makers often express preference values in terms of the fuzzy set (Zadeh, 1965), hesitant fuzzy set (Torra, 2010), intuitionistic fuzzy set (Atanassov, 1986), Pythagorean fuzzy set (Yager, 2014), etc. However, these sets cannot properly express the incomplete, indeterminate and inconsistent information that generally occurs in MADM under an uncertain environment. Smarandache (1998) introduced the neutrosophic set, which can express such information with its three independent membership degrees: truth membership, indeterminacy membership and falsity membership. Since then, many researchers have successfully employed neutrosophic sets in MADM problems to develop several sophisticated strategies such as TOPSIS (Biswas et al., 2016a, 2018, 2019; Mondal et al., 2016; Ye, 2015a; Peng and Dai, 2018), AHP (Abdel-Basset et al., 2017, 2018a), COPRAS (Baušys et al., 2015; Şahin, 2019), DEMATEL (Abdel-Basset et al., 2018b), VIKOR (Baušys and Zavadskas, 2015; Liu and Zhang, 2015; Pramanik and Dalapati, 2018; Pramanik et al., 2018a, 2018b), MULTIMOORA (Stanujkic et al., 2017; Tian et al., 2017), TODIM (Ji et al., 2018a; Pramanik et al., 2017a, 2018c), WASPAS (Zavadskas et al., 2015; Nie et al., 2017), MAMVA (Zavadskas et al., 2017), ELECTRE (Peng et al., 2014; Zhang et al., 2016), similarity measure strategies (Pramanik et al., 2017b; Mondal et al., 2018a, 2018b; Ye, 2014a; Pramanik et al., 2017c, 2018d), aggregation operator strategies (Liu et al., 2016; Peng et al., 2015; Ye, 2014b; Ji et al., 2018b; Liu and Wang, 2016; Biswas et al., 2016c; Mondal et al., 2018c, 2019), cross entropy (Dalapati et al., 2017; Pramanik et al., 2018e, 2018f), etc.
Grey relational analysis (GRA) (Deng, 1989), a part of grey system theory, is another effective tool that has been successfully applied to a variety of MADM problems (Zhang et al., 2005; Wei, 2010, 2011; Wei et al., 2011; Zhang and Liu, 2011; Pramanik and Mukhopadhyaya, 2011; Zhang et al., 2013). Recently, the neutrosophic set has caught the attention of researchers for solving MADM using the GRA strategy (Biswas et al., 2014a, 2014b; Pramanik and Mondal, 2015; Dey et al., 2016a, 2016b; Banerjee et al., 2017).
However, when decision makers are in doubt, they often hesitate to assign a single value for rating alternatives; instead, they prefer to assign a set of possible values. To deal with this issue, Torra (2010) introduced the hesitant fuzzy set, which permits the membership degree of an element to a given set to be represented by a set of possible numerical values in $[0,1]$. This set, an extension of the fuzzy set, is useful for handling uncertain information in the MADM process. Xia and Xu (2011) proposed some aggregation operators for hesitant fuzzy information and applied these operators to solve MADM. Wei (2012) studied hesitant fuzzy MADM by developing some prioritized aggregation operators for hesitant fuzzy information. Xu and Zhang (2013) developed a TOPSIS method for hesitant fuzzy MADM with incomplete weight information. Li (2014) extended the MULTIMOORA method for multiple criteria group decision making with hesitant fuzzy sets. Mu et al. (2015) investigated a novel aggregation principle for hesitant fuzzy elements.
In hesitant fuzzy MADM, the decision maker does not consider the non-membership degree for rating alternatives. However, this degree is equally important for expressing imprecise information. Zhu et al. (2012) presented the idea of the dual hesitant fuzzy set, in which both membership degrees and non-membership degrees take the form of sets of values in $[0,1]$. Ye (2014c) and Chen et al. (2014) proposed correlation methods of dual hesitant fuzzy sets to solve MADM with hesitant fuzzy information. Singh (2017) defined some distance and similarity measures for dual hesitant fuzzy sets and utilized these measures in MADM.
The dual hesitant fuzzy set cannot properly capture indeterminate information in a decision making situation. Because of the inherent neutrosophic nature of human preferences, the rating values of alternatives and/or weights of attributes involved in MADM problems are generally uncertain, imprecise, incomplete and inconsistent. Ye (2015b) introduced the single-valued neutrosophic hesitant fuzzy set (SVNHFS) by coordinating the hesitant fuzzy set and the single-valued neutrosophic set. An SVNHFS is characterized by truth hesitancy, indeterminacy hesitancy and falsity hesitancy membership functions, which are independent in nature. Therefore, an SVNHFS can express the three kinds of hesitancy information existing in MADM problems. Ye (2015b) developed single valued neutrosophic hesitant fuzzy weighted averaging and weighted geometric operators for SVNHFS information and applied these two operators to MADM problems. Şahin and Liu (2017) defined the correlation coefficient between SVNHFSs and used it in MADM. Biswas et al. (2016a) proposed a GRA strategy for solving MADM in the SVNHFS environment. Wang and Li (2018) proposed generalized SVNHF prioritized aggregation operators to solve MADM problems. Li and Zhang (2018) developed SVNHF-based Choquet aggregation operators for solving MADM problems.
Liu and Shi (2015) introduced the interval neutrosophic hesitant fuzzy set (INHFS), which consists of three membership hesitancy functions: the truth, the indeterminacy and the falsity. The three membership functions of an element to a given set are individually expressed by a set of interval values contained in $[0,1]$. Liu and Shi (2015) proposed a hybrid weighted average operator for interval neutrosophic hesitant fuzzy sets and utilized the operator in MADM. Ye (2016) put forward correlation coefficients of INHFSs and applied them to solve MADM problems. Biswas (2018) proposed an extended GRA strategy for solving MADM in a neutrosophic hesitant fuzzy environment with incomplete weight information. We observe that both SVNHFSs and INHFSs can be considered in many practical MADM problems in which the decision maker has to make a decision in a neutrosophic hesitant fuzzy environment.
Until now, only a few studies in the literature have dealt with GRA strategies for MADM under SVNHFS and INHFS environments. Therefore, we have an opportunity to extend traditional methods or to propose new GRA-based methods to deal with MADM in a neutrosophic hesitant fuzzy environment.
In this study, our objectives are as follows:
  • 1. To present the idea of the MADM problem in SVNHFS and INHFS environments, where the preference values of alternatives are given as either SVNHFSs or INHFSs and the weight information of attributes is assumed to be completely known, incompletely known, or completely unknown.
  • 2. To develop some optimization models for determining weight information of attributes, when attributes’ weights are incompletely known, or completely unknown.
  • 3. To propose GRA based strategy for handling MADM problem under SVNHFS and INHFS environments.
  • 4. Finally, to present illustrative examples, one for SVNHFS and the other for INHFS, to show the feasibility and effectiveness of the proposed strategies.
The rest of the paper is organized as follows: Section 2 presents some basic concepts related to the single valued neutrosophic set, interval neutrosophic set, hesitant fuzzy set, SVNHFS and INHFS. In Section 3, we define the score function, accuracy function and Hamming distance measure for SVNHFSs and INHFSs. We propose the NH-MADM strategy in the SVNHFS environment in Section 4 and the INH-MADM strategy in the INHFS environment in Section 5. In Section 6, we illustrate the proposed NH-MADM and INH-MADM strategies with two examples. Finally, in Section 7, we present some concluding remarks and future research directions.

2 Preliminaries

In this section, we review some basic definitions regarding single valued neutrosophic set, interval neutrosophic set, hesitant fuzzy set, SVNHFS and INHFS.

2.1 Neutrosophic Set

2.1.1 Single Valued Neutrosophic Set

Definition 1.
(See Wang et al., 2010.) A single valued neutrosophic set A in a universe of discourse $X=({x_{1}},{x_{2}},\dots ,{x_{n}})$ is defined by
\[ A=\big\{\big\langle {T_{A}}(x),{I_{A}}(x),{F_{A}}(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where the functions ${T_{A}}(x)$, ${I_{A}}(x)$ and ${F_{A}}(x)$, respectively, denote the truth, indeterminacy and falsity membership functions of $x\in X$ to the set A, with the conditions
\[\begin{array}{l}\displaystyle 0\leqslant {T_{A}}(x)\leqslant 1,\hspace{1em}0\leqslant {I_{A}}(x)\leqslant 1,\hspace{1em}0\leqslant {F_{A}}(x)\leqslant 1,\\ {} \displaystyle 0\leqslant {T_{A}}(x)+{I_{A}}(x)+{F_{A}}(x)\leqslant 3.\end{array}\]

2.1.2 Interval Neutrosophic Set

Definition 2.
(See Wang et al., 2005.) Let X be a non-empty finite set. Let $D[0,1]$ be the set of all closed sub intervals of the unit interval $[0,1]$. An interval neutrosophic set $\tilde{A}$ in X is an object having the form:
\[ \tilde{A}=\big\{\big\langle x,{T_{\tilde{A}}}(x),{I_{\tilde{A}}}(x),{F_{\tilde{A}}}(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where ${T_{\tilde{A}}}:X\to D[0,1]$, ${I_{\tilde{A}}}:X\to D[0,1]$, ${F_{\tilde{A}}}:X\to D[0,1]$ with the condition $0\leqslant {T_{\tilde{A}}}(x)+{I_{\tilde{A}}}(x)+{F_{\tilde{A}}}(x)\leqslant 3$ for any $x\in X$. The intervals ${T_{\tilde{A}}}(x)$, ${I_{\tilde{A}}}(x)$ and ${F_{\tilde{A}}}(x)$ denote, respectively, the truth, the indeterminacy and the falsity membership degree of x to $\tilde{A}$. Then for each $x\in X$ the lower and upper limit points of the closed intervals ${T_{\tilde{A}}}(x)$, ${I_{\tilde{A}}}(x)$ and ${F_{\tilde{A}}}(x)$ are denoted by ${T_{\tilde{A}}^{L}}(x)$, ${T_{\tilde{A}}^{U}}(x)$, ${I_{\tilde{A}}^{L}}(x)$, ${I_{\tilde{A}}^{U}}(x)$, ${F_{\tilde{A}}^{L}}(x)$, ${F_{\tilde{A}}^{U}}(x)$, respectively. Thus $\tilde{A}$ can also be presented in the following form:
\[ \tilde{A}=\big\{\big\langle x,\big[{T_{\tilde{A}}^{L}}(x),{T_{\tilde{A}}^{U}}(x)\big],\big[{I_{\tilde{A}}^{L}}(x),{I_{\tilde{A}}^{U}}(x)\big],\big[{F_{\tilde{A}}^{L}}(x),{F_{\tilde{A}}^{U}}(x)\big]\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where $0\leqslant {T_{\tilde{A}}^{U}}(x)+{I_{\tilde{A}}^{U}}(x)+{F_{\tilde{A}}^{U}}(x)\leqslant 3$ for any $x\in X$. For convenience of notation, we consider that $\tilde{A}=\langle [{T_{\tilde{A}}^{L}},{T_{\tilde{A}}^{U}}],[{I_{\tilde{A}}^{L}},{I_{\tilde{A}}^{U}}],[{F_{\tilde{A}}^{L}},{F_{\tilde{A}}^{U}}]\rangle $ as an interval neutrosophic set, where $0\leqslant {T_{\tilde{A}}^{U}}+{I_{\tilde{A}}^{U}}+{F_{\tilde{A}}^{U}}\leqslant 3$ for any $x\in X$.
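The bracketed form above can be modelled directly. The following is an illustrative Python sketch (the function name and representation are ours, not from the paper): an interval neutrosophic value is stored as three $[L,U]$ pairs, and the constraint of Definition 2 is checked on the upper limits.

```python
# An interval neutrosophic value as three [lower, upper] pairs in [0, 1],
# with the constraint of Definition 2 checked on the upper limit points.

def is_valid_inv(T, I, F):
    """Check T, I, F = [L, U] pairs: each inside [0, 1] and T_U + I_U + F_U <= 3."""
    intervals = (T, I, F)
    in_range = all(0 <= lo <= hi <= 1 for lo, hi in intervals)
    return in_range and T[1] + I[1] + F[1] <= 3

print(is_valid_inv([0.4, 0.6], [0.1, 0.2], [0.2, 0.3]))   # True
print(is_valid_inv([0.7, 0.5], [0.1, 0.2], [0.2, 0.3]))   # False: lower > upper
```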

2.2 Hesitant Fuzzy Set

Definition 3.
(See Torra and Narukawa, 2009; Torra, 2010.) Let X be a universe of discourse. A hesitant fuzzy set on X is defined by
\[ F=\big\{\big\langle x,{h_{F}}(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where ${h_{F}}(x)$, referred to as the hesitant fuzzy element, is a set of some values in $[0,1]$ denoting the possible membership degree of the element $x\in X$ to the set F.
Definition 4.
(See Chen et al., 2013.) Let X be a non-empty finite set. An interval hesitant fuzzy set on X is represented by
\[ E=\big\{\big\langle x,{\tilde{h}_{E}}(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where ${\tilde{h}_{E}}(x)$ is a set of some different interval values in $[0,1]$, which denotes the possible membership degrees of the element $x\in X$ to the set E, ${\tilde{h}_{E}}(x)$ can be represented by an interval hesitant fuzzy element $\tilde{h}$ that is denoted by $\{\tilde{\gamma }|\tilde{\gamma }\in \tilde{h}\}$, where $\tilde{\gamma }=[{\gamma ^{L}},{\gamma ^{U}}]$ is an interval number.

2.3 Neutrosophic Hesitant Fuzzy Sets

Definition 5.
(See Ye, 2015b.) Let X be a fixed set. Then a single valued neutrosophic hesitant fuzzy set n on X is defined as
(1)
\[ n=\big\{\big\langle x,t(x),i(x),f(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
in which $t(x)$, $i(x)$ and $f(x)$ represent three sets of some values in $[0,1]$, denoting, respectively, the possible truth, indeterminacy and falsity membership degrees of the element $x\in X$ to the set n. The membership degrees $t(x)$, $i(x)$ and $f(x)$ satisfy the following conditions:
\[ 0\leqslant \delta ,\gamma ,\eta \leqslant 1,\hspace{1em}0\leqslant {\delta ^{+}}+{\gamma ^{+}}+{\eta ^{+}}\leqslant 3,\]
where $\delta \in t(x)$, $\gamma \in i(x)$, $\eta \in f(x)$, ${\delta ^{+}}\in {t^{+}}(x)={\textstyle\bigcup _{\delta \in t(x)}}\max t(x)$, ${\gamma ^{+}}\in {i^{+}}(x)={\textstyle\bigcup _{\gamma \in i(x)}}\max i(x)$ and ${\eta ^{+}}\in {f^{+}}(x)={\textstyle\bigcup _{\eta \in f(x)}}\max f(x)$ for all $x\in X$.
For convenience, the triplet $n(x)=\langle t(x),i(x),f(x)\rangle $ is denoted by $n=\langle t,i,f\rangle $ which we call single-valued neutrosophic hesitant fuzzy element (SVNHFE).
Definition 6.
(See Liu and Shi, 2015.) Let X be a non-empty finite set, an interval neutrosophic hesitant fuzzy set on X is represented by
(2)
\[ \tilde{n}=\big\{\big\langle x,\tilde{t}(x),\tilde{i}(x),\tilde{f}(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where $\tilde{t}(x)=\{\tilde{\gamma }|\tilde{\gamma }\in \tilde{t}(x)\}$, $\tilde{i}(x)=\{\tilde{\gamma }|\tilde{\gamma }\in \tilde{i}(x)\}$ and $\tilde{f}(x)=\{\tilde{\gamma }|\tilde{\gamma }\in \tilde{f}(x)\}$ are three sets of some interval values in the real unit interval $[0,1]$, which denote the possible truth, indeterminacy and falsity membership hesitant degrees of the element $x\in X$ to the set $\tilde{n}$.
The membership values satisfy the limits:
\[ \tilde{\gamma }=\big[{\gamma ^{L}},{\gamma ^{U}}\big]\subseteq [0,1],\hspace{1em}\tilde{\delta }=\big[{\delta ^{L}},{\delta ^{U}}\big]\subseteq [0,1],\hspace{1em}\tilde{\eta }=\big[{\eta ^{L}},{\eta ^{U}}\big]\subseteq [0,1]\]
and $0\leqslant \sup {\tilde{\gamma }^{+}}+\sup {\tilde{\delta }^{+}}+\sup {\tilde{\eta }^{+}}\leqslant 3$, where ${\tilde{\gamma }^{+}}={\textstyle\bigcup _{\tilde{\gamma }\in \tilde{t}(x)}}\max \{\tilde{\gamma }\}$, ${\tilde{\delta }^{+}}={\textstyle\bigcup _{\tilde{\delta }\in \tilde{i}(x)}}\max \{\tilde{\delta }\}$ and ${\tilde{\eta }^{+}}={\textstyle\bigcup _{\tilde{\eta }\in \tilde{f}(x)}}\max \{\tilde{\eta }\}$.
For convenience, we represent the set $\tilde{n}=\{\tilde{t}(x),\tilde{i}(x),\tilde{f}(x)\}$ with the symbol $\tilde{n}=\{\tilde{t},\tilde{i},\tilde{f}\}$ and call it interval neutrosophic hesitant fuzzy element (INHFE).

3 Score Function, Accuracy Function and Distance Function of SVNHFEs

Definition 7.
(See Biswas et al., 2016b.) Let ${n_{i}}=\langle {t_{i}},{i_{i}},{f_{i}}\rangle $ be an SVNHFE, and let ${l_{t}}$, ${l_{i}}$ and ${l_{f}}$ be the numbers of elements in ${t_{i}}$, ${i_{i}}$ and ${f_{i}}$, respectively. Then the score function $S({n_{i}})$, the accuracy function $A({n_{i}})$ and the certainty function $C({n_{i}})$ of ${n_{i}}$ are defined as
  • 1.
    \[ \hspace{-19.91684pt}S({n_{i}})=\frac{1}{3}\bigg[2+\frac{1}{{l_{t}}}\sum \limits_{\gamma \in t}\gamma -\frac{1}{{l_{i}}}\sum \limits_{\delta \in i}\delta -\frac{1}{{l_{f}}}\sum \limits_{\eta \in f}\eta \bigg];\]
  • 2.
    \[ \hspace{-19.91684pt}A({n_{i}})=\frac{1}{{l_{t}}}\sum \limits_{\gamma \in t}\gamma -\frac{1}{{l_{f}}}\sum \limits_{\eta \in f}\eta ;\]
  • 3.
    \[ \hspace{-19.91684pt}C({n_{i}})=\frac{1}{{l_{t}}}\sum \limits_{\gamma \in t}\gamma .\]
Definition 8.
Let ${n_{1}}=\langle {t_{1}},{i_{1}},{f_{1}}\rangle $ and ${n_{2}}=\langle {t_{2}},{i_{2}},{f_{2}}\rangle $ be any two SVNHFEs. Then the following rules can be defined for comparison purposes:
  • 1. if $S({n_{1}})>S({n_{2}})$, then ${n_{1}}$ is greater than ${n_{2}}$, that is, ${n_{1}}$ is superior to ${n_{2}}$, denoted by ${n_{1}}\succ {n_{2}}$;
  • 2. if $S({n_{1}})=S({n_{2}})$ and $A({n_{1}})>A({n_{2}})$, then ${n_{1}}$ is greater than ${n_{2}}$, that is, ${n_{1}}$ is superior to ${n_{2}}$, denoted by ${n_{1}}\succ {n_{2}}$;
  • 3. if $S({n_{1}})=S({n_{2}})$, $A({n_{1}})=A({n_{2}})$ and $C({n_{1}})>C({n_{2}})$, then ${n_{1}}$ is greater than ${n_{2}}$, that is, ${n_{1}}$ is superior to ${n_{2}}$, denoted by ${n_{1}}\succ {n_{2}}$;
  • 4. if $S({n_{1}})=S({n_{2}})$, $A({n_{1}})=A({n_{2}})$ and $C({n_{1}})=C({n_{2}})$, then ${n_{1}}$ is equal to ${n_{2}}$, that is, ${n_{1}}$ is indifferent to ${n_{2}}$, denoted by ${n_{1}}\sim {n_{2}}$.
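The score, accuracy and certainty functions of Definition 7 and the ranking rules of Definition 8 can be sketched in Python as follows. This is a minimal illustration, not the authors' code; an SVNHFE is modelled as a triple of lists of membership values in $[0,1]$.

```python
# Score, accuracy and certainty of an SVNHFE n = (t, i, f), where t, i, f
# are lists of possible truth, indeterminacy and falsity degrees in [0, 1].

def mean(xs):
    return sum(xs) / len(xs)

def score(n):                       # Definition 7, item 1
    t, i, f = n
    return (2 + mean(t) - mean(i) - mean(f)) / 3

def accuracy(n):                    # Definition 7, item 2
    t, _, f = n
    return mean(t) - mean(f)

def certainty(n):                   # Definition 7, item 3
    t, _, _ = n
    return mean(t)

def compare(n1, n2):
    """Return 1 if n1 > n2, -1 if n1 < n2, 0 if indifferent (Definition 8)."""
    for key in (score, accuracy, certainty):
        if key(n1) != key(n2):
            return 1 if key(n1) > key(n2) else -1
    return 0

n1 = ([0.6, 0.7], [0.1], [0.2, 0.3])
n2 = ([0.5], [0.2, 0.3], [0.3])
print(score(n1))        # (2 + 0.65 - 0.1 - 0.25) / 3
print(compare(n1, n2))  # 1, since score(n1) > score(n2)
```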
Definition 9.
Let ${\tilde{n}_{i}}=\langle {\tilde{t}_{i}},{\tilde{i}_{i}},{\tilde{f}_{i}}\rangle $ be an INHFE. Then the score function $S({\tilde{n}_{i}})$, the accuracy function $A({\tilde{n}_{i}})$ and certainty function $C({\tilde{n}_{i}})$ of ${\tilde{n}_{i}}$ are defined as follows:
  • 1.
    \[ \hspace{-19.91684pt}S({\tilde{n}_{i}})=\frac{1}{6}\bigg[4+\frac{1}{{l_{t}}}\sum \limits_{\gamma \in t}\big({\gamma ^{L}}+{\gamma ^{U}}\big)-\frac{1}{{l_{i}}}\sum \limits_{\delta \in i}\big({\delta ^{L}}+{\delta ^{U}}\big)-\frac{1}{{l_{f}}}\sum \limits_{\eta \in f}\big({\eta ^{L}}+{\eta ^{U}}\big)\bigg];\]
  • 2.
    \[ \hspace{-19.91684pt}A({\tilde{n}_{i}})=\frac{1}{2}\bigg[\frac{1}{{l_{t}}}\sum \limits_{\gamma \in t}\big({\gamma ^{L}}+{\gamma ^{U}}\big)-\frac{1}{{l_{f}}}\sum \limits_{\eta \in f}\big({\eta ^{L}}+{\eta ^{U}}\big)\bigg];\]
  • 3.
    \[ \hspace{-19.91684pt}C({\tilde{n}_{i}})=\frac{1}{2}\bigg[\frac{1}{{l_{t}}}\sum \limits_{\gamma \in t}\big({\gamma ^{L}}+{\gamma ^{U}}\big)\bigg].\]
Definition 10.
(See Biswas et al., 2016b.) Let ${n_{1}}=\langle {t_{1}},{i_{1}},{f_{1}}\rangle $ and ${n_{2}}=\langle {t_{2}},{i_{2}},{f_{2}}\rangle $ be any two SVNHFEs. Then the normalized Hamming distance between ${n_{1}}$ and ${n_{2}}$ is defined as
(3)
\[\begin{aligned}{}d({n_{1}},{n_{2}})=& \frac{1}{3}\bigg(\bigg|\frac{1}{{l_{{t_{1}}}}}\sum \limits_{{\gamma _{1}}\in {t_{1}}}{\gamma _{1}}-\frac{1}{{l_{{t_{2}}}}}\sum \limits_{{\gamma _{2}}\in {t_{2}}}{\gamma _{2}}\bigg|+\bigg|\frac{1}{{l_{{i_{1}}}}}\sum \limits_{{\delta _{1}}\in {i_{1}}}{\delta _{1}}-\frac{1}{{l_{{i_{2}}}}}\sum \limits_{{\delta _{2}}\in {i_{2}}}{\delta _{2}}\bigg|\\ {} & +\bigg|\frac{1}{{l_{{f_{1}}}}}\sum \limits_{{\eta _{1}}\in {f_{1}}}{\eta _{1}}-\frac{1}{{l_{{f_{2}}}}}\sum \limits_{{\eta _{2}}\in {f_{2}}}{\eta _{2}}\bigg|\bigg),\end{aligned}\]
where ${l_{{t_{k}}}}$, ${l_{{i_{k}}}}$ and ${l_{{f_{k}}}}$ are the numbers of possible membership values in ${t_{k}}$, ${i_{k}}$ and ${f_{k}}$ for $k=1,2$, respectively.
The distance function $d({n_{1}},{n_{2}})$ of ${n_{1}}$ and ${n_{2}}$ satisfies the following properties:
  • 1. $0\leqslant d({n_{1}},{n_{2}})\leqslant 1$;
  • 2. $d({n_{1}},{n_{2}})=0$, if and only if ${n_{1}}={n_{2}}$;
  • 3. $d({n_{1}},{n_{2}})=d({n_{2}},{n_{1}})$;
  • 4. If ${n_{1}}\subseteq {n_{2}}\subseteq {n_{3}}$, then $d({n_{1}},{n_{2}})\leqslant d({n_{1}},{n_{3}})$ and $d({n_{2}},{n_{3}})\leqslant d({n_{1}},{n_{3}})$, where ${n_{3}}$ is an SVNHFE on X.
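The normalized Hamming distance of Eq. (3) admits a direct transcription. Again this is a minimal sketch under the same list-based SVNHFE representation, not the authors' code.

```python
# Normalized Hamming distance (Eq. (3)) between SVNHFEs n = (t, i, f),
# where t, i, f are lists of membership values in [0, 1].

def mean(xs):
    return sum(xs) / len(xs)

def hamming(n1, n2):
    (t1, i1, f1), (t2, i2, f2) = n1, n2
    return (abs(mean(t1) - mean(t2))
            + abs(mean(i1) - mean(i2))
            + abs(mean(f1) - mean(f2))) / 3

n1 = ([0.3, 0.5], [0.2], [0.4])
n2 = ([0.7], [0.1, 0.3], [0.2])
print(hamming(n1, n2))   # (0.3 + 0.0 + 0.2) / 3
print(hamming(n1, n1))   # 0.0, property 2
```

Properties 1 and 3 hold by construction: each summand is an absolute difference of averages in $[0,1]$, so the distance lies in $[0,1]$ and is symmetric.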
Definition 11.
Let ${\tilde{n}_{1}}=\langle {\tilde{t}_{1}},{\tilde{i}_{1}},{\tilde{f}_{1}}\rangle $ and ${\tilde{n}_{2}}=\langle {\tilde{t}_{2}},{\tilde{i}_{2}},{\tilde{f}_{2}}\rangle $ be any two INHFEs. Then the normalized Hamming distance between ${\tilde{n}_{1}}$ and ${\tilde{n}_{2}}$ is defined as follows:
(4)
\[ d({\tilde{n}_{1}},{\tilde{n}_{2}})=\frac{1}{6}\left(\begin{array}{l}\Big|\frac{1}{{l_{{t_{1}}}}}\textstyle\sum \limits_{{\gamma _{1}}\in {t_{1}}}{\gamma _{1}^{L}}-\frac{1}{{l_{{t_{2}}}}}\textstyle\sum \limits_{{\gamma _{2}}\in {t_{2}}}{\gamma _{2}^{L}}\Big|+\Big|\frac{1}{{l_{{t_{1}}}}}\textstyle\sum \limits_{{\gamma _{1}}\in {t_{1}}}{\gamma _{1}^{U}}-\frac{1}{{l_{{t_{2}}}}}\textstyle\sum \limits_{{\gamma _{2}}\in {t_{2}}}{\gamma _{2}^{U}}\Big|\\ {} \hspace{1em}+\Big|\frac{1}{{l_{{i_{1}}}}}\textstyle\sum \limits_{{\delta _{1}}\in {i_{1}}}{\delta _{1}^{L}}-\frac{1}{{l_{{i_{2}}}}}\textstyle\sum \limits_{{\delta _{2}}\in {i_{2}}}{\delta _{2}^{L}}\Big|+\Big|\frac{1}{{l_{{i_{1}}}}}\textstyle\sum \limits_{{\delta _{1}}\in {i_{1}}}{\delta _{1}^{U}}-\frac{1}{{l_{{i_{2}}}}}\textstyle\sum \limits_{{\delta _{2}}\in {i_{2}}}{\delta _{2}^{U}}\Big|\\ {} \hspace{1em}+\Big|\frac{1}{{l_{{f_{1}}}}}\textstyle\sum \limits_{{\eta _{1}}\in {f_{1}}}{\eta _{1}^{L}}-\frac{1}{{l_{{f_{2}}}}}\textstyle\sum \limits_{{\eta _{2}}\in {f_{2}}}{\eta _{2}^{L}}\Big|+\Big|\frac{1}{{l_{{f_{1}}}}}\textstyle\sum \limits_{{\eta _{1}}\in {f_{1}}}{\eta _{1}^{U}}-\frac{1}{{l_{{f_{2}}}}}\textstyle\sum \limits_{{\eta _{2}}\in {f_{2}}}{\eta _{2}^{U}}\Big|\end{array}\right),\]
where ${l_{{\tilde{t}_{k}}}}$, ${l_{{\tilde{i}_{k}}}}$ and ${l_{{\tilde{f}_{k}}}}$ are the numbers of possible membership values in ${\tilde{t}_{k}}$, ${\tilde{i}_{k}}$ and ${\tilde{f}_{k}}$ for $k=1,2$, respectively.
The distance function $d({\tilde{n}_{1}},{\tilde{n}_{2}})$ of ${\tilde{n}_{1}}$ and ${\tilde{n}_{2}}$ satisfies the following properties:
  • 1. $0\leqslant d({\tilde{n}_{1}},{\tilde{n}_{2}})\leqslant 1$;
  • 2. $d({\tilde{n}_{1}},{\tilde{n}_{2}})=0$ if and only if ${\tilde{n}_{1}}={\tilde{n}_{2}}$;
  • 3. $d({\tilde{n}_{1}},{\tilde{n}_{2}})=d({\tilde{n}_{2}},{\tilde{n}_{1}})$;
  • 4. If ${\tilde{n}_{1}}\subseteq {\tilde{n}_{2}}\subseteq {\tilde{n}_{3}}$, ${\tilde{n}_{3}}$ is an INHFE on X, then $d({\tilde{n}_{1}},{\tilde{n}_{2}})\leqslant d({\tilde{n}_{1}},{\tilde{n}_{3}})$ and $d({\tilde{n}_{2}},{\tilde{n}_{3}})\leqslant d({\tilde{n}_{1}},{\tilde{n}_{3}})$.
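Eq. (4) can be sketched analogously, with each of $\tilde{t}$, $\tilde{i}$, $\tilde{f}$ represented as a list of $[{\gamma ^{L}},{\gamma ^{U}}]$ pairs. This is an illustrative sketch, not the authors' code; the example values are hypothetical.

```python
# Normalized Hamming distance (Eq. (4)) between INHFEs, each a triple of
# lists of [lower, upper] interval membership values in [0, 1].

def mean_lo(xs):
    return sum(x[0] for x in xs) / len(xs)

def mean_up(xs):
    return sum(x[1] for x in xs) / len(xs)

def interval_hamming(m1, m2):
    total = 0.0
    for h1, h2 in zip(m1, m2):   # the t, i, f components in turn
        total += abs(mean_lo(h1) - mean_lo(h2)) + abs(mean_up(h1) - mean_up(h2))
    return total / 6

a = ([[0.3, 0.5]], [[0.1, 0.2]], [[0.2, 0.4]])
b = ([[0.5, 0.7], [0.6, 0.8]], [[0.0, 0.1]], [[0.1, 0.3]])
print(interval_hamming(a, b))   # approximately 0.15
print(interval_hamming(a, a))   # 0.0
```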

4 GRA Strategy for MADM with SVNHFS

In this section, we propose a GRA-based strategy to find the best alternative in MADM under the SVNHFS environment. Let $A=\{{A_{1}},{A_{2}},\dots ,{A_{m}}\}$ be the discrete set of m alternatives and $C=\{{C_{1}},{C_{2}},\dots ,{C_{n}}\}$ be the set of n attributes of an SVNHFS-based MADM problem. Suppose that the rating value of the i-th alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ over the attribute ${C_{j}}$ $(j=1,2,\dots ,n)$ is given as an SVNHFS ${x_{ij}}=({t_{ij}},{i_{ij}},{f_{ij}})$, where ${t_{ij}}=\{{\gamma _{ij}}\mid {\gamma _{ij}}\in {t_{ij}},\hspace{0.1667em}0\leqslant {\gamma _{ij}}\leqslant 1\}$, ${i_{ij}}=\{{\delta _{ij}}\mid {\delta _{ij}}\in {i_{ij}},\hspace{0.1667em}0\leqslant {\delta _{ij}}\leqslant 1\}$ and ${f_{ij}}=\{{\eta _{ij}}\mid {\eta _{ij}}\in {f_{ij}},0\leqslant {\eta _{ij}}\leqslant 1\}$ indicate the possible truth, indeterminacy and falsity membership degrees of the rating ${x_{ij}}$ for $i=1,2,\dots ,m$ and $j=1,2,\dots ,n$. With these rating values, we construct a decision matrix $X={({x_{ij}})_{m\times n}}$, whose entries assume the form of SVNHFSs. The decision matrix is constructed as follows:
(5)
\[ X=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}{x_{11}}& {x_{12}}& \cdots & {x_{1n}}\\ {} {x_{21}}& {x_{22}}& \cdots & {x_{2n}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {x_{m1}}& {x_{m2}}& \cdots & {x_{mn}}\end{array}\right].\]
We now propose a GRA-based MADM strategy to determine the most desirable alternative under the following cases:
Case 1a. Completely known attribute weights.
The weight vector of attributes prescribed by the decision maker is $w=({w_{1}},{w_{2}},\dots ,{w_{n}})$, where ${w_{j}}\in [0,1]$ and ${\textstyle\sum _{j=1}^{n}}{w_{j}}=1$.
  • Step 1. Determine the single valued neutrosophic hesitant fuzzy positive ideal solution (SVNHFPIS) ${A^{+}}$ and single valued neutrosophic hesitant fuzzy negative ideal solution (SVNHFNIS) ${A^{-}}$ of alternatives in the matrix $X={({x_{ij}})_{m\times n}}$ by the following equations, respectively.
    • ∙ For benefit type attributes:
      (6)
      \[\begin{aligned}{}{A^{+}}=& \big({A_{1}^{+}},{A_{2}^{+}},\dots ,{A_{n}^{+}})\\ {} =& \Big\{\Big\langle \underset{i}{\max }\{{x_{i1}}\},\underset{i}{\max }\{{x_{i2}}\},\dots ,\underset{i}{\max }\{{x_{in}}\}\Big\rangle \Big\},\end{aligned}\]
      (7)
      \[\begin{aligned}{}{A^{-}}=& \big({A_{1}^{-}},{A_{2}^{-}},\dots ,{A_{n}^{-}})\\ {} =& \Big\{\Big\langle \underset{i}{\min }\{{x_{i1}}\},\underset{i}{\min }\{{x_{i2}}\},\dots ,\underset{i}{\min }\{{x_{in}}\}\Big\rangle \Big\}.\end{aligned}\]
    • ∙ For cost type attributes:
      (8)
      \[\begin{aligned}{}{A^{+}}=& \big({A_{1}^{+}},{A_{2}^{+}},\dots ,{A_{n}^{+}})\\ {} =& \Big\{\Big\langle \underset{i}{\min }\{{x_{i1}}\},\underset{i}{\min }\{{x_{i2}}\},\dots ,\underset{i}{\min }\{{x_{in}}\}\Big\rangle \Big\},\end{aligned}\]
      (9)
      \[\begin{aligned}{}{A^{-}}=& \big({A_{1}^{-}},{A_{2}^{-}},\dots ,{A_{n}^{-}})\\ {} =& \Big\{\Big\langle \underset{i}{\max }\{{x_{i1}}\},\underset{i}{\max }\{{x_{i2}}\},\dots ,\underset{i}{\max }\{{x_{in}}\}\Big\rangle \Big\}.\end{aligned}\]
    Using Definition 8, we compare the attribute values ${x_{ij}}$ via the score, accuracy and certainty values of SVNHFEs.
  • Step 2. Determine the grey relational coefficient of each alternative from ${A^{+}}$ and ${A^{-}}$ by using the following equations:
    (10)
    \[ \hspace{-8.0pt}{\xi _{ij}^{+}}=\frac{{\min _{1\leqslant i\leqslant m}}{\min _{1\leqslant j\leqslant n}}D({x_{ij}},{A_{j}^{+}})+\rho {\max _{1\leqslant i\leqslant m}}{\max _{1\leqslant j\leqslant n}}D({x_{ij}},{A_{j}^{+}})}{D({x_{ij}},{A_{j}^{+}})+\rho {\max _{1\leqslant i\leqslant m}}{\max _{1\leqslant j\leqslant n}}D({x_{ij}},{A_{j}^{+}})},\]
    (11)
    \[ \hspace{-8.0pt}{\xi _{ij}^{-}}=\frac{{\min _{1\leqslant i\leqslant m}}{\min _{1\leqslant j\leqslant n}}D({x_{ij}},{A_{j}^{-}})+\rho {\max _{1\leqslant i\leqslant m}}{\max _{1\leqslant j\leqslant n}}D({x_{ij}},{A_{j}^{-}})}{D({x_{ij}},{A_{j}^{-}})+\rho {\max _{1\leqslant i\leqslant m}}{\max _{1\leqslant j\leqslant n}}D({x_{ij}},{A_{j}^{-}})},\]
    where the identification coefficient is generally set as $\rho =0.5$.
  • Step 3. Calculate the degree of grey relational coefficient of each alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ from ${A^{+}}$ and ${A^{-}}$ by using Eq. (10) and Eq. (11), respectively, for $i=1,2,\dots ,m$:
    (12)
    \[\begin{aligned}{}{\xi _{i}^{+}}=& {\sum \limits_{j=1}^{n}}{w_{j}}{\xi _{ij}^{+}},\end{aligned}\]
    (13)
    \[\begin{aligned}{}{\xi _{i}^{-}}=& {\sum \limits_{j=1}^{n}}{w_{j}}{\xi _{ij}^{-}}.\end{aligned}\]
  • Step 4. Calculate the relative closeness coefficient ${\xi _{i}}$ for each alternative ${A_{i}}(i=1,2,\dots ,m)$ with respect to ${A^{+}}$ by the following equation:
    (14)
    \[ {\xi _{i}}=\frac{{\xi _{i}^{+}}}{{\xi _{i}^{+}}+{\xi _{i}^{-}}}\hspace{1em}\text{for}\hspace{2.5pt}i=1,2,\dots ,m.\]
  • Step 5. Rank the alternatives according to the descending order of relative closeness coefficient values of alternatives to determine the best one.
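Steps 2–5 can be sketched as follows, assuming the Hamming distances of each rating to the ideal solutions (Step 1) have already been collected into matrices. The sketch and the data below are hypothetical, not taken from the paper's examples.

```python
# Steps 2-5 of the NH-MADM strategy from precomputed distance matrices:
# grey relational coefficients (Eqs. (10)-(11)), weighted degrees
# (Eqs. (12)-(13)), relative closeness (Eq. (14)) and the final ranking.

def grey_coeffs(D, rho=0.5):
    """D[i][j] = distance of x_ij to the j-th ideal value; returns xi[i][j]."""
    dmin = min(min(row) for row in D)
    dmax = max(max(row) for row in D)
    return [[(dmin + rho * dmax) / (d + rho * dmax) for d in row] for row in D]

def rank(D_plus, D_minus, w, rho=0.5):
    xi_p = grey_coeffs(D_plus, rho)                               # Eq. (10)
    xi_m = grey_coeffs(D_minus, rho)                              # Eq. (11)
    gp = [sum(wj * x for wj, x in zip(w, row)) for row in xi_p]   # Eq. (12)
    gm = [sum(wj * x for wj, x in zip(w, row)) for row in xi_m]   # Eq. (13)
    closeness = [p / (p + m) for p, m in zip(gp, gm)]             # Eq. (14)
    order = sorted(range(len(closeness)), key=lambda i: -closeness[i])
    return closeness, order

# Hypothetical distances of 3 alternatives x 2 attributes to A+ and A-
D_plus = [[0.10, 0.20], [0.00, 0.05], [0.30, 0.25]]
D_minus = [[0.20, 0.10], [0.30, 0.25], [0.00, 0.05]]
closeness, order = rank(D_plus, D_minus, w=[0.6, 0.4])
print(order)   # [1, 0, 2]: alternative A2 ranks first
```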
However, in real decision making, the information about the attribute weights is often incompletely known or completely unknown due to the decision makers' limited expertise about the problem domain. In this case, we propose some models for determining the weight vector of the attributes under the following cases:
Case 2a. Incompletely known attribute weights.
In this case, we have to determine the attribute weights to find out the best alternative. The incomplete attribute weight information H can be considered in the following form (Park et al., 2011; Park, 2004; Park et al., 1997):
  • 1. A weak ranking: $\{{w_{i}}\geqslant {w_{j}}\}$, $i\ne j$;
  • 2. A strict ranking: $\{{w_{i}}-{w_{j}}\geqslant {\epsilon _{i}}(>0)\}$, $i\ne j$;
  • 3. A ranking of differences: $\{{w_{i}}-{w_{j}}\geqslant {w_{k}}-{w_{p}}\}$, $i\ne j\ne k\ne p$;
  • 4. A ranking with multiples: $\{{w_{i}}\geqslant {\alpha _{i}}{w_{j}}\}$, $0\leqslant {\alpha _{i}}\leqslant 1$, $i\ne j$;
  • 5. An interval form: $\{{\beta _{i}}\leqslant {w_{i}}\leqslant {\beta _{i}}+{\epsilon _{i}}(>0)\}$, $0\leqslant {\beta _{i}}\leqslant {\beta _{i}}+{\epsilon _{i}}\leqslant 1$.
For a particular decision making problem, we can take the attribute weight information as a subset of the above relationships; we denote the considered weight information set by H. The similarity measure of ${x_{ij}}$ to ${A_{j}^{+}}$ is defined as follows:
(15)
\[ S\big({x_{ij}},{A_{j}^{+}}\big)=1-\frac{D({x_{ij}},{A_{j}^{+}})}{{\textstyle\textstyle\sum _{j=1}^{n}}D({x_{ij}},{A_{j}^{+}})},\]
where $D({x_{ij}}$, ${A_{j}^{+}})$ $(i=1,2,\dots ,m)$ denotes the Hamming distance between ${x_{ij}}$ to ${A_{j}^{+}}$. Similarly, the similarity measure of ${x_{ij}}$ to ${A_{j}^{-}}$ is determined as follows:
(16)
\[ S({x_{ij}},{A_{j}^{-}})=1-\frac{D({x_{ij}},{A_{j}^{-}})}{{\textstyle\textstyle\sum _{j=1}^{n}}D({x_{ij}},{A_{j}^{-}})},\]
where $D({x_{ij}}$, ${A_{j}^{-}})$ $(i=1,2,\dots ,m)$ denotes the Hamming distance between ${x_{ij}}$ to ${A_{j}^{-}}$. Then the weighted similarity measures of the alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ to ${A^{+}}=({A_{1}^{+}},{A_{2}^{+}},\dots ,{A_{n}^{+}})$ and ${A^{-}}=({A_{1}^{-}},{A_{2}^{-}},\dots ,{A_{n}^{-}})$ are respectively determined as follows:
(17)
\[\begin{aligned}{}{S_{i}^{+}}=& {\sum \limits_{j=1}^{n}}S\big({x_{ij}},{A_{j}^{+}}\big),\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(18)
\[\begin{aligned}{}{S_{i}^{-}}=& {\sum \limits_{j=1}^{n}}S\big({x_{ij}},{A_{j}^{-}}\big),\hspace{1em}i=1,2,\dots ,m.\end{aligned}\]
An acceptable weight vector $w=({w_{1}},{w_{2}},\dots ,{w_{n}})$ should make all the similarities $({S_{1}^{+}},{S_{2}^{+}},\dots ,{S_{m}^{+}})$ to the ideal solution $({A_{1}^{+}},{A_{2}^{+}},\dots ,{A_{n}^{+}})$ as large as possible and the similarities $({S_{1}^{-}},{S_{2}^{-}},\dots ,{S_{m}^{-}})$ to the ideal solution $({A_{1}^{-}},{A_{2}^{-}},\dots ,{A_{n}^{-}})$ as small as possible under the condition $w\in H$. Therefore, we can set up the following multiple objective non-linear optimization model to determine the weight vector:
(19)
\[ \text{Model-1}\hspace{2.5pt}\left\{\begin{array}{l}\max {S_{i}}(w)=\frac{{\textstyle\textstyle\sum _{j=1}^{n}}{w_{j}}S({x_{ij}},{A_{j}^{+}})}{{\textstyle\textstyle\sum _{j=1}^{n}}{w_{j}}S({x_{ij}},{A_{j}^{+}})+{\textstyle\textstyle\sum _{j=1}^{n}}{w_{j}}S({x_{ij}},{A_{j}^{-}})},\hspace{1em}i=1,2,\dots ,m\\ {} \text{subject to}\hspace{5pt}w\in H.\end{array}\right.\]
Since each alternative is non-inferior, there exists no preference relation among the alternatives. Then we can aggregate the above multiple objective optimization models with equal weights into the following single objective optimization model:
(20)
\[ \text{Model-2}\hspace{2.5pt}\left\{\begin{array}{l}\max S(w)={\textstyle\sum \limits_{i=1}^{m}}{S_{i}}(w)={\textstyle\sum \limits_{i=1}^{m}}\frac{{\textstyle\textstyle\sum _{j=1}^{n}}{w_{j}}S({x_{ij}},{A_{j}^{+}})}{{\textstyle\textstyle\sum _{j=1}^{n}}{w_{j}}S({x_{ij}},{A_{j}^{+}})+{\textstyle\textstyle\sum _{j=1}^{n}}{w_{j}}S({x_{ij}},{A_{j}^{-}})},\\ {} \text{subject to}\hspace{5pt}w\in H,\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{w_{j}}=1,\hspace{2.5pt}{w_{j}}\geqslant 0,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
By solving Model-2, we obtain the optimal solution $w=({w_{1}},{w_{2}},\dots ,{w_{n}})$ that can be used as the weight vector of the attributes. Using this weight vector and following Step 3 to Step 5, we can easily determine the relative closeness coefficient ${S_{i}}$ $(i=1,2,\dots ,m)$ of each alternative and identify the optimal alternative.
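Model-2 is a non-linear fractional program that is typically handed to a dedicated solver (the illustrative example in Section 6 uses LINGO 13), but for a small number of attributes a coarse grid search over the feasible set H already illustrates the computation. The Python sketch below is illustrative only: the similarity matrices and the box bounds that make up H are hypothetical.

```python
from itertools import product

# Hypothetical similarities of four alternatives to the PIS and the NIS
# (rows: alternatives, columns: attributes); all numbers are illustrative.
S_pos = [[0.69, 0.77, 0.54], [0.29, 0.71, 1.00],
         [0.56, 0.67, 0.78], [1.00, 1.00, 0.00]]
S_neg = [[0.21, 0.81, 1.00], [0.77, 0.77, 0.47],
         [1.00, 1.00, 0.00], [0.63, 0.72, 0.66]]
# Incomplete weight information H: a box bound per attribute, plus sum(w) = 1.
bounds = [(0.30, 0.40), (0.20, 0.30), (0.35, 0.45)]
step = 0.01

def objective(w):
    """Aggregated objective of Model-2: sum over i of S_i^+ / (S_i^+ + S_i^-)."""
    total = 0.0
    for sp, sn in zip(S_pos, S_neg):
        num = sum(wj * s for wj, s in zip(w, sp))
        den = num + sum(wj * s for wj, s in zip(w, sn))
        total += num / den
    return total

grids = [[round(lo + k * step, 2) for k in range(int(round((hi - lo) / step)) + 1)]
         for lo, hi in bounds]
best_w, best_val = None, float("-inf")
for w in product(*grids):
    if abs(sum(w) - 1.0) > 1e-6:        # keep only weight vectors on the simplex
        continue
    val = objective(w)
    if val > best_val:
        best_w, best_val = w, val

print("optimal weights:", best_w)
```

For larger problems a non-linear solver should replace the grid search; the sketch only conveys the structure of the optimization.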
Case 3a. Completely unknown attribute weights.
Here we construct the following non-linear programming model to determine the weights of attributes.
(21)
\[ \text{Model-3}\hspace{2.5pt}\left\{\begin{array}{l@{\hskip4.0pt}l}\max {S_{i}^{+}}(w)={\textstyle\sum \limits_{j=1}^{n}}{w_{j}}S\big({x_{ij}},{A_{j}^{+}}\big),\hspace{1em}& i=1,2,\dots ,m;\\ {} \text{subject to}\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{w_{j}^{2}}=1,\hspace{1em}& {w_{j}}\geqslant 0,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
We now aggregate the multiple objective optimization models (see Eq. (21)) to obtain the following single-objective optimization model:
(22)
\[ \text{Model-4}\hspace{2.5pt}\left\{\begin{array}{l}\max {S^{+}}(w)={\textstyle\sum \limits_{i=1}^{m}}{S_{i}^{+}}(w)={\textstyle\sum \limits_{i=1}^{m}}{\textstyle\sum \limits_{j=1}^{n}}{w_{j}}S\big({x_{ij}},{A_{j}^{+}}\big),\\ {} \text{subject to}\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{w_{j}^{2}}=1,\hspace{1em}{w_{j}}\geqslant 0,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
To solve Model-4, we consider the following Lagrange function:
(23)
\[ L(w,\phi )={\sum \limits_{i=1}^{m}}{\sum \limits_{j=1}^{n}}{w_{j}}S\big({x_{ij}},{A_{j}^{+}}\big)+\frac{\phi }{2}\bigg({\sum \limits_{j=1}^{n}}{w_{j}^{2}}-1\bigg),\]
where the real number ϕ is called the Lagrange multiplier. Differentiating partially L with respect to ${w_{j}}$ and ϕ, we have the following equations:
(24)
\[\begin{aligned}{}\frac{\partial L}{\partial {w_{j}}}=& {\sum \limits_{i=1}^{m}}S\big({x_{ij}},{A_{j}^{+}}\big)+\phi {w_{j}}=0,\end{aligned}\]
(25)
\[\begin{aligned}{}\frac{\partial L}{\partial \phi }=& \frac{1}{2}\bigg({\sum \limits_{j=1}^{n}}{w_{j}^{2}}-1\bigg)=0.\end{aligned}\]
It follows from Eq. (24) that
(26)
\[ {w_{j}}=\frac{-{\textstyle\textstyle\sum _{i=1}^{m}}S({x_{ij}},{A_{j}^{+}})}{\phi },\hspace{1em}j=1,2,\dots ,n.\]
Putting this value of ${w_{j}}$ in Eq. (25), we have
(27)
\[ \phi =-\sqrt{{\sum \limits_{j=1}^{n}}\bigg({\sum \limits_{i=1}^{m}}S{\big({x_{ij}},{A_{j}^{+}}\big)\bigg)^{2}}},\]
where $\phi <0$, and ${\textstyle\sum _{i=1}^{m}}S({x_{ij}},{A_{j}^{+}})$ denotes the sum of the similarities of all alternatives with respect to the j-th attribute. Combining Eq. (26) and Eq. (27), we obtain
(28)
\[ {w_{j}}=\frac{{\textstyle\textstyle\sum _{i=1}^{m}}S({x_{ij}},{A_{j}^{+}})}{\sqrt{{\textstyle\textstyle\sum _{j=1}^{n}}{\big({\textstyle\textstyle\sum _{i=1}^{m}}S({x_{ij}},{A_{j}^{+}})\big)^{2}}}},\hspace{1em}j=1,2,\dots ,n.\]
Then we normalize ${w_{j}}$ $(j=1,2,\dots ,n)$ so that the weights sum to one, and obtain the normalized weight of the j-th attribute:
(29)
\[ {\bar{w}_{j}}=\frac{{w_{j}}}{{\textstyle\textstyle\sum _{j=1}^{n}}{w_{j}}},\]
and consequently, we obtain the weight vector of the attribute as $\bar{w}=({\bar{w}_{1}},{\bar{w}_{2}},\dots ,{\bar{w}_{n}})$.
Following Step 3 to Step 5, presented in the current section, we can easily determine the best alternative by using this weight vector.
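In contrast to Model-2, Model-4 admits the closed-form solution of Eqs. (26)–(29), so no solver is required. A minimal Python sketch, assuming a hypothetical $2\times 2$ similarity matrix $S$ with entries $S({x_{ij}},{A_{j}^{+}})$:

```python
from math import sqrt

# Hypothetical similarities S(x_ij, A_j^+) of m = 2 alternatives over n = 2 attributes.
S = [[0.6, 0.8],
     [0.9, 0.4]]

# Eq. (28): w_j is the j-th column sum of S divided by the norm of all column sums.
col = [sum(row[j] for row in S) for j in range(2)]     # column sums: [1.5, 1.2]
norm = sqrt(sum(c * c for c in col))
w = [c / norm for c in col]

# Eq. (29): normalize so that the weights sum to one.
total = sum(w)
w_bar = [wj / total for wj in w]

print([round(x, 4) for x in w_bar])   # → [0.5556, 0.4444]
```

Note that the normalization of Eq. (29) cancels the common denominator of Eq. (28), so the normalized weight of the j-th attribute reduces to the j-th column sum of the similarity matrix divided by the total of all its entries.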

5 GRA Strategy for MADM with INHFS

We consider a MADM problem in which $A=\{{A_{1}},{A_{2}},\dots ,{A_{m}}\}$ is the set of alternatives and $C=\{{C_{1}},{C_{2}},\dots ,{C_{n}}\}$ is the set of attributes. We assume that the rating value of the alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ over the attribute ${C_{j}}$ $(j=1,2,\dots ,n)$ is represented by an interval neutrosophic hesitant fuzzy element ${\tilde{n}_{ij}}=({\tilde{t}_{ij}},{\tilde{i}_{ij}},{\tilde{f}_{ij}})$, where ${\tilde{t}_{ij}}=\{{\tilde{\gamma }_{ij}}|{\tilde{\gamma }_{ij}}\in {\tilde{t}_{ij}}\}$, ${\tilde{i}_{ij}}=\{{\tilde{\delta }_{ij}}|{\tilde{\delta }_{ij}}\in {\tilde{i}_{ij}}\}$ and ${\tilde{f}_{ij}}=\{{\tilde{\eta }_{ij}}|{\tilde{\eta }_{ij}}\in {\tilde{f}_{ij}}\}$ are three sets of some interval values in the real unit interval $[0,1]$. These values denote the possible truth, indeterminacy and falsity membership hesitant degrees with the following limits: ${\tilde{\gamma }_{ij}}=[{\gamma _{ij}^{L}},{\gamma _{ij}^{U}}]\subseteq [0,1]$, ${\tilde{\delta }_{ij}}=[{\delta _{ij}^{L}},{\delta _{ij}^{U}}]\subseteq [0,1]$, ${\tilde{\eta }_{ij}}=[{\eta _{ij}^{L}},{\eta _{ij}^{U}}]\subseteq [0,1]$ and $0\leqslant \sup {\tilde{\gamma }^{+}}+\sup {\tilde{\delta }^{+}}+\sup {\tilde{\eta }^{+}}\leqslant 3$, where ${\tilde{\gamma }^{+}}={\textstyle\bigcup _{\tilde{\gamma }\in \tilde{t}(x)}}\max \{{\gamma _{ij}^{U}}\}$, ${\tilde{\delta }^{+}}={\textstyle\bigcup _{\tilde{\delta }\in \tilde{i}(x)}}\max \{{\delta _{ij}^{U}}\}$, and ${\tilde{\eta }^{+}}={\textstyle\bigcup _{\tilde{\eta }\in \tilde{f}(x)}}\max \{{\eta _{ij}^{U}}\}$. Then we construct an interval neutrosophic hesitant fuzzy decision matrix N as
(30)
\[ N=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}{\tilde{n}_{11}}& {\tilde{n}_{12}}& \cdots & {\tilde{n}_{1n}}\\ {} {\tilde{n}_{21}}& {\tilde{n}_{22}}& \cdots & {\tilde{n}_{2n}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {\tilde{n}_{m1}}& {\tilde{n}_{m2}}& \cdots & {\tilde{n}_{mn}}\end{array}\right].\]
We solve the interval neutrosophic hesitant fuzzy MADM considering the following two cases:
Case 1b. Completely known attribute weights.
Let $\Lambda =({\lambda _{1}},{\lambda _{2}},\dots ,{\lambda _{n}})$ be the weight vector such that ${\lambda _{j}}\in [0,1]$ and ${\textstyle\sum _{j=1}^{n}}{\lambda _{j}}=1$.
We now consider the following steps required for the proposed strategy:
  • Step 1. Determine the interval neutrosophic hesitant fuzzy positive ideal solution (INHFPIS) ${\tilde{A}^{+}}$ and the interval neutrosophic hesitant fuzzy negative ideal solution (INHFNIS) ${\tilde{A}^{-}}$ of alternatives from the decision matrix $N={({\tilde{n}_{ij}})_{m\times n}}$ with the equations:
    • ∙ For benefit type attributes:
      (31)
      \[\begin{aligned}{}{\tilde{A}^{+}}=& \big({\tilde{A}_{1}^{+}},{\tilde{A}_{2}^{+}},\dots ,{\tilde{A}_{n}^{+}}\big)\\ {} =& \Big\{\Big\langle \underset{i}{\max }\{{\tilde{n}_{i1}}\},\underset{i}{\max }\{{\tilde{n}_{i2}}\},\dots ,\underset{i}{\max }\{{\tilde{n}_{in}}\}\Big\rangle \Big\},\end{aligned}\]
      (32)
      \[\begin{aligned}{}{\tilde{A}^{-}}=& \big({\tilde{A}_{1}^{-}},{\tilde{A}_{2}^{-}},\dots ,{\tilde{A}_{n}^{-}}\big)\\ {} =& \Big\{\Big\langle \underset{i}{\min }\{{\tilde{n}_{i1}}\},\underset{i}{\min }\{{\tilde{n}_{i2}}\},\dots ,\underset{i}{\min }\{{\tilde{n}_{in}}\}\Big\rangle \Big\}.\end{aligned}\]
    • ∙ For cost type attributes:
      (33)
      \[\begin{aligned}{}{\tilde{A}^{+}}=& \big({\tilde{A}_{1}^{+}},{\tilde{A}_{2}^{+}},\dots ,{\tilde{A}_{n}^{+}}\big)\\ {} =& \Big\{\Big\langle \underset{i}{\min }\{{\tilde{n}_{i1}}\},\underset{i}{\min }\{{\tilde{n}_{i2}}\},\dots ,\underset{i}{\min }\{{\tilde{n}_{in}}\}\Big\rangle \Big\},\end{aligned}\]
      (34)
      \[\begin{aligned}{}{\tilde{A}^{-}}=& \big({\tilde{A}_{1}^{-}},{\tilde{A}_{2}^{-}},\dots ,{\tilde{A}_{n}^{-}}\big)\\ {} =& \Big\{\Big\langle \underset{i}{\max }\{{\tilde{n}_{i1}}\},\underset{i}{\max }\{{\tilde{n}_{i2}}\},\dots ,\underset{i}{\max }\{{\tilde{n}_{in}}\}\Big\rangle \Big\}.\end{aligned}\]
    We compare attribute values ${\tilde{n}_{ij}}$ by using score, accuracy and certainty values of INHFSs defined in Definition 9.
  • Step 2. Calculate the grey relational coefficient of each alternative from ${\tilde{A}^{+}}$ and ${\tilde{A}^{-}}$ with the equations:
    (35)
    \[ {\chi _{ij}^{+}}=\frac{{\min _{1\leqslant i\leqslant m}}{\min _{1\leqslant j\leqslant n}}d({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})+\beta {\max _{1\leqslant i\leqslant m}}{\max _{1\leqslant j\leqslant n}}d({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})}{d({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})+\beta {\max _{1\leqslant i\leqslant m}}{\max _{1\leqslant j\leqslant n}}d({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})},\]
    (36)
    \[ {\chi _{ij}^{-}}=\frac{{\min _{1\leqslant i\leqslant m}}{\min _{1\leqslant j\leqslant n}}d({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}})+\beta {\max _{1\leqslant i\leqslant m}}{\max _{1\leqslant j\leqslant n}}d({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}})}{d({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}})+\beta {\max _{1\leqslant i\leqslant m}}{\max _{1\leqslant j\leqslant n}}d({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}})},\]
    where the identification coefficient $\beta =0.5$.
  • Step 3. Calculate the degree of grey relational coefficient of each alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ from ${\tilde{A}^{+}}$ and ${\tilde{A}^{-}}$ with the following equations, respectively:
    (37)
    \[\begin{aligned}{}{\chi _{i}^{+}}=& {\sum \limits_{j=1}^{n}}{\lambda _{j}}{\chi _{ij}^{+}},\end{aligned}\]
    (38)
    \[\begin{aligned}{}{\chi _{i}^{-}}=& {\sum \limits_{j=1}^{n}}{\lambda _{j}}{\chi _{ij}^{-}}.\end{aligned}\]
  • Step 4. Determine the relative closeness coefficient ${\chi _{i}}$ for each alternative ${A_{i}}$ with respect to the positive ideal solution ${\tilde{A}^{+}}$:
    (39)
    \[ {\chi _{i}}=\frac{{\chi _{i}^{+}}}{{\chi _{i}^{+}}+{\chi _{i}^{-}}}\hspace{1em}\text{for}\hspace{2.5pt}i=1,2,\dots ,m.\]
  • Step 5. Rank the alternatives according to the descending order of relative closeness coefficient values of alternatives and choose the best alternative.
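For a completely known weight vector (Case 1b), Steps 2 to 5 reduce to a few array operations. The Python sketch below uses hypothetical Hamming-distance matrices in place of $d({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})$ and $d({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}})$, together with a hypothetical weight vector and $\beta =0.5$:

```python
# Hypothetical Hamming distances of three alternatives to the PIS and the NIS,
# and an assumed known weight vector Lambda; beta is the identification coefficient.
d_pos = [[0.20, 0.10, 0.05], [0.00, 0.15, 0.10], [0.25, 0.05, 0.00]]
d_neg = [[0.05, 0.15, 0.20], [0.25, 0.10, 0.15], [0.00, 0.20, 0.25]]
weights = [0.40, 0.35, 0.25]
beta = 0.5

def grey_coeffs(d):
    """Eqs. (35)-(36): grey relational coefficients for a distance matrix d."""
    lo = min(min(row) for row in d)          # min over i and j of d_ij
    hi = max(max(row) for row in d)          # max over i and j of d_ij
    return [[(lo + beta * hi) / (dij + beta * hi) for dij in row] for row in d]

chi_pos = grey_coeffs(d_pos)
chi_neg = grey_coeffs(d_neg)
# Eqs. (37)-(38): weighted degree of grey relational coefficient per alternative.
deg_pos = [sum(w * c for w, c in zip(weights, row)) for row in chi_pos]
deg_neg = [sum(w * c for w, c in zip(weights, row)) for row in chi_neg]
# Eq. (39): relative closeness to the positive ideal solution.
closeness = [p / (p + n) for p, n in zip(deg_pos, deg_neg)]
ranking = sorted(range(len(closeness)), key=lambda i: -closeness[i])
print("closeness:", [round(c, 4) for c in closeness])
print("ranking:", ["A%d" % (i + 1) for i in ranking])
```

With these illustrative numbers the procedure ranks ${A_{2}}$ first; the sketch is a template, not the computation of any example in this paper.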
Case 2b. Incompletely known attribute weights.
If the information about the attribute weights is incomplete, then we can follow the same procedures discussed in Case 2a of Section 4. Then we determine the similarity measure between ${\tilde{n}_{ij}}$ and ${\tilde{A}_{j}^{+}}$ as
(40)
\[ s\big({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}}\big)=1-\frac{d({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})}{{\textstyle\textstyle\sum _{j=1}^{n}}d({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})},\]
where $d({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})$ $(i=1,2,\dots ,m)$ denotes the Hamming distance between ${\tilde{n}_{ij}}$ and ${\tilde{A}_{j}^{+}}$. Similarly, we obtain the similarity measure between ${\tilde{n}_{ij}}$ and ${\tilde{A}_{j}^{-}}$ as
(41)
\[ s\big({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}}\big)=1-\frac{d({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}})}{{\textstyle\textstyle\sum _{j=1}^{n}}d({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}})},\]
where $d({\tilde{n}_{ij}}$, ${\tilde{A}_{j}^{-}})$ $(i=1,2,\dots ,m)$ denotes the Hamming distance between ${\tilde{n}_{ij}}$ and ${\tilde{A}_{j}^{-}}$. The weighted similarity measures of the alternative ${A_{i}}$ $(i=1,2,\dots ,m)$ to ${\tilde{A}^{+}}=({\tilde{A}_{1}^{+}},{\tilde{A}_{2}^{+}},\dots ,{\tilde{A}_{n}^{+}})$ and ${\tilde{A}^{-}}=({\tilde{A}_{1}^{-}},{\tilde{A}_{2}^{-}},\dots ,{\tilde{A}_{n}^{-}})$ are determined as follows:
(42)
\[\begin{aligned}{}{s_{i}^{+}}=& {\sum \limits_{j=1}^{n}}{\lambda _{j}}s\big({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}}\big),\hspace{1em}i=1,2,\dots ,m,\end{aligned}\]
(43)
\[\begin{aligned}{}{s_{i}^{-}}=& {\sum \limits_{j=1}^{n}}{\lambda _{j}}s\big({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}}\big),\hspace{1em}i=1,2,\dots ,m.\end{aligned}\]
We find a suitable weight vector $\Lambda =({\lambda _{1}},{\lambda _{2}},\dots ,{\lambda _{n}})$ that makes all the similarities $({s_{1}^{+}},{s_{2}^{+}},\dots ,{s_{m}^{+}})$ to ideal solution $({\tilde{A}_{1}^{+}},{\tilde{A}_{2}^{+}},\dots ,{\tilde{A}_{n}^{+}})$ as large as possible and the similarities $({s_{1}^{-}},{s_{2}^{-}},\dots ,{s_{m}^{-}})$ to ideal solution $({\tilde{A}_{1}^{-}},{\tilde{A}_{2}^{-}},\dots ,{\tilde{A}_{n}^{-}})$ as small as possible with the condition $\Lambda \in H$. Therefore, we can set the following multiple objective non-linear optimization model to determine the weight vector:
(44)
\[ \text{Model-5}\hspace{2.5pt}\left\{\begin{array}{l}\max {s_{i}}(\Lambda )=\frac{{\textstyle\textstyle\sum _{j=1}^{n}}{\lambda _{j}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})}{{\textstyle\textstyle\sum _{j=1}^{n}}{\lambda _{j}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})+{\textstyle\textstyle\sum _{j=1}^{n}}{\lambda _{j}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}})},\hspace{1em}i=1,2,\dots ,m,\\ {} \text{subject to}\hspace{2.5pt}\Lambda \in H.\end{array}\right.\]
Since each alternative is non-inferior, we construct the following single objective optimization model by aggregating the above multiple objective optimization models with equal weights:
(45)
\[ \text{Model-6}\hspace{2.5pt}\left\{\begin{array}{l}\max s(\Lambda )={\textstyle\sum \limits_{i=1}^{m}}{s_{i}}(\Lambda )={\textstyle\sum \limits_{i=1}^{m}}\frac{{\textstyle\textstyle\sum _{j=1}^{n}}{\lambda _{j}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})}{{\textstyle\textstyle\sum _{j=1}^{n}}{\lambda _{j}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})+{\textstyle\textstyle\sum _{j=1}^{n}}{\lambda _{j}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{-}})},\\ {} \text{subject to}\hspace{2.5pt}\Lambda \in H,\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{\lambda _{j}}=1,\hspace{2.5pt}{\lambda _{j}}\geqslant 0,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
By solving Model-6, we obtain the optimal solution $\Lambda =({\lambda _{1}},{\lambda _{2}},\dots ,{\lambda _{n}})$ that can be used as the weight vector of the attributes. Then we follow Step 3 to Step 5 to determine an optimal alternative.
Case 3b. Completely unknown attribute weights.
In this case we develop the following non-linear programming model to determine the attribute weights.
(46)
\[ \text{Model-7}\hspace{2.5pt}\left\{\begin{array}{l@{\hskip4.0pt}l}\max {s_{i}^{+}}(\Lambda )={\textstyle\sum \limits_{j=1}^{n}}{\lambda _{j}}s\big({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}}\big),\hspace{1em}& i=1,2,\dots ,m,\\ {} \text{subject to}\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{\lambda _{j}^{2}}=1,\hspace{1em}& {\lambda _{j}}\geqslant 0,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
Similarly, we can aggregate all of the above multiple objective optimization models into the following single objective optimization model:
(47)
\[ \text{Model-8}\hspace{2.5pt}\left\{\begin{array}{l}\max {s^{+}}(\Lambda )={\textstyle\sum \limits_{i=1}^{m}}{s_{i}^{+}}(\Lambda )={\textstyle\sum \limits_{i=1}^{m}}{\textstyle\sum \limits_{j=1}^{n}}{\lambda _{j}}s\big({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}}\big),\\ {} \text{subject to}\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{\lambda _{j}^{2}}=1,\hspace{1em}{\lambda _{j}}\geqslant 0,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
To solve Model-8, we consider the following Lagrange function:
(48)
\[ L(\Lambda ,\psi )={\sum \limits_{i=1}^{m}}{\sum \limits_{j=1}^{n}}{\lambda _{j}}s\big({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}}\big)+\frac{\psi }{2}\bigg({\sum \limits_{j=1}^{n}}{\lambda _{j}^{2}}-1\bigg),\]
where the real number ψ is the Lagrange multiplier. Then differentiating partially L with respect to ${\lambda _{j}}$ and ψ, we have
(49)
\[\begin{aligned}{}\frac{\partial L}{\partial {\lambda _{j}}}=& {\sum \limits_{i=1}^{m}}s\big({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}}\big)+\psi {\lambda _{j}}=0,\end{aligned}\]
(50)
\[\begin{aligned}{}\frac{\partial L}{\partial \psi }=& \frac{1}{2}\bigg({\sum \limits_{j=1}^{n}}{\lambda _{j}^{2}}-1\bigg)=0.\end{aligned}\]
It follows from equation (49) that
(51)
\[ {\lambda _{j}}=\frac{-{\textstyle\textstyle\sum _{i=1}^{m}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})}{\psi },\hspace{1em}j=1,2,\dots ,n.\]
Putting this value of ${\lambda _{j}}$ in equation (50), we have
(52)
\[ \psi =-\sqrt{{\sum \limits_{j=1}^{n}}\bigg({\sum \limits_{i=1}^{m}}s{\big({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}}\big)\bigg)^{2}}},\]
where $\psi <0$, and ${\textstyle\sum _{i=1}^{m}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})$ denotes the sum of the similarities of all alternatives with respect to the j-th attribute. Combining Eq. (51) and Eq. (52), we obtain
(53)
\[ {\lambda _{j}}=\frac{{\textstyle\textstyle\sum _{i=1}^{m}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})}{\sqrt{{\textstyle\textstyle\sum _{j=1}^{n}}{\big({\textstyle\textstyle\sum _{i=1}^{m}}s({\tilde{n}_{ij}},{\tilde{A}_{j}^{+}})\big)^{2}}}},\hspace{1em}j=1,2,\dots ,n.\]
Normalizing ${\lambda _{j}}$ $(j=1,2,\dots ,n)$ so that the weights sum to one, we obtain the normalized weight of the j-th attribute:
(54)
\[ {\bar{\lambda }_{j}}=\frac{{\lambda _{j}}}{{\textstyle\textstyle\sum _{j=1}^{n}}{\lambda _{j}}}.\]
Consequently, we obtain the weight vector of the attribute as $\bar{\Lambda }=({\bar{\lambda }_{1}},{\bar{\lambda }_{2}},\dots ,{\bar{\lambda }_{n}})$. With this weight vector, we follow Step 3 to Step 5 to find out an optimal alternative.
We briefly present the steps of the proposed strategies in Fig. 1.
Fig. 1
The schematic diagram of the proposed strategy.

6 Illustrative Examples

In this section, we consider two examples: one for MADM with single valued neutrosophic hesitant fuzzy sets and the other for MADM with interval neutrosophic hesitant fuzzy sets.

6.1 Example 1

We consider an example, adapted from Ye (2015b), to illustrate the applicability of the extended GRA method for MADM with incomplete information. Assume that an investment company wants to invest a sum of money in the following four possible alternatives (companies):
  • the car company $({A_{1}})$,
  • the food company $({A_{2}})$,
  • the computer company $({A_{3}})$,
  • the arms company $({A_{4}})$.
While making a decision, the company considers the following attributes:
  • the risk analysis $({C_{1}})$,
  • the growth analysis $({C_{2}})$,
  • the environmental impact analysis $({C_{3}})$.
Table 1
Single valued neutrosophic hesitant fuzzy decision matrix.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$
${A_{1}}$ $\{\{0.3,0.4,0.5\},\{0.1\},\{0.3,0.4\}\}$ $\{\{0.5,0.6\},\{0.2,0.3\},\{0.3,0.4\}\}$ $\{\{0.2,0.3\},\{0.1,0.2\},\{0.5,0.6\}\}$
${A_{2}}$ $\{\{0.6,0.7\},\{0.1,0.2\},\{0.2,0.3\}\}$ $\{\{0.6,0.7\},\{0.1\},\{0.3\}\}$ $\{\{0.6,0.7\},\{0.1,0.2\},\{0.1,0.2\}\}$
${A_{3}}$ $\{\{0.5,0.6\},\{0.4\},\{0.2,0.3\}\}$ $\{\{0.6\},\{0.3\},\{0.4\}\}$ $\{\{0.5,0.6\},\{0.1\},\{0.3\}\}$
${A_{4}}$ $\{\{0.7,0.8\},\{0.1\},\{0.1,0.2\}\}$ $\{\{0.6,0.7\},\{0.1\},\{0.2\}\}$ $\{\{0.3,0.5\},\{0.2\},\{0.1,0.2,0.3\}\}$
We assume that the rating values of the alternatives ${A_{i}}$ $(i=1,2,3,4)$ are represented by the neutrosophic hesitant fuzzy decision matrix $R={({r_{ij}})_{4\times 3}}$ (see Table 1). The information of the attribute weights is incompletely known, and the known weight information is given as follows:
(55)
\[ H=\left\{0.30\leqslant {w_{1}}\leqslant 0.40,0.20\leqslant {w_{2}}\leqslant 0.30,0.35\leqslant {w_{3}}\leqslant 0.45,{\sum \limits_{j=1}^{3}}{w_{j}}=1\right\}.\]
  • Step 1. Using Eqs. (6) and (9), we determine ${A^{+}}$ and ${A^{-}}$ by the equations:
    (56)
    \[\begin{aligned}{}{A^{+}}=& \big[{A_{1}^{+}},{A_{2}^{+}},{A_{3}^{+}}\big]\\ {} =& \left[\substack{\{\{0.7,0.8\},\{0.1\},\{0.1,0.2\}\},\{\{0.6,0.7\},\{0.1\},\{0.2\}\},\\ {} \{\{0.6,0.7\},\{0.1,0.2\},\{0.1,0.2\}\}}\right],\end{aligned}\]
    (57)
    \[\begin{aligned}{}{A^{-}}=& \big[{A_{1}^{-}},{A_{2}^{-}},{A_{3}^{-}}\big]\\ {} =& \left[\substack{\{\{0.5,0.6\},\{0.4\},\{0.2,0.3\}\},\{\{0.6\},\{0.3\},\{0.4\}\},\\ {} \{\{0.2,0.3\},\{0.1,0.2\},\{0.5,0.6\}\}}\right].\end{aligned}\]
  • Step 2. Let ${d_{i}}(j)$ denote the deviation sequence of the rating values of the alternative ${A_{i}}$ from ${A^{+}}$, defined as follows:
    (58)
    \[ \hspace{-22.76228pt}{d_{i}}(j)=\big\{d\big({A_{1}^{+}},{r_{i1}}\big),d\big({A_{2}^{+}},{r_{i2}}\big),d\big({A_{3}^{+}},{r_{i3}}\big)\big\}\hspace{1em}\text{for}\hspace{2.5pt}i=1,2,3,4;\hspace{2.5pt}j=1,2,3.\]
    We calculate the deviation sequences of alternatives ${A_{i}}$ $(i=1,2,3,4)$ from ${A^{+}}$ in Table 2.
    Table 2
    Distance of alternatives from SVNHFPIS.
    ${C_{1}}$ ${C_{2}}$ ${C_{3}}$ ${\min _{j}}{d_{i}}(j)$ ${\max _{j}}{d_{i}}(j)$
    ${d_{1}}(j)$ 0.1833 0.1333 0.2667 0.1333 0.2667
    ${d_{2}}(j)$ 0.0833 0.0333 0.0000 0.0000 0.0833
    ${d_{3}}(j)$ 0.2000 0.1500 0.1000 0.1000 0.2000
    ${d_{4}}(j)$ 0.0000 0.0000 0.1167 0.0000 0.1167
    ${\min _{i}}{\min _{j}}{d_{i}}(j)$ 0.0000
    ${\max _{i}}{\max _{j}}{d_{i}}(j)$ 0.2667
    Similarly, we obtain the deviation sequences of alternatives ${A_{i}}$ $(i=1,2,3,4)$ from ${A^{-}}$ in Table 3.
    Table 3
    Distance of alternatives from SVNHFNIS.
    ${C_{1}}$ ${C_{2}}$ ${C_{3}}$ ${\min _{j}}{d_{i}}(j)$ ${\max _{j}}{d_{i}}(j)$
    ${d_{1}}(j)$ 0.1833 0.0500 0.000 0.0000 0.1833
    ${d_{2}}(j)$ 0.1167 0.1167 0.2667 0.1167 0.2667
    ${d_{3}}(j)$ 0.0000 0.0000 0.2000 0.0000 0.2000
    ${d_{4}}(j)$ 0.2000 0.1500 0.1833 0.1500 0.2000
    ${\min _{i}}{\min _{j}}{d_{i}}(j)$ 0.0000
    ${\max _{i}}{\max _{j}}{d_{i}}(j)$ 0.2667
    Using Eq. (10), we determine grey relational coefficient ${\xi _{ij}^{+}}$ ($i=1,2,3,4$; $j=1,2,3$) of each alternative from ${A^{+}}$ as
    (59)
    \[ {\xi _{ij}^{+}}=\left(\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.4210& 0.5000& 0.3332\\ {} 0.6154& 0.8001& 1.0000\\ {} 0.4000& 0.4705& 0.5714\\ {} 1.0000& 1.0000& 0.5332\end{array}\right).\]
    Similarly, following Eq. (11), we determine the grey relational coefficient ${\xi _{ij}^{-}}$ of each alternative from ${A^{-}}$ as
    (60)
    \[ {\xi _{ij}^{-}}=\left(\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.4210& 0.7272& 1.0000\\ {} 0.5332& 0.5332& 0.3332\\ {} 1.0000& 1.0000& 0.4000\\ {} 0.4000& 0.4705& 0.4210\end{array}\right).\]
    Before going to the next step, we consider the following two cases:
  • Case a. Incompletely known attribute weights.
  • Step 3a. Using Model-2, we construct the following single-objective programming model:
    (61)
    \[ \left\{\begin{array}{l}\max \{\xi \}=\left[\substack{\frac{0.6857{w_{1}}+0.7715{w_{2}}+0.5428{w_{3}}}{0.9000{w_{1}}+1.57725{w_{2}}+1.5428{w_{3}}}\\ {} \frac{0.2856{w_{1}}+0.7145{w_{2}}+1.0000{w_{3}}}{1.0522{w_{1}}+1.48115{w_{2}}+1.4667{w_{3}}}\\ {} \frac{0.5556{w_{1}}+0.6667{w_{2}}+0.7778{w_{3}}}{1.5556{w_{1}}+1.66675{w_{2}}+0.7778{w_{3}}}\\ {} \frac{1.0000{w_{1}}+1.0000{w_{2}}+0.0000{w_{3}}}{1.6250{w_{1}}+1.71875{w_{2}}+0.6563{w_{3}}}}\right],\\ {} \text{subject to}\hspace{2.5pt}w\in H,\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{w_{j}}=1,\hspace{2.5pt}{w_{j}}\geqslant 0,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
    We obtain the optimal weight vector $w=(0.35,0.20,0.45)$ by solving Eq. (61) with LINGO 13. Using Eqs. (12) and (13), we calculate the degree of grey relational coefficient of each alternative from ${A^{+}}$ and ${A^{-}}$, respectively, as
    (62)
    \[ {\xi _{1}^{+}}=0.3973,\hspace{1em}{\xi _{2}^{+}}=0.8260,\hspace{1em}{\xi _{3}^{+}}=0.4912,\hspace{1em}{\xi _{4}^{+}}=0.7902;\]
    (63)
    \[ {\xi _{1}^{-}}=0.7428,\hspace{1em}{\xi _{2}^{-}}=0.4437,\hspace{1em}{\xi _{3}^{-}}=0.7300,\hspace{1em}{\xi _{4}^{-}}=0.4235.\]
  • Step 4a. Using Eq. (14), we determine the relative closeness coefficient ${\xi _{i}}$ $(i=1,2,3,4)$ for each alternative ${A_{i}}$ $(i=1,2,3,4)$ as
    (64)
    \[ {\xi _{1}}=0.3484,\hspace{1em}{\xi _{2}}=0.6508,\hspace{1em}{\xi _{3}}=0.4022,\hspace{1em}{\xi _{4}}=0.6510.\]
  • Step 5a. According to the relative closeness coefficient of ${\xi _{i}}$ $(i=1,2,3,4)$, the ranking order of alternatives ${A_{i}}$ $(i=1,2,3,4)$ is ${\xi _{4}}\succ {\xi _{2}}\succ {\xi _{3}}\succ {\xi _{1}}$. The order indicates that ${A_{4}}$ is the best alternative.
  • Case b. Completely unknown attribute weights.
  • Step 3b. Using Eqs. (28) and (29), we obtain the normalized weight vector of the attributes as $w=(0.3160,0.3940,0.2900)$. The degrees of grey relational coefficient of each alternative from ${A^{+}}$ and ${A^{-}}$ are
    (65)
    \[ {\xi _{1}^{+}}=0.4266,\hspace{1em}{\xi _{2}^{+}}=0.5387,\hspace{1em}{\xi _{3}^{+}}=0.4775,\hspace{1em}{\xi _{4}^{+}}=0.8648;\]
    (66)
    \[ {\xi _{1}^{-}}=0.7095,\hspace{1em}{\xi _{2}^{-}}=0.4752,\hspace{1em}{\xi _{3}^{-}}=0.8260,\hspace{1em}{\xi _{4}^{-}}=0.4339.\]
  • Step 4b. Using Eq. (14), we determine the relative closeness coefficient ${\xi _{i}}$ $(i=1,2,3,4)$ for each alternative ${A_{i}}$ $(i=1,2,3,4)$ as
    (67)
    \[ {\xi _{1}}=0.3755,\hspace{1em}{\xi _{2}}=0.5313,\hspace{1em}{\xi _{3}}=0.3663,\hspace{1em}{\xi _{4}}=0.6659.\]
  • Step 5b. According to the relative closeness coefficient of ${\xi _{i}}$ $(i=1,2,3,4)$, the ranking order of alternatives is ${\xi _{4}}\succ {\xi _{2}}\succ {\xi _{3}}\succ {\xi _{1}}$. The order indicates that ${A_{4}}$ is the best alternative.
    A pairwise comparison of the relative closeness coefficients of the alternatives is presented in Fig. 2.
Fig. 2
Pairwise comparison of each alternative.
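The weight vector of Case b can be reproduced directly from the distances in Table 2. Assuming the similarity measure $S({r_{ij}},{A_{j}^{+}})=1-d({r_{ij}},{A_{j}^{+}})/{\textstyle\sum _{j=1}^{n}}d({r_{ij}},{A_{j}^{+}})$ (the single valued counterpart of Eq. (40)), the following Python sketch applies Eqs. (28) and (29):

```python
from math import sqrt

# Distances of the four alternatives to the SVNHFPIS (Table 2).
d = [[0.1833, 0.1333, 0.2667],
     [0.0833, 0.0333, 0.0000],
     [0.2000, 0.1500, 0.1000],
     [0.0000, 0.0000, 0.1167]]

# Similarity of each rating to the PIS: s_ij = 1 - d_ij / sum_j d_ij.
S = [[1 - dij / sum(row) for dij in row] for row in d]

# Eq. (28): unnormalized weights from the column sums of S.
col = [sum(S[i][j] for i in range(4)) for j in range(3)]
norm = sqrt(sum(c * c for c in col))
w = [c / norm for c in col]

# Eq. (29): normalize so that the weights sum to one.
total = sum(w)
w_bar = [wj / total for wj in w]

print([round(x, 4) for x in w_bar])  # ≈ the reported (0.3160, 0.3940, 0.2900)
```

The computed vector agrees with the weight vector used in Step 3b up to rounding of the tabulated distances.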

6.2 Example 2

In this section, we consider the same problem presented in Example 1, but we assume that the rating values of the alternatives ${A_{i}}$ $(i=1,2,3,4)$ are expressed by interval neutrosophic hesitant fuzzy sets. The decision matrix $U={({u_{ij}})_{4\times 3}}$ is presented in Table 4.
Table 4
Interval neutrosophic hesitant fuzzy decision matrix.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$
${A_{1}}$ $\left\{\substack{\{[0.3,0.4],[0.4,0.5]\},\\ {} \{[0.1,0.2]\},\\ {} \{[0.3,0.4]\}}\right\}$ $\left\{\substack{\{[0.4,0.5],[0.5,0.6]\},\\ {} \{[0.2,0.3]\},\\ {} \{[0.3,0.3],[0.3,0.4]\}}\right\}$ $\left\{\substack{\{[0.3,0.5]\},\\ {} \{[0.2,0.3]\},\\ {} \{[0.1,0.2],[0.3,0.3]\}}\right\}$
${A_{2}}$ $\left\{\substack{\{[0.6,0.7]\},\\ {} \{[0.1,0.2]\},\\ {} \{[0.1,0.2],[0.2,0.3]\}}\right\}$ $\left\{\substack{\{[0.6,0.7]\},\\ {} \{[0.1,0.1]\},\\ {} \{[0.2,0.3]\}}\right\}$ $\left\{\substack{\{[0.6,0.7]\},\\ {} \{[0.1,0.2]\},\\ {} \{[0.1,0.2]\}}\right\}$
${A_{3}}$ $\left\{\substack{\{[0.3,0.4],[0.5,0.6]\},\\ {} \{[0.2,0.4]\},\\ {} \{[0.2,0.3]\}}\right\}$ $\left\{\substack{\{[0.6,0.7]\},\\ {} \{[0.0,0.1]\},\\ {} \{[0.2,0.2]\}}\right\}$ $\left\{\substack{\{[0.5,0.6]\},\\ {} \{[0.1,0.2],[0.2,0.3]\},\\ {} \{[0.2,0.3]\}}\right\}$
${A_{4}}$ $\left\{\substack{\{[0.7,0.8]\},\\ {} \{[0.0,0.1]\},\\ {} \{[0.1,0.2]\}}\right\}$ $\left\{\substack{\{[0.5,0.6]\},\\ {} \{[0.2,0.3]\},\\ {} \{[0.3,0.4]\}}\right\}$ $\left\{\substack{\{[0.2,0.3]\},\\ {} \{[0.1,0.2]\},\\ {} \{[0.4,0.5],[0.5,0.6]\}}\right\}$
The information of the attribute weights is partially known and the known weight information is given as follows:
(68)
\[ H=\Bigg\{0.30\leqslant {\lambda _{1}}\leqslant 0.40,0.20\leqslant {\lambda _{2}}\leqslant 0.30,\hspace{2.5pt}0.35\leqslant {\lambda _{3}}\leqslant 0.45,{\sum \limits_{j=1}^{3}}{\lambda _{j}}=1\Bigg\}.\]
We now consider the following steps to the problem.
  • Step 1. Similar to Example 1, we can determine positive ideal solution ${\tilde{A}^{+}}$ and the negative ideal solution ${\tilde{A}^{-}}$ by Eq. (31) and Eq. (32), respectively:
    (69)
    \[\begin{aligned}{}{\tilde{A}^{+}}=& \big[{\tilde{A}_{1}^{+}},{\tilde{A}_{2}^{+}},{\tilde{A}_{3}^{+}}\big]\\ {} =& \left[\substack{\{\{[0.7,0.8]\},\{[0.0,0.1]\},\{[0.1,0.2]\}\},\\ {} \{\{[0.6,0.7]\},\{[0.0,0.1]\},\{[0.2,0.2]\}\},\\ {} \{\{[0.6,0.7]\},\{[0.1,0.2]\},\{[0.1,0.2]\}\}}\right],\end{aligned}\]
    (70)
    \[\begin{aligned}{}{\tilde{A}^{-}}=& \big[{\tilde{A}_{1}^{-}},{\tilde{A}_{2}^{-}},{\tilde{A}_{3}^{-}}\big]\\ {} =& \left[\substack{\{\{[0.3,0.4],[0.4,0.5]\},\{[0.1,0.2]\},\{[0.3,0.4]\}\},\\ {} \{\{[0.4,0.5],[0.5,0.6]\},\{[0.2,0.3]\},\{[0.3,0.3],[0.3,0.4]\}\},\\ {} \{\{[0.2,0.3]\},\{[0.1,0.2]\},\{[0.4,0.5],[0.5,0.6]\}\}}\right].\end{aligned}\]
  • Step 2. The deviation sequences of alternatives from ${\tilde{A}^{+}}$ and ${\tilde{A}^{-}}$ are presented in Tables 5 and 6, respectively.
    Table 5
    Distance of alternatives from INHFPIS.
    ${C_{1}}$ ${C_{2}}$ ${C_{3}}$ ${\min _{j}}{d_{i}}(j)$ ${\max _{j}}{d_{i}}(j)$
    ${d_{1}}(j)$ 0.2167 0.1583 0.1417 0.1417 0.2167
    ${d_{2}}(j)$ 0.0833 0.0333 0.0000 0.0000 0.0833
    ${d_{3}}(j)$ 0.2167 0.0000 0.0833 0.0000 0.2167
    ${d_{4}}(j)$ 0.0000 0.1500 0.2500 0.0000 0.2500
    ${\min _{i}}{\min _{j}}{d_{i}}(j)$ 0.0000
    ${\max _{i}}{\max _{j}}{d_{i}}(j)$ 0.2500
    Table 6
    Distance of alternatives from INHFNIS.
    ${C_{1}}$ ${C_{2}}$ ${C_{3}}$ ${\min _{j}}{d_{i}}(j)$ ${\max _{j}}{d_{i}}(j)$
    ${d_{1}}(j)$ 0.0000 0.0000 0.1750 0.0000 0.1750
    ${d_{2}}(j)$ 0.1333 0.1250 0.2500 0.1250 0.2500
    ${d_{3}}(j)$ 0.1000 0.1583 0.2000 0.1000 0.2000
    ${d_{4}}(j)$ 0.2167 0.0250 0.0000 0.0000 0.2167
    ${\min _{i}}{\min _{j}}{d_{i}}(j)$ 0.0000
    ${\max _{i}}{\max _{j}}{d_{i}}(j)$ 0.2500
    Using Eq. (35), we determine the grey relational coefficient ${\chi _{ij}^{+}}$ ($i=1,2,3,4$; $j=1,2,3$) of each alternative from ${\tilde{A}^{+}}$ as
    (71)
    \[ {\chi _{ij}^{+}}=\left(\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.3611& 0.4363& 0.4637\\ {} 0.5952& 0.7863& 1.0000\\ {} 0.3611& 1.0000& 0.5952\\ {} 1.0000& 0.4495& 0.3288\end{array}\right).\]
    Similarly, we calculate the grey relational coefficient ${\chi _{ij}^{-}}$ of each alternative from ${\tilde{A}^{-}}$ by using Eq. (36) as
    (72)
    \[ {\chi _{ij}^{-}}=\left(\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}1.0000& 1.0000& 0.4118\\ {} 0.4789& 0.4949& 0.3288\\ {} 0.1091& 0.4362& 0.3798\\ {} 0.3611& 0.8305& 1.0000\end{array}\right).\]
  • Case A. Incompletely known attribute weights.
  • Step 3A. Following Model-6, we obtain the single-objective programming model:
    (73)
    \[ \left\{\begin{array}{l}\max \{\chi \}=\left[\substack{\frac{0.5806{w_{1}}+0.6936{w_{2}}+0.7258{w_{3}}}{1.5806{w_{1}}+1.69365{w_{2}}+0.7258{w_{3}}}\\ {} \frac{0.2856{w_{1}}+0.7144{w_{2}}+{w_{3}}}{1.0233{w_{1}}+1.46855{w_{2}}+1.5080{w_{3}}}\\ {} \frac{0.2777{w_{1}}+{w_{2}}+0.7233{w_{3}}}{1.5905{w_{1}}+1.65465{w_{2}}+1.2859{w_{3}}}\\ {} \frac{{w_{1}}+0.6250{w_{2}}+0.3750{w_{3}}}{1.1034{w_{1}}+1.52165{w_{2}}+1.3750{w_{3}}}}\right],\\ {} \text{subject to}\hspace{2.5pt}w\in H,\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{w_{j}}=1,\hspace{2.5pt}{w_{j}}\geqslant 0,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
    Solving Eq. (73), we obtain the optimal weight vector $w=(0.30,0.25,0.45)$. Using Eq. (37) and Eq. (38), we get the degree of grey relational coefficient of each alternative from the INHFPIS ${\tilde{A}^{+}}$ and the INHFNIS ${\tilde{A}^{-}}$, respectively:
    (74)
    \[ {\chi _{1}^{+}}=0.4261,\hspace{1em}{\chi _{2}^{+}}=0.8251,\hspace{1em}{\chi _{3}^{+}}=0.6261,\hspace{1em}{\chi _{4}^{+}}=0.5602;\]
    (75)
    \[ {\chi _{1}^{-}}=0.7353,\hspace{1em}{\chi _{2}^{-}}=0.4153,\hspace{1em}{\chi _{3}^{-}}=0.3127,\hspace{1em}{\chi _{4}^{-}}=0.7660.\]
  • Step 4A. Using Eq. (39), we calculate the relative closeness coefficient ${\chi _{i}}$ $(i=1,2,3,4)$ for each alternative ${A_{i}}$ $(i=1,2,3,4)$ with respect to INHFPIS as
    (76)
    \[ {\chi _{1}}=0.3755,\hspace{1em}{\chi _{2}}=0.6275,\hspace{1em}{\chi _{3}}=0.6669,\hspace{1em}{\chi _{4}}=0.4224.\]
  • Step 5A. We obtain the ranking order of alternatives ${A_{i}}$ $(i=1,2,3,4)$ according to the relative closeness coefficient of ${\chi _{i}}$ $(i=1,2,3,4)$ as
    \[ {\chi _{3}}\succ {\chi _{2}}\succ {\chi _{4}}\succ {\chi _{1}}.\]
    The ranking order indicates that ${A_{3}}$ is the best alternative.
  • Case B. Completely unknown attribute weights.
  • Step 3B. Using Eq. (53) and Eq. (54), we obtain the normalized weight vector of the attributes as $\Lambda =(0.2680,0.3791,0.3529)$. The degrees of grey relational coefficient of each alternative from ${\tilde{A}^{+}}$ and ${\tilde{A}^{-}}$ are
    (77)
    \[ {\chi _{1}^{+}}=0.4258,\hspace{1em}{\chi _{2}^{+}}=0.8105,\hspace{1em}{\chi _{3}^{+}}=0.6859,\hspace{1em}{\chi _{4}^{+}}=0.5544;\]
    (78)
    \[ {\chi _{1}^{-}}=0.7924,\hspace{1em}{\chi _{2}^{-}}=0.4339,\hspace{1em}{\chi _{3}^{-}}=0.3286,\hspace{1em}{\chi _{4}^{-}}=0.7645.\]
  • Step 4B. We determine the relative closeness coefficient ${\chi _{i}}$ $(i=1,2,3,4)$ for each alternative ${A_{i}}$ $(i=1,2,3,4)$ by using Eq. (39) as
    (79)
    \[ {\chi _{1}}=0.3495,\hspace{1em}{\chi _{2}}=0.6513,\hspace{1em}{\chi _{3}}=0.6761,\hspace{1em}{\chi _{4}}=0.4203.\]
  • Step 5B. According to the relative closeness coefficient of ${\chi _{i}}$ $(i=1,2,3,4)$, the ranking order of alternatives is ${\chi _{3}}\succ {\chi _{2}}\succ {\chi _{4}}\succ {\chi _{1}}$. The order indicates that ${A_{3}}$ is the best alternative.
    Following Case A and Case B, we compare the relative closeness coefficients of the alternatives pairwise in Fig. 3.
info1218_g003.jpg
Fig. 3
Pairwise comparison of each alternative.
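The Case B computations above (Steps 3B–5B) can be sketched in a few lines of code. This is a minimal sketch, assuming the relative closeness coefficient takes the common GRA form ${\chi _{i}}={\chi _{i}^{+}}/({\chi _{i}^{+}}+{\chi _{i}^{-}})$, which reproduces the values in Eq. (79) up to rounding; the grey relational degrees are taken directly from Eqs. (77)–(78):

```python
# Relative closeness coefficients for Case B (completely unknown weights),
# starting from the grey relational degrees of each alternative with
# respect to the positive ideal (chi_plus) and negative ideal (chi_minus).
# The closeness form chi = chi+ / (chi+ + chi-) is an assumption of this
# sketch (a standard GRA choice), not a verbatim copy of Eq. (14).

chi_plus  = [0.4258, 0.8105, 0.6859, 0.5544]   # Eq. (77)
chi_minus = [0.7924, 0.4339, 0.3286, 0.7645]   # Eq. (78)

chi = [p / (p + m) for p, m in zip(chi_plus, chi_minus)]

# Rank alternatives A1..A4 by decreasing relative closeness.
ranking = sorted(range(1, 5), key=lambda i: chi[i - 1], reverse=True)

print([round(c, 4) for c in chi])  # agrees with Eq. (79) to about 1e-4
print(ranking)                     # [3, 2, 4, 1], i.e. A3 > A2 > A4 > A1
```

The resulting order reproduces the ranking of Step 5B, ${A_{3}}\succ {A_{2}}\succ {A_{4}}\succ {A_{1}}$.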

6.3 Advantages of the Proposed Strategy

We now point out the advantages of the proposed strategy over existing strategies. First, we compare it with Ye's strategy (Ye, 2015b). For the same MADM problem under the SVNHF environment, both strategies produce a similar ranking result. However, the strategy of Ye (2015b) relies on two aggregation operators and assumes that the weight information of the attributes is known. Similarly, the strategies of Liu et al. (2016), Peng et al. (2015), Wang and Li (2018), and Li and Zhang (2018) offer aggregation operators for MADM under the SVNHFS environment, and all of them require the weight information of the attributes to be known in advance. Liu and Shi's strategy (Liu and Shi, 2015) likewise employs two aggregation operators for solving MAGDM under the INHFS environment with completely known attribute weights. In contrast, our proposed strategies are comprehensive: they cover all three cases, namely completely known, incompletely known, and completely unknown weight information of attributes. Therefore, the proposed approaches have a clear advantage over the existing strategies.
The main advantages of the paper are presented as follows:
  • 1. The proposed strategy is convenient because the preference values of the alternatives can be expressed by either SVNHFSs or INHFSs.
  • 2. The proposed strategy is flexible: it gives decision makers more choices for determining the importance of the attribute weights.
  • 3. The proposed strategy is presented in a logical way that makes it simple and understandable.
  • 4. The proposed strategy does not require any complex aggregation operators; it only uses the relative closeness coefficients obtained from GRA to rank the alternatives.
  • 5. No information loss occurs in the proposed strategy, because the neutrosophic hesitant fuzzy attribute values are not transformed into crisp values or intervals.
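Advantage 4 above rests on the grey relational coefficient. As a hedged illustration (assuming the Deng-style coefficient with distinguishing coefficient $\rho =0.5$, a common GRA default; the paper's own equations are not reproduced here), the coefficients for the distances from the SVNHFPIS in Table 2 can be computed as:

```python
# Grey relational coefficients from the distances of alternatives A1..A4
# (rows) to the positive ideal solution under attributes C1..C3 (columns),
# as tabulated in Table 2.  rho = 0.5 is an assumed distinguishing
# coefficient, the usual GRA default.

d = [
    [0.1833, 0.1333, 0.2667],   # A1
    [0.0833, 0.0333, 0.0000],   # A2
    [0.2000, 0.1500, 0.1000],   # A3
    [0.0000, 0.0000, 0.1167],   # A4
]
rho = 0.5
d_min = min(v for row in d for v in row)   # global minimum distance: 0.0000
d_max = max(v for row in d for v in row)   # global maximum distance: 0.2667

def grc(dij):
    """Deng grey relational coefficient of a single distance value."""
    return (d_min + rho * d_max) / (dij + rho * d_max)

xi = [[grc(v) for v in row] for row in d]
# A zero distance yields a coefficient of exactly 1; all values lie in (0, 1].
```

A weighted sum of each row with the attribute weights then gives the grey relational degree ${\chi _{i}^{+}}$ of that alternative.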

7 Concluding Remarks

Neutrosophic hesitant fuzzy set is a powerful mathematical tool for dealing with the imprecise, indeterminate, and incomplete information that exists in real MADM processes. In this paper, we have extended the grey relational analysis strategy for solving MADM in the single-valued neutrosophic hesitant fuzzy environment. We have developed an optimization model to determine the weights of attributes when the weight information is incompletely known, and another model for the case where the weight information is completely unknown. We have then ranked the alternatives based on the proposed NH-MADM strategy. Further, we have extended the NH-MADM strategy to the interval neutrosophic hesitant fuzzy environment. Finally, we have provided two illustrative examples, one for SVNHFSs and the other for INHFSs, to show the validity and effectiveness of the proposed strategies. We hope that the proposed strategies can be applied in many real applications where the information is neutrosophic hesitant in nature.

Acknowledgements

The authors are very grateful to the Editor-in-Chief, Prof. G. Dzemyda, and the anonymous reviewers for their insightful and constructive comments and suggestions that have led to an improved version of this paper.

References

 
Abdel-Basset, M., Mohamed, M., Zhou, Y., Hezam, I. (2017). Multi-criteria group decision making based on neutrosophic analytic hierarchy process. Journal of Intelligent & Fuzzy Systems, 33(6), 4055–4066.
 
Abdel-Basset, M., Mohamed, M., Smarandache, F. (2018a). An extension of neutrosophic AHP–SWOT analysis for strategic planning and decision-making. Symmetry, 10(4), 116. https://doi.org/10.3390/sym10040116.
 
Abdel-Basset, M., Manogaran, G., Gamal, A., Smarandache, F. (2018b). A hybrid approach of neutrosophic sets and DEMATEL method for developing supplier selection criteria. Design Automation for Embedded Systems, 22(3), 257–278.
 
Atanassov, K.T. (1986). Intuitionistic fuzzy sets. Fuzzy Sets and Systems, 20, 87–96.
 
Banerjee, D., Giri, B.C., Pramanik, S., Smarandache, F. (2017). GRA for multi attribute decision making in neutrosophic cubic set environment. Neutrosophic Sets and Systems, 15, 60–69.
 
Baušys, R., Zavadskas, E.K. (2015). Multicriteria decision making approach by VIKOR under interval neutrosophic set environment. Economic Computation & Economic Cybernetics Studies & Research (ECECSR), 49(4), 33–48.
 
Baušys, R., Zavadskas, E.K., Kaklauskas, A. (2015). Application of neutrosophic set to multicriteria decision making by COPRAS. Economic Computation and Economic Cybernetics Studies and Research (ECECSR), 49(2), 91–105.
 
Biswas, P. (2018). Multi-Attribute Decision Making in Neutrosophic Environment. PhD Thesis, Jadavpur University, Kolkata 700032, West Bengal.
 
Biswas, P., Pramanik, S., Giri, B.C. (2014a). Entropy based grey relational analysis method for multi-attribute decision-making under single valued neutrosophic assessments. Neutrosophic Sets and Systems, 2, 102–110.
 
Biswas, P., Pramanik, S., Giri, B.C. (2014b). A new methodology for neutrosophic multi-attribute decision making with unknown weight information. Neutrosophic Sets and Systems, 3, 42–52.
 
Biswas, P., Pramanik, S., Giri, B.C. (2016a). TOPSIS method for multi-attribute group decision-making under single-valued neutrosophic environment. Neural Computing and Applications, 27(3), 727–737.
 
Biswas, P., Pramanik, S., Giri, B.C. (2016b). GRA method of multiple attribute decision making with single valued neutrosophic hesitant fuzzy set information. In: Smarandache, F., Pramanik, S. (Eds.), New Trends in Neutrosophic Theory and Applications, Vol. II, pp. 55–63.
 
Biswas, P., Pramanik, S., Giri, B.C. (2016c). Aggregation of triangular fuzzy neutrosophic set information and its application to multi-attribute decision making. Neutrosophic Sets and Systems, 12, 20–40.
 
Biswas, P., Pramanik, S., Giri, B.C. (2018). TOPSIS strategy for multi-attribute decision making with trapezoidal neutrosophic numbers. Neutrosophic Sets and Systems, 19, 29–39.
 
Biswas, P., Pramanik, S., Giri, B.C. (2019). Neutrosophic TOPSIS with group decision making. In: Fuzzy Multi-criteria Decision-Making Using Neutrosophic Sets. Springer, Cham, pp. 543–585.
 
Chen, N., Xu, Z., Xia, M. (2013). Interval-valued hesitant preference relations and their applications to group decision making. Knowledge-Based Systems, 37, 528–540.
 
Chen, Y., Peng, X., Guan, G., Jiang, H. (2014). Approaches to multiple attribute decision making based on the correlation coefficient with dual hesitant fuzzy information. Journal of Intelligent & Fuzzy Systems, 26, 2547–2556.
 
Dalapati, S., Pramanik, S., Alam, S., Smarandache, F., Roy, T.K. (2017). IN-cross entropy based MAGDM strategy under interval neutrosophic set environment. Neutrosophic Sets and Systems, 18, 43–57.
 
Deng, J.L. (1989). Introduction to grey system theory. The Journal of Grey System, 1, 1–24.
 
Dey, P.P., Pramanik, S., Giri, B.C. (2016a). An extended grey relational analysis based multiple attribute decision making in interval neutrosophic uncertain linguistic setting. Neutrosophic Sets and Systems, 11, 21–30.
 
Dey, P.P., Pramanik, S., Giri, B.C. (2016b). Neutrosophic soft multi-attribute decision making based on grey relational projection method. Neutrosophic Sets and Systems, 11, 98–106.
 
Ji, P., Zhang, H.Y., Wang, J.Q. (2018a). A projection-based TODIM method under multi-valued neutrosophic environments and its application in personnel selection. Neural Computing and Applications, 29(1), 221–234.
 
Ji, P., Wang, J.Q., Zhang, H.Y. (2018b). Frank prioritized Bonferroni mean operator with single-valued neutrosophic sets and its application in selecting third-party logistics providers. Neural Computing and Applications, 30(3), 799–823.
 
Li, Z.H. (2014). An extension of the MULTIMOORA method for multiple criteria group decision making based upon hesitant fuzzy sets. Journal of Applied Mathematics, 2014, 527836. https://doi.org/10.1155/2014/527836. 16 pp.
 
Li, X., Zhang, X. (2018). Single-valued neutrosophic hesitant fuzzy choquet aggregation operators for multi-attribute decision making. Symmetry, 10(2), 50. https://doi.org/10.3390/sym10020050.
 
Liu, P., Shi, L. (2015). The generalized hybrid weighted average operator based on interval neutrosophic hesitant set and its application to multiple attribute decision making. Neural Computing and Applications, 26, 457–471.
 
Liu, P., Zhang, L. (2015). The extended VIKOR method for multiple criteria decision making problem based on neutrosophic hesitant fuzzy set. Preprint, arXiv:1512.0139.
 
Liu, P., Wang, Y. (2016). Interval neutrosophic prioritized OWA operator and its application to multiple attribute decision making. Journal of Systems Science and Complexity, 29(3), 681–697.
 
Liu, P., Zhang, L., Liu, X., Wang, P. (2016). Multi-valued neutrosophic number Bonferroni mean operators with their applications in multiple attribute group decision making. International Journal of Information Technology & Decision Making, 15(05), 1181–1210.
 
Mondal, K., Pramanik, S., Smarandache, F. (2016). Rough neutrosophic TOPSIS for multi-attribute group decision making. Neutrosophic Sets and Systems, 13, 105–117.
 
Mondal, K., Pramanik, S., Giri, B.C. (2018a). Single valued neutrosophic hyperbolic sine similarity measure based MADM strategy. Neutrosophic Sets and Systems, 20, 3–11.
 
Mondal, K., Pramanik, S., Giri, B.C. (2018b). Hybrid binary logarithm similarity measure for MAGDM problems under SVNS assessments. Neutrosophic Sets and Systems, 20, 12–25.
 
Mondal, K., Pramanik, S., Smarandache, F. (2018c). NN-harmonic mean aggregation operators-based MCGDM strategy in a neutrosophic number environment. Axioms, 7, 12. https://doi.org/10.3390/axioms7010012.
 
Mondal, K., Pramanik, S., Giri, B.C. (2019). Rough neutrosophic aggregation operators for multi-criteria decision-making. In: Fuzzy Multi-criteria Decision-Making Using Neutrosophic Sets. Springer, Cham, pp. 79–105.
 
Mu, Z., Zeng, S., Baležentis, T. (2015). A novel aggregation principle for hesitant fuzzy elements. Knowledge-Based Systems, 84, 134–143.
 
Nie, R.X., Wang, J.Q., Zhang, H.Y. (2017). Solving solar-wind power station location problem using an extended weighted aggregated sum product assessment (WASPAS) technique with interval neutrosophic sets. Symmetry, 9(7), 106. https://doi.org/10.3390/sym9070106.
 
Park, K.S. (2004). Mathematical programming models for characterizing dominance and potential optimality when multicriteria alternative values and weights are simultaneously incomplete. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 34(5), 601–614. https://doi.org/10.1109/tsmca.2004.832828.
 
Park, K.S., Kim, S.H., Yoon, W.C. (1997). Establishing strict dominance between alternatives with special type of incomplete information. European Journal of Operational Research, 96, 398–406.
 
Park, J.H., Park, I.Y., Kwun, Y.C., Tan, X. (2011). Extension of the TOPSIS method for decision making problems under interval-valued intuitionistic fuzzy environment. Applied Mathematical Modelling, 35, 2544–2556.
 
Peng, X., Dai, J. (2018). Approaches to single-valued neutrosophic MADM based on MABAC, TOPSIS and new similarity measure with score function. Neural Computing and Applications, 29(10), 939–954.
 
Peng, J.J., Wang, J.Q., Zhang, H.Y., Chen, X.H. (2014). An outranking approach for multi-criteria decision-making problems with simplified neutrosophic sets. Applied Soft Computing, 25, 336–346.
 
Peng, J.J., Wang, J.Q., Wu, X.H., Wang, J., Chen, X.H. (2015). Multi-valued neutrosophic sets and power aggregation operators with their applications in multi-criteria group decision-making problems. International Journal of Computational Intelligence Systems, 8(2), 345–363.
 
Pramanik, S., Dalapati, S. (2018). A revisit to NC-VIKOR based MAGDM strategy in neutrosophic cubic set environment. Neutrosophic Sets and Systems, 21, 131–141.
 
Pramanik, S., Mukhopadhyaya, D. (2011). Grey relational analysis based intuitionistic fuzzy multi-criteria group decision-making approach for teacher selection in higher education. International Journal of Computer Applications, 34, 21–29.
 
Pramanik, S., Mondal, K. (2015). Interval neutrosophic multi-attribute decision-making based on grey relational analysis. Neutrosophic Sets and Systems, 9, 13–22.
 
Pramanik, S., Dalapati, S., Alam, S., Roy, T.K. (2017a). NC–TODIM-based MAGDM under a neutrosophic cubic set environment. Information, 8(4), 149. https://doi.org/10.3390/info8040149.
 
Pramanik, S., Biswas, P., Giri, B.C. (2017b). Hybrid vector similarity measures and their applications to multi-attribute decision making under neutrosophic environment. Neural Computing and Applications, 28(5), 1163–1176.
 
Pramanik, S., Dalapati, S., Alam, S., Roy, T.K. (2017c). Neutrosophic cubic MCGDM method based on similarity measure. Neutrosophic Sets and Systems, 16, 44–56.
 
Pramanik, S., Dalapati, S., Alam, S., Roy, T.K. (2018a). NC-VIKOR based MAGDM strategy under neutrosophic cubic set environment. Neutrosophic Sets and Systems, 20, 95–108.
 
Pramanik, S., Dalapati, S., Alam, S., Roy, T.K. (2018b). VIKOR based MAGDM strategy under bipolar neutrosophic set environment. Neutrosophic Sets and Systems, 19, 57–69.
 
Pramanik, S., Dalapati, S., Alam, S., Roy, T.K. (2018c). TODIM method for group decision making under bipolar neutrosophic set environment. In: Smarandache, F., Pramanik, S. (Eds.), New Trends in Neutrosophic Theory and Applications, Vol. 2. Pons Editions, Brussels, pp. 140–155.
 
Pramanik, S., Roy, R., Roy, T.K., Smarandache, F. (2018d). Multi-attribute decision making based on several trigonometric hamming similarity measures under interval rough neutrosophic environment. Neutrosophic Sets and Systems, 19, 110–118.
 
Pramanik, S., Dalapati, S., Alam, S., Smarandache, F., Roy, T.K. (2018e). NS-cross entropy based MAGDM under single valued neutrosophic set environment. Information, 9(2), 37. https://doi.org/10.3390/info9020037.
 
Pramanik, S., Dalapati, S., Alam, S., Smarandache, F., Roy, T.K. (2018f). NC-cross entropy based MADM strategy in neutrosophic cubic set environment. Mathematics, 6(5), 67. https://doi.org/10.3390/math6050067.
 
Şahin, R. (2019). COPRAS method with neutrosophic sets. In: Fuzzy Multi-criteria Decision-Making Using Neutrosophic Sets. Springer, Cham, pp. 487–524.
 
Şahin, R., Liu, P. (2017). Correlation coefficient of single-valued neutrosophic hesitant fuzzy sets and its applications in decision making. Neural Computing and Applications, 28, 1387–1395.
 
Singh, P. (2017). Distance and similarity measures for multiple-attribute decision making with dual hesitant fuzzy sets. Computational and Applied Mathematics, 36, 111–126.
 
Smarandache, F. (1998). A Unifying Field in Logics. Neutrosophy: Neutrosophic Probability, Set and Logic. American Research Press, Rehoboth.
 
Stanujkic, D., Zavadskas, E.K., Smarandache, F., Brauers, W.K., Karabasevic, D. (2017). A neutrosophic extension of the MULTIMOORA method. Informatica, 28(1), 181–192.
 
Tian, Z.P., Wang, J., Wang, J.Q., Zhang, H.Y. (2017). An improved MULTIMOORA approach for multi-criteria decision-making based on interdependent inputs of simplified neutrosophic linguistic information. Neural Computing and Applications, 28(1), 585–597.
 
Torra, V. (2010). Hesitant fuzzy sets. International Journal of Intelligent Systems, 25(6), 529–539.
 
Torra, V., Narukawa, Y. (2009). On hesitant fuzzy sets and decision. In: IEEE International Conference on Fuzzy Systems, 2009, FUZZ-IEEE 2009. IEEE, pp. 1378–1382. https://doi.org/10.1109/FUZZY.2009.5276884.
 
Wang, R., Li, Y. (2018). Generalized single-valued neutrosophic hesitant fuzzy prioritized aggregation operators and their applications to multiple criteria decision-making. Information, 9(1), 10. https://doi.org/10.3390/info9010010.
 
Wang, H., Smarandache, F., Sunderraman, R., Zhang, Y.Q. (2005). Interval Neutrosophic Sets and Logic: Theory and Applications in Computing. Hexis, Arizona.
 
Wang, H., Smarandache, F., Sunderraman, R., Zhang, Y.Q. (2010). Single valued neutrosophic sets. Multispace and Multistructure, 4, 410–413.
 
Wei, G.W. (2010). GRA method for multiple attribute decision making with incomplete weight information in intuitionistic fuzzy setting. Knowledge-Based Systems, 23, 243–247.
 
Wei, G.W. (2011). Gray relational analysis method for intuitionistic fuzzy multiple attribute decision making. Expert Systems with Applications, 38, 11671–11677.
 
Wei, G. (2012). Hesitant fuzzy prioritized operators and their application to multiple attribute decision making. Knowledge-Based Systems, 31, 176–182.
 
Wei, G., Wang, H., Lin, R., Zhao, X. (2011). Grey relational analysis method for intuitionistic fuzzy multiple attribute decision making with preference information on alternatives. International Journal of Computational Intelligence Systems, 4, 164–173.
 
Xia, M., Xu, Z. (2011). Hesitant fuzzy information aggregation in decision making. International Journal of Approximate Reasoning, 52, 395–407.
 
Xu, Z., Zhang, X. (2013). Hesitant fuzzy multi-attribute decision making based on TOPSIS with incomplete weight information. Knowledge-Based Systems, 52, 53–64.
 
Yager, R.R. (2014). Pythagorean membership grades in multicriteria decision making. IEEE Transactions on Fuzzy Systems, 22(4), 958–965.
 
Ye, J. (2014a). Similarity measures between interval neutrosophic sets and their applications in multicriteria decision-making. Journal of Intelligent & Fuzzy Systems, 26(1), 165–172.
 
Ye, J. (2014b). A multicriteria decision-making method using aggregation operators for simplified neutrosophic sets. Journal of Intelligent & Fuzzy Systems, 26(5), 2459–2466.
 
Ye, J. (2014c). Correlation coefficient of dual hesitant fuzzy sets and its application to multiple attribute decision making. Applied Mathematical Modelling, 38, 659–666.
 
Ye, J. (2015a). An extended TOPSIS method for multiple attribute group decision making based on single valued neutrosophic linguistic numbers. Journal of Intelligent & Fuzzy Systems, 28(1), 247–255.
 
Ye, J. (2015b). Multiple-attribute decision-making method under a single-valued neutrosophic hesitant fuzzy environment. Journal of Intelligent Systems, 24, 23–36.
 
Ye, J. (2016). Correlation coefficients of interval neutrosophic hesitant fuzzy sets and its application in a multiple attribute decision making method. Informatica, 27(1), 179–202.
 
Zadeh, L.A. (1965). Fuzzy sets. Information and Control, 8(3), 338–355.
 
Zavadskas, E.K., Baušys, R., Lazauskas, M. (2015). Sustainable assessment of alternative sites for the construction of a waste incineration plant by applying WASPAS method with single-valued neutrosophic set. Sustainability, 7(12), 15923–15936. https://doi.org/10.3390/su71215792.
 
Zavadskas, E.K., Bausys, R., Kaklauskas, A., Ubarte, I., Kuzminske, A., Gudiene, N. (2017). Sustainable market valuation of buildings by the single-valued neutrosophic MAMVA method. Applied Soft Computing, 57, 74–87.
 
Zhang, S.F., Liu, S.Y. (2011). A GRA based intuitionistic fuzzy multi-criteria group decision making method for personnel selection. Expert Systems with Applications, 38, 11401–11405.
 
Zhang, X., Jin, F., Liu, P. (2013). A grey relational projection method for multi-attribute decision making based on intuitionistic trapezoidal fuzzy number. Applied Mathematical Modelling, 37, 3467–3477.
 
Zhang, J., Wu, D., Olson, D.L. (2005). The method of grey related analysis to multiple attribute decision making problems with interval numbers. Mathematical and Computer Modelling, 42, 991–998.
 
Zhang, H., Wang, J., Chen, X. (2016). An outranking approach for multi-criteria decision-making problems with interval-valued neutrosophic sets. Neural Computing and Applications, 27(3), 615–627.
 
Zhu, B., Xu, Z., Xia, M. (2012). Dual hesitant fuzzy sets. Journal of Applied Mathematics, 2012, 879629. https://doi.org/10.1155/2012/879629. 13 pp.

Biographies

Biswas Pranab
prabiswas.jdvu@gmail.com
paldam2010@gmail.com

P. Biswas obtained his bachelor's degree in mathematics and master's degree in applied mathematics from the University of Kalyani, India. He obtained his PhD in science from Jadavpur University, India. His research interests include multiple criteria decision making, aggregation operators, soft computing, optimization, fuzzy set, intuitionistic fuzzy set, and neutrosophic set.

Pramanik Surapati
surapati.math@gmail.com

S. Pramanik obtained his BSc and MSc in mathematics from University of Kalyani, Kalyani, India. He received PhD in mathematics from Bengal Engineering and Science University (BESU), Shibpur, India. He is currently an assistant professor of mathematics at the Nandalal Ghosh B. T. College, Panpur, P.O.-Narayanpur, West Bengal, India. He has published more than 100 research papers in international peer reviewed journals. His research interests include optimization, multiple criteria decision making, soft computing, intuitionistic fuzzy set, neutrosophic set, mathematics education.

Giri Bibhas C.
bcgiri.jumath@gmail.com

B.C. Giri is a professor at the Department of Mathematics, Jadavpur University, Kolkata, India. He has published many high-level papers in many international peer-reviewed journals. His current research interests include supply chain management, inventory control theory, multiple criteria decision making, soft computing, optimization.




Copyright
© 2019 Vilnius University
Open access article under the CC BY license.

Keywords
single-valued neutrosophic set; hesitant fuzzy set; single-valued neutrosophic hesitant fuzzy set; interval neutrosophic hesitant fuzzy set; multi-attribute decision making; grey relational analysis


info1218_g001.jpg
Fig. 1
The schematic diagram of the proposed strategy.
info1218_g002.jpg
Fig. 2
Pairwise comparison of each alternative.
Table 1
Single valued neutrosophic hesitant fuzzy decision matrix.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$
${A_{1}}$ $\{\{0.3,0.4,0.5\},\{0.1\},\{0.3,0.4\}\}$ $\{\{0.5,0.6\},\{0.2,0.3\},\{0.3,0.4\}\}$ $\{\{0.2,0.3\},\{0.1,0.2\},\{0.5,0.6\}\}$
${A_{2}}$ $\{\{0.6,0.7\},\{0.1,0.2\},\{0.2,0.3\}\}$ $\{\{0.6,0.7\},\{0.1\},\{0.3\}\}$ $\{\{0.6,0.7\},\{0.1,0.2\},\{0.1,0.2\}\}$
${A_{3}}$ $\{\{0.5,0.6\},\{0.4\},\{0.2,0.3\}\}$ $\{\{0.6\},\{0.3\},\{0.4\}\}$ $\{\{0.5,0.6\},\{0.1\},\{0.3\}\}$
${A_{4}}$ $\{\{0.7,0.8\},\{0.1\},\{0.1,0.2\}\}$ $\{\{0.6,0.7\},\{0.1\},\{0.2\}\}$ $\{\{0.3,0.5\},\{0.2\},\{0.1,0.2,0.3\}\}$
Table 2
Distance of alternatives from SVNHFPIS.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$ ${\min _{j}}{d_{i}}(j)$ ${\max _{j}}{d_{i}}(j)$
${d_{1}}(j)$ 0.1833 0.1333 0.2667 0.1333 0.2667
${d_{2}}(j)$ 0.0833 0.0333 0.0000 0.0000 0.0833
${d_{3}}(j)$ 0.2000 0.1500 0.1000 0.1000 0.2000
${d_{4}}(j)$ 0.0000 0.0000 0.1167 0.0000 0.1167
${\min _{i}}{\min _{j}}{d_{i}}(j)$ 0.0000
${\max _{i}}{\max _{j}}{d_{i}}(j)$ 0.2667
Table 3
Distance of alternatives from SVNHFNIS.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$ ${\min _{j}}{d_{i}}(j)$ ${\max _{j}}{d_{i}}(j)$
${d_{1}}(j)$ 0.1833 0.0500 0.000 0.0000 0.1833
${d_{2}}(j)$ 0.1167 0.1167 0.2667 0.1167 0.2667
${d_{3}}(j)$ 0.0000 0.0000 0.2000 0.0000 0.2000
${d_{4}}(j)$ 0.2000 0.1500 0.1833 0.1500 0.2000
${\min _{i}}{\min _{j}}{d_{i}}(j)$ 0.0000
${\max _{i}}{\max _{j}}{d_{i}}(j)$ 0.2667
Table 4
Interval neutrosophic hesitant fuzzy decision matrix.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$
${A_{1}}$ $\left\{\substack{\{[0.3,0.4],[0.4,0.5]\},\\ {} \{[0.1,0.2]\},\\ {} \{[0.3,0.4]\}}\right\}$ $\left\{\substack{\{[0.4,0.5],[0.5,0.6]\},\\ {} \{[0.2,0.3]\},\\ {} \{[0.3,0.3],[0.3,0.4]\}}\right\}$ $\left\{\substack{\{[0.3,0.5]\},\\ {} \{[0.2,0.3]\},\\ {} \{[0.1,0.2],[0.3,0.3]\}}\right\}$
${A_{2}}$ $\left\{\substack{\{[0.6,0.7]\},\\ {} \{[0.1,0.2]\},\\ {} \{[0.1,0.2],[0.2,0.3]\}}\right\}$ $\left\{\substack{\{[0.6,0.7]\},\\ {} \{[0.1,0.1]\},\\ {} \{[0.2,0.3]\}}\right\}$ $\left\{\substack{\{[0.6,0.7]\},\\ {} \{[0.1,0.2]\},\\ {} \{[0.1,0.2]\}}\right\}$
${A_{3}}$ $\left\{\substack{\{[0.3,0.4],[0.5,0.6]\},\\ {} \{[0.2,0.4]\},\\ {} \{[0.2,0.3]\}}\right\}$ $\left\{\substack{\{[0.6,0.7]\},\\ {} \{[0.0,0.1]\},\\ {} \{[0.2,0.2]\}}\right\}$ $\left\{\substack{\{[0.5,0.6]\},\\ {} \{[0.1,0.2],[0.2,0.3]\},\\ {} \{[0.2,0.3]\}}\right\}$
${A_{4}}$ $\left\{\substack{\{[0.7,0.8]\},\\ {} \{[0.0,0.1]\},\\ {} \{[0.1,0.2]\}}\right\}$ $\left\{\substack{\{[0.5,0.6]\},\\ {} \{[0.2,0.3]\},\\ {} \{[0.3,0.4]\}}\right\}$ $\left\{\substack{\{[0.2,0.3]\},\\ {} \{[0.1,0.2]\},\\ {} \{[0.4,0.5],[0.5,0.6]\}}\right\}$
Table 5
Distance of alternatives from INHFPIS.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$ ${\min _{j}}{d_{i}}(j)$ ${\max _{j}}{d_{i}}(j)$
${d_{1}}(j)$ 0.2167 0.1583 0.1417 0.1417 0.2167
${d_{2}}(j)$ 0.0833 0.0333 0.0000 0.0000 0.0833
${d_{3}}(j)$ 0.2167 0.0000 0.0833 0.0000 0.2167
${d_{4}}(j)$ 0.0000 0.1500 0.2500 0.0000 0.2500
${\min _{i}}{\min _{j}}{d_{i}}(j)$ 0.0000
${\max _{i}}{\max _{j}}{d_{i}}(j)$ 0.2500
Table 6
Distance of alternatives from INHFNIS.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$ ${\min _{j}}{d_{i}}(j)$ ${\max _{j}}{d_{i}}(j)$
${d_{1}}(j)$ 0.0000 0.0000 0.1750 0.0000 0.1750
${d_{2}}(j)$ 0.1333 0.1250 0.2500 0.1250 0.2500
${d_{3}}(j)$ 0.1000 0.1583 0.2000 0.1000 0.2000
${d_{4}}(j)$ 0.2167 0.0250 0.0000 0.0000 0.2167
${\min _{i}}{\min _{j}}{d_{i}}(j)$ 0.0000
${\max _{i}}{\max _{j}}{d_{i}}(j)$ 0.2500

INFORMATICA

  • Online ISSN: 1822-8844
  • Print ISSN: 0868-4952