Informatica


TOPSIS Method for Neutrosophic Hesitant Fuzzy Multi-Attribute Decision Making
Volume 31, Issue 1 (2020), pp. 35–63
Bibhas C. Giri   Mahatab Uddin Molla   Pranab Biswas  

https://doi.org/10.15388/20-INFOR392
Pub. online: 23 March 2020      Type: Research Article      Open Access

Received: 1 May 2019
Accepted: 1 September 2019
Published: 23 March 2020

Abstract

Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is a widely used method for solving multi-criteria decision making problems in certain and uncertain environments. The single valued neutrosophic hesitant fuzzy set (SVNHFS) and the interval neutrosophic hesitant fuzzy set (INHFS) are developed by integrating the neutrosophic set with the hesitant fuzzy set. In this paper, we extend the TOPSIS method for multi-attribute decision making based on SVNHFS and INHFS. Furthermore, we assume that the attribute weights are known, incompletely known or completely unknown. We establish two optimization models for SVNHFS and INHFS with the help of the maximum deviation method. Finally, we provide two numerical examples to validate the proposed approach.

1 Introduction

Decision making is a popular field of study in the areas of Operations Research, Management Science, Medical Science, Data Mining, etc. Multi-attribute decision making (MADM) refers to choosing an alternative from a finite set of alternatives. For solving MADM problems, there exist many well-known methods such as TOPSIS (Hwang and Yoon, 1981), VIKOR (Opricovic and Tzeng, 2004), PROMETHEE (Brans et al., 1986), ELECTRE (Roy, 1990), AHP (Saaty, 1980), DEMATEL (Gabus and Fontela, 1972), MULTIMOORA (Brauers and Zavadskas, 2006, 2010), TODIM (Gomes and Lima, 1992a, 1992b), WASPAS (Zavadskas et al., 2014), COPRAS (Zavadskas et al., 1994), EDAS (Keshavarz Ghorabaee et al., 2015), MAMVA (Kanapeckiene et al., 2011), DNMA (Liao and Wu, 2019), etc. Wu and Liao (2019) developed a consensus-based probabilistic linguistic gained and lost dominance score method for multi-criteria group decision making problems. Hafezalkotob et al. (2019) presented an overview of MULTIMOORA for multi-criteria decision making, covering its theory, developments, applications, and challenges. Mi et al. (2019) surveyed the integrations and applications of the best worst method in decision making. Among these methods, the TOPSIS method has gained a lot of attention in the past decade, and many researchers have applied it for solving MADM problems in different environments (Zavadskas et al., 2016). In uncertain environments, the weight information and attribute values of an MADM problem generally carry imprecise values, which can be effectively handled with fuzzy sets (Zadeh, 1965), intuitionistic fuzzy sets (Atanassov, 1986), hesitant fuzzy sets (Torra, 2010), and neutrosophic sets (Smarandache, 1998). Chen (2000) introduced the TOPSIS method in the fuzzy environment and considered the rating values of the alternatives and the attribute weights in terms of triangular fuzzy numbers. Boran et al. (2009) extended the TOPSIS method for multi-criteria group decision making under intuitionistic fuzzy sets to solve a supplier selection problem.
Ye (2010) extended the TOPSIS method with interval valued intuitionistic fuzzy numbers. Xu and Zhang (2013) proposed a TOPSIS method for MADM under hesitant fuzzy sets with incomplete weight information. Fu and Liao (2019) developed a TOPSIS method for multi-expert qualitative decision making under an unbalanced double hierarchy linguistic term set, applied to green mine selection.
The neutrosophic set is a generalization of the fuzzy set, the hesitant fuzzy set and the intuitionistic fuzzy set. It has three membership functions: the truth, falsity and indeterminacy membership functions. The set has been successfully applied in various decision making problems (Peng et al., 2014; Ye, 2014; Kahraman and Otay, 2019; Stanujkic et al., 2017). Biswas et al. (2016a) proposed a TOPSIS method for multi-attribute group decision making under the single valued neutrosophic environment. Biswas et al. (2019a) further extended the TOPSIS method using a non-linear programming approach to solve multi-attribute group decision making problems. Chi and Liu (2013) developed a TOPSIS method based on interval neutrosophic sets. Ye (2015a) extended the TOPSIS method for single valued linguistic neutrosophic numbers. Biswas et al. (2018) developed the TOPSIS method for single valued trapezoidal neutrosophic numbers, and Giri et al. (2018) proposed the TOPSIS method for interval trapezoidal neutrosophic numbers by considering unknown attribute weights.
In decision making problems, decision makers may sometimes hesitate to assign a single value for rating the alternatives due to doubt or incomplete information. Instead, they prefer to assign a set of possible values to represent the membership degree of an element to the set. To deal with this issue, Torra (2010) coined the idea of the hesitant fuzzy set, which is a generalization of the fuzzy set and the intuitionistic fuzzy set. Since then, the hesitant fuzzy set has been successfully applied in decision making problems (Xia and Xu, 2011; Rodriguez et al., 2012; Zhang and Wei, 2013). Xu and Xia (2011a, 2011b) proposed a variety of distance measures for hesitant fuzzy sets. Wei (2012) introduced hesitant fuzzy prioritized operators for solving MADM problems. Beg and Rashid (2013) proposed a TOPSIS method for MADM with hesitant fuzzy linguistic term sets. Liao and Xu (2015) developed cosine distance and similarity measures for hesitant fuzzy linguistic term sets (HFLTSs) and applied them in qualitative decision making. Joshi and Kumar (2016) introduced a Choquet integral based TOPSIS method for multi-criteria group decision making with interval valued intuitionistic hesitant fuzzy sets.
However, the hesitant fuzzy set cannot represent inconsistent, imprecise, inappropriate and incomplete information, because it has only a truth-hesitant membership degree to express the belongingness of an element to the set. To handle this problem, Ye (2015b) introduced single valued neutrosophic hesitant fuzzy sets (SVNHFSs), which have three hesitant membership functions: the truth, indeterminacy and falsity membership functions. Interval neutrosophic hesitant fuzzy sets (INHFSs) (Liu and Shi, 2015), a generalization of SVNHFSs, are also powerful tools to resolve difficulties in decision making problems. Ye (2016) developed correlation coefficients of interval neutrosophic hesitant fuzzy sets and applied them in MADM. SVNHFS and INHFS thus make it possible to handle uncertain, incomplete and inconsistent information in real world decision making problems. Sahin and Liu (2017) defined a correlation coefficient of SVNHFSs and applied it in decision making problems. Biswas et al. (2016b) proposed a GRA method for MADM with SVNHFSs for known attribute weights. Ji et al. (2018) proposed a projection-based TODIM approach under multi-valued neutrosophic environments for a personnel selection problem. Biswas et al. (2019b) further extended the GRA method for solving MADM with SVNHFSs and INHFSs for partially known or unknown attribute weights.
Until now, little research has been done on the TOPSIS method for solving MADM under the SVNHFS and INHFS environments. In particular, the TOPSIS method has not been studied under the SVNHFS or INHFS environment for solving MADM problems when the weight information of the attributes is incompletely known or completely unknown. Therefore, we have an opportunity to extend the traditional methods, or to propose new TOPSIS-based methods, to deal with MADM problems with partially known or unknown weight information under the SVNHFS and INHFS environments, which can play an effective role in dealing with uncertain and indeterminate information in MADM problems.
In view of the above context, we have the following objectives in this study:
  • To formulate an SVNHFS based MADM problem, where the weight information is incompletely known or completely unknown.
  • To determine the weights of attributes given in incompletely known or completely unknown form using the maximum deviation method.
  • To extend the TOPSIS method for solving an SVNHFS based MADM problem using the proposed optimization model.
  • To further extend the proposed approach in the INHFS environment.
  • To validate the proposed approach with two numerical examples.
  • To compare the proposed method with some existing methods.
The remainder of this article is organized as follows. Section 2 gives preliminaries for the neutrosophic set, single valued neutrosophic set, interval neutrosophic set, hesitant fuzzy set, SVNHFS and INHFS. Section 2 also presents the score function, accuracy function and distance function of SVNHFS and INHFS. Section 3 and Section 4 develop the TOPSIS method for MADM under SVNHFS and INHFS, respectively. Section 5 presents two numerical examples to validate the proposed method and provides a comparative study between the proposed method and existing methods. Finally, conclusions and future research directions are given in Section 6.

2 Preliminaries of Neutrosophic Sets and Single Valued Neutrosophic Set

In this section, we recall some basic definitions of the hesitant fuzzy set, the single valued neutrosophic set, and the interval neutrosophic set.

2.1 Single Valued Neutrosophic Set

Definition 1 (See Smarandache, 1998; Haibin et al., 2010).
A single valued neutrosophic set A in a universe of discourse $ X=({x_{1}},{x_{2}},\dots ,{x_{n}})$ is defined as
(1)
\[ A=\big\{\big\langle x,{T_{A}}(x),{I_{A}}(x),{F_{A}}(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where the functions $ {T_{A}}(x)$, $ {I_{A}}(x)$ and $ {F_{A}}(x)$, respectively, denote the truth, indeterminacy and falsity membership functions of $ x\in X$ to the set A, with the conditions $ 0\leqslant {T_{A}}(x)\leqslant 1$, $ 0\leqslant {I_{A}}(x)\leqslant 1$, $ 0\leqslant {F_{A}}(x)\leqslant 1$, and
(2)
\[ 0\leqslant {T_{A}}(x)+{I_{A}}(x)+{F_{A}}(x)\leqslant 3.\]
For convenience, we take a single valued neutrosophic set $ A=\{\langle {T_{A}}(x),{I_{A}}(x),{F_{A}}(x)\rangle \mid x\in X\}$ as $ A=\langle {T_{A}},{I_{A}},{F_{A}}\rangle $ and call it single valued neutrosophic number (SVNN).

2.2 Interval Neutrosophic Set

Definition 2 (See Wang et al., 2005).
Let X be a non empty finite set. Let $ D[0,1]$ be the set of all closed sub-intervals of the unit interval $ [0,1]$. An interval neutrosophic set (INS) $ \tilde{A}$ in X is an object having the form:
(3)
\[ \tilde{A}=\big\{\big\langle x,{T_{\tilde{A}}}(x),{I_{\tilde{A}}}(x),{F_{\tilde{A}}}(x)\big\rangle |x\in X\big\},\]
where $ {T_{\tilde{A}}}:X\to D[0,1]$, $ {I_{\tilde{A}}}:X\to D[0,1]$, $ {F_{\tilde{A}}}:X\to D[0,1]$ with the condition $ 0\leqslant {T_{\tilde{A}}}(x)+{I_{\tilde{A}}}(x)+{F_{\tilde{A}}}(x)\leqslant 3$ for any $ x\in X$. The intervals $ {T_{\tilde{A}}}(x)$, $ {I_{\tilde{A}}}(x)$ and $ {F_{\tilde{A}}}(x)$ denote, respectively, the truth, the indeterminacy and the falsity membership degrees of x to $ \tilde{A}$. Then, for each $ x\in X$, the lower and the upper limit points of closed intervals of $ {T_{\tilde{A}}}(x)$, $ {I_{\tilde{A}}}(x)$ and $ {F_{\tilde{A}}}(x)$ are denoted by $ [{T_{\tilde{A}}^{L}}(x)$, $ {T_{\tilde{A}}^{U}}(x)]$, $ [{I_{\tilde{A}}^{L}}(x)$, $ {I_{\tilde{A}}^{U}}(x)]$, and $ [{F_{\tilde{A}}^{L}}(x)$, $ {F_{\tilde{A}}^{U}}(x)]$, respectively. Thus INS $ \tilde{A}$ can also be presented in the following form:
\[ \tilde{A}=\big\{\big\langle x,\big[{T_{\tilde{A}}^{L}}(x),{T_{\tilde{A}}^{U}}(x)\big],\big[{I_{\tilde{A}}^{L}}(x),{I_{\tilde{A}}^{U}}(x)\big],\big[{F_{\tilde{A}}^{L}}(x),{F_{\tilde{A}}^{U}}(x)\big]\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where, $ 0\leqslant {T_{\tilde{A}}^{U}}(x)+{I_{\tilde{A}}^{U}}(x)+{F_{\tilde{A}}^{U}}(x)\leqslant 3$ for any $ x\in X$. For convenience of notation, we consider $ \tilde{A}=\langle [{T_{\tilde{A}}^{L}},{T_{\tilde{A}}^{U}}],[{I_{\tilde{A}}^{L}},{I_{\tilde{A}}^{U}}],[{F_{\tilde{A}}^{L}},{F_{\tilde{A}}^{U}}]\rangle $ as an interval neutrosophic number (INN), where $ 0\leqslant {T_{\tilde{A}}^{U}}+{I_{\tilde{A}}^{U}}+{F_{\tilde{A}}^{U}}\leqslant 3$ for any $ x\in X$.

2.3 Hesitant Fuzzy Set

Definition 3 (See Torra, 2010).
Let X be a universe of discourse. A hesitant fuzzy set on X is symbolized by
(4)
\[ A=\big\{\big\langle x,{h_{A}}(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where $ {h_{A}}(x)$, referred to as the hesitant fuzzy element, is a set of some values in $ [0,1]$ denoting the possible membership degree of the element $ x\in X$ to the set A.
From a mathematical point of view, an HFS A can be seen as an FS if there is only one element in $ {h_{A}}(x)$. For notational convenience, we write h for the hesitant fuzzy element $ {h_{A}}(x)$, $ x\in X$.
Definition 4 (See Chen et al., 2013).
Let X be a non-empty finite set. An interval hesitant fuzzy set on X is represented by
\[ E=\big\{\big\langle x,{\tilde{h}_{E}}(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where $ {\tilde{h}_{E}}(x)$ is a set of some different interval values in $ [0,1]$, which denote the possible membership degrees of the element $ x\in X$ to the set E. $ {\tilde{h}_{E}}(x)$ can be represented by an interval hesitant fuzzy element $ \tilde{h}$ which is denoted by $ \{\tilde{\gamma }|\tilde{\gamma }\in \tilde{h}\}$, where $ \tilde{\gamma }=[{\gamma ^{L}},{\gamma ^{U}}]$ is an interval number.
Definition 5 (See Ye, 2015b).
Let X be a fixed set. Then a single valued neutrosophic hesitant fuzzy set (SVNHFS) N on X is defined as
(5)
\[ N=\big\{\big\langle x,t(x),i(x),f(x)\big\rangle \mid x\in X\big\},\]
in which $ t(x)$, $ i(x)$ and $ f(x)$ represent three sets of some values in $ [0,1]$, denoting, respectively, the possible truth, indeterminacy and falsity membership degrees of the element $ x\in X$ to the set N. The membership degrees $ t(x)$, $ i(x)$ and $ f(x)$ satisfy the following conditions:
\[ 0\leqslant \delta ,\gamma ,\eta \leqslant 1,\hspace{2em}0\leqslant {\delta ^{+}}+{\gamma ^{+}}+{\eta ^{+}}\leqslant 3,\]
where $ \delta \in t(x)$, $ \gamma \in i(x)$, $ \eta \in f(x)$, $ {\delta ^{+}}\in {t^{+}}(x)={\textstyle\bigcup _{\delta \in t(x)}}\max t(x)$, $ {\gamma ^{+}}\in {i^{+}}(x)={\textstyle\bigcup _{\gamma \in i(x)}}\max i(x)$ and $ {\eta ^{+}}\in {f^{+}}(x)={\textstyle\bigcup _{\eta \in f(x)}}\max f(x)$ for all $ x\in X$.
$ n(x)=\langle t(x),i(x),f(x)\rangle $ is called a single valued neutrosophic hesitant fuzzy element (SVNHFE), denoted by $ n=\langle t,i,f\rangle $. The numbers of values for the possible truth, indeterminacy and falsity membership degrees may differ from one SVNHFE to another.
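As a computational aside, an SVNHFE can be stored as three lists of membership degrees. The following minimal Python sketch (the tuple layout and the helper name are ours, not the paper's) checks the conditions of Definition 5 for a sample element:

```python
# A minimal sketch (representation and helper name are ours): an SVNHFE
# n = <t, i, f> stored as three lists of degrees, with a check of the
# conditions of Definition 5.

def is_valid_svnhfe(t, i, f):
    """All degrees lie in [0, 1] and max(t) + max(i) + max(f) <= 3."""
    in_unit = all(0.0 <= v <= 1.0 for v in t + i + f)
    bounded = max(t) + max(i) + max(f) <= 3.0
    return in_unit and bounded

n = ([0.3, 0.4, 0.5], [0.1], [0.3, 0.4])   # a sample SVNHFE
print(is_valid_svnhfe(*n))                  # → True
```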
Definition 6 (See Liu and Shi, 2015).
Let X be a non-empty finite set. Then an interval neutrosophic hesitant fuzzy set on X is represented by
\[ \tilde{n}=\big\{\big\langle x,\tilde{t}(x),\tilde{i}(x),\tilde{f}(x)\big\rangle \hspace{0.1667em}\big|\hspace{0.1667em}x\in X\big\},\]
where $ \tilde{t}(x)=\{\tilde{\gamma }\hspace{0.1667em}|\hspace{0.1667em}\tilde{\gamma }\in \tilde{t}(x)\}$, $ \tilde{i}(x)=\{\tilde{\delta }\hspace{0.1667em}|\hspace{0.1667em}\tilde{\delta }\in \tilde{i}(x)\}$ and $ \tilde{f}(x)=\{\tilde{\eta }\hspace{0.1667em}|\hspace{0.1667em}\tilde{\eta }\in \tilde{f}(x)\}$ are three sets of some interval values in the real unit interval $ [0,1]$, which denote the possible truth, indeterminacy and falsity membership hesitant degrees of the element $ x\in X$ to the set $ \tilde{n}$. These values satisfy the limits:
\[ \tilde{\gamma }=\big[{\gamma ^{L}},{\gamma ^{U}}\big]\subseteq [0,1],\hspace{2em}\tilde{\delta }=\big[{\delta ^{L}},{\delta ^{U}}\big]\subseteq [0,1],\hspace{2em}\tilde{\eta }=\big[{\eta ^{L}},{\eta ^{U}}\big]\subseteq [0,1]\]
and $ 0\leqslant {\tilde{\gamma }^{+}}+{\tilde{\delta }^{+}}+{\tilde{\eta }^{+}}\leqslant 3$, where $ {\tilde{\gamma }^{+}}={\textstyle\bigcup _{\tilde{\gamma }\in \tilde{t}(x)}}\sup \tilde{t}(x)$, $ {\tilde{\delta }^{+}}={\textstyle\bigcup _{\tilde{\delta }\in \tilde{i}(x)}}\sup \tilde{i}(x)$ and $ {\tilde{\eta }^{+}}={\textstyle\bigcup _{\tilde{\eta }\in \tilde{f}(x)}}\sup \tilde{f}(x)$. Then $ \tilde{n}=\{\tilde{t}(x),\tilde{i}(x),\tilde{f}(x)\}$ is called an interval neutrosophic hesitant fuzzy element (INHFE), which is the basic unit of the INHFS and is represented by the symbol $ \tilde{n}=\{\tilde{t},\tilde{i},\tilde{f}\}$ for convenience.

2.4 Score Function, Accuracy Function and Distance Function of SVNHFEs and INHFEs

Definition 7 (See Biswas et al., 2016b).
Let $ {n_{i}}=\langle {t_{i}},{i_{i}},{f_{i}}\rangle $ $ (i=1,2,\dots ,n)$ be a collection of SVNHFEs. Then the score function $ S({n_{i}})$, the accuracy function $ A({n_{i}})$ and the certainty function $ C({n_{i}})$ of $ {n_{i}}$ $ (i=1,2,\dots ,n)$ can be defined as follows:
  • 1. $ S({n_{i}})=\frac{1}{3}\big[2+\frac{1}{{l_{t}}}{\textstyle\sum _{\gamma \in t}}\gamma -\frac{1}{{l_{i}}}{\textstyle\sum _{\delta \in i}}\delta -\frac{1}{{l_{f}}}{\textstyle\sum _{\eta \in f}}\eta \big]$;
  • 2. $ A({n_{i}})=\frac{1}{{l_{t}}}{\textstyle\sum _{\gamma \in t}}\gamma -\frac{1}{{l_{f}}}{\textstyle\sum _{\eta \in f}}\eta $;
  • 3. $ C({n_{i}})=\frac{1}{{l_{t}}}{\textstyle\sum _{\gamma \in t}}\gamma $.
Example 1.
Let $ {n_{1}}=\langle \{0.3,0.4,0.5\},\{0.1\},\{0.3,0.4\}\rangle $ be an SVNHFE, then by Definition 7, we have
  • 1. $ S({n_{1}})=\frac{1}{3}[2+\frac{1.2}{3}-0.1-\frac{0.7}{2}]=0.65$;
  • 2. $ A({n_{1}})=\frac{1.2}{3}-\frac{0.7}{2}=0.05$;
  • 3. $ C({n_{1}})=\frac{1.2}{3}=0.4$.
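The three functions of Definition 7 amount to averaging each hesitant set. A short Python sketch (the function names are ours; the formulas follow the definition) reproduces the values of Example 1:

```python
# Score, accuracy and certainty of an SVNHFE n = <t, i, f> (Definition 7).
# Each mean averages over one hesitant membership set.

def mean(xs):
    return sum(xs) / len(xs)

def score(t, i, f):
    return (2 + mean(t) - mean(i) - mean(f)) / 3

def accuracy(t, i, f):
    return mean(t) - mean(f)

def certainty(t, i, f):
    return mean(t)

n1 = ([0.3, 0.4, 0.5], [0.1], [0.3, 0.4])   # the SVNHFE of Example 1
print(round(score(*n1), 2), round(accuracy(*n1), 2), round(certainty(*n1), 2))
# → 0.65 0.05 0.4
```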
Definition 8 (See Biswas et al., 2016b).
Let $ {n_{1}}=\langle {t_{1}},{i_{1}},{f_{1}}\rangle $ and $ {n_{2}}=\langle {t_{2}},{i_{2}},{f_{2}}\rangle $ be two SVNHFEs. Then the following rules can be defined for comparison purpose:
  • 1. If $ S({n_{1}})>S({n_{2}})$, then $ {n_{1}}$ is greater than $ {n_{2}}$, i.e. $ {n_{1}}$ is superior to $ {n_{2}}$, denoted by $ {n_{1}}\succ {n_{2}}$.
  • 2. If $ S({n_{1}})=S({n_{2}})$ and $ A({n_{1}})>A({n_{2}})$, then $ {n_{1}}$ is greater than $ {n_{2}}$, i.e. $ {n_{1}}$ is superior to $ {n_{2}}$, denoted by $ {n_{1}}\succ {n_{2}}$.
  • 3. If $ S({n_{1}})=S({n_{2}})$, $ A({n_{1}})=A({n_{2}})$ and $ C({n_{1}})>C({n_{2}})$, then $ {n_{1}}$ is greater than $ {n_{2}}$, i.e. $ {n_{1}}$ is superior to $ {n_{2}}$, denoted by $ {n_{1}}\succ {n_{2}}$.
  • 4. If $ S({n_{1}})=S({n_{2}})$, $ A({n_{1}})=A({n_{2}})$ and $ C({n_{1}})=C({n_{2}})$, then $ {n_{1}}$ is equal to $ {n_{2}}$, i.e. $ {n_{1}}$ is indifferent to $ {n_{2}}$, denoted by $ {n_{1}}\sim {n_{2}}$.
Example 2.
Let $ {n_{1}}=\langle \{0.3,0.4,0.5\},\{0.1\},\{0.3,0.4\}\rangle $ and $ {n_{2}}=\langle \{0.6,0.7\},\{0.1,0.2\},\{0.2,0.3\}\rangle $ be two SVNHFEs, then by Definition 7, we have
\[\begin{array}{l}\displaystyle S({n_{1}})=0.65,\hspace{2em}A({n_{1}})=0.05,\hspace{2em}C({n_{1}})=0.40,\\ {} \displaystyle S({n_{2}})=0.75,\hspace{2em}A({n_{2}})=0.40,\hspace{2em}C({n_{2}})=0.65.\end{array}\]
Since $ S({n_{2}})>S({n_{1}})$, we have $ {n_{2}}\succ {n_{1}}$ from Definition 8. We take another example to compare SVNHFEs.
Example 3.
Let $ {n_{1}}=\langle \{0.5,0.6\},\{0.2\},\{0.2,0.3\}\rangle $ and $ {n_{2}}=\langle \{0.7,0.8\},\{0.3\},\{0.3,0.4\}\rangle $ be two SVNHFEs. Then by Definition 7, we have
\[\begin{array}{r}\displaystyle S({n_{1}})=0.70,\hspace{2em}A({n_{1}})=0.30,\hspace{2em}C({n_{1}})=0.55,\\ {} \displaystyle S({n_{2}})=0.70,\hspace{2em}A({n_{2}})=0.40,\hspace{2em}C({n_{2}})=0.75.\end{array}\]
Since $ S({n_{2}})=S({n_{1}})$ and $ A({n_{2}})>A({n_{1}})$, we have $ {n_{2}}\succ {n_{1}}$ from Definition 8.
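The ranking rules of Definition 8 can be sketched as a lexicographic comparison on the triple (score, accuracy, certainty). The helper names and the small tolerance used to compare floating-point scores are our additions:

```python
# Lexicographic comparison of SVNHFEs by (score, accuracy, certainty),
# following Definition 8. A tolerance eps treats nearly equal values as
# ties before moving to the next criterion.

def mean(xs):
    return sum(xs) / len(xs)

def rank_key(n):
    """(score, accuracy, certainty) of an SVNHFE n = (t, i, f)."""
    t, i, f = n
    return ((2 + mean(t) - mean(i) - mean(f)) / 3,   # score
            mean(t) - mean(f),                        # accuracy
            mean(t))                                  # certainty

def superior(n1, n2, eps=1e-9):
    """True if n1 is superior to n2 under the rules of Definition 8."""
    for a, b in zip(rank_key(n1), rank_key(n2)):
        if a > b + eps:
            return True
        if b > a + eps:
            return False
    return False   # all three equal: n1 ~ n2

n1 = ([0.5, 0.6], [0.2], [0.2, 0.3])   # the SVNHFEs of Example 3
n2 = ([0.7, 0.8], [0.3], [0.3, 0.4])
print(superior(n2, n1))                 # equal scores; accuracy decides → True
```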
Definition 9 (See Biswas et al., 2016b).
Let $ {\tilde{n}_{i}}=\langle {\tilde{t}_{i}},{\tilde{i}_{i}},{\tilde{f}_{i}}\rangle $ $ (i=1,2,\dots ,n)$ be a collection of INHFEs. Then the score function $ S({\tilde{n}_{i}})$, the accuracy function $ A({\tilde{n}_{i}})$ and the certainty function $ C({\tilde{n}_{i}})$ of $ {\tilde{n}_{i}}$ $ (i=1,2,\dots ,n)$ can be defined as follows:
  • 1. $ S({\tilde{n}_{i}})=\frac{1}{6}\big[4+\frac{1}{{l_{t}}}{\textstyle\sum _{\gamma \in t}}({\gamma ^{L}}+{\gamma ^{U}})-\frac{1}{{l_{i}}}{\textstyle\sum _{\delta \in i}}({\delta ^{L}}+{\delta ^{U}})-\frac{1}{{l_{f}}}{\textstyle\sum _{\eta \in f}}({\eta ^{L}}+{\eta ^{U}})\big]$;
  • 2. $ A({\tilde{n}_{i}})=\frac{1}{2}\big[\frac{1}{{l_{t}}}{\textstyle\sum _{\gamma \in t}}({\gamma ^{L}}+{\gamma ^{U}})-\frac{1}{{l_{f}}}{\textstyle\sum _{\eta \in f}}({\eta ^{L}}+{\eta ^{U}})\big]$;
  • 3. $ C({\tilde{n}_{i}})=\frac{1}{2}\big[\frac{1}{{l_{t}}}{\textstyle\sum _{\gamma \in t}}({\gamma ^{L}}+{\gamma ^{U}})\big]$.
Example 4.
Let $ {\tilde{n}_{1}}=\langle \{[0.3,0.4],[0.4,0.5]\},\{[0.1,0.2]\},\{[0.3,0.4]\}\rangle $ be an INHFE, then by the above definition, we have
  • 1. $ S({\tilde{n}_{1}})=\frac{1}{6}\big[4+\frac{1}{2}(0.7+0.9)-(0.1+0.2)-(0.3+0.4)\big]=0.63$;
  • 2. $ A({\tilde{n}_{1}})=\frac{1}{2}\big[\frac{1}{2}(0.7+0.9)-(0.3+0.4)\big]=0.05$;
  • 3. $ C({\tilde{n}_{1}})=\frac{1}{2}\big[\frac{1}{2}(0.7+0.9)\big]=0.4$.
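Definition 9 differs from Definition 7 only in that each hesitant degree is an interval, so the averages run over interval endpoint sums. A short Python sketch (the interval layout and names are ours) reproduces Example 4:

```python
# Score, accuracy and certainty of an INHFE (Definition 9), where each
# membership set holds intervals (lo, hi).

def mean_span(intervals):
    """Average of (lo + hi) over the interval set."""
    return sum(lo + hi for lo, hi in intervals) / len(intervals)

def score(t, i, f):
    return (4 + mean_span(t) - mean_span(i) - mean_span(f)) / 6

def accuracy(t, i, f):
    return (mean_span(t) - mean_span(f)) / 2

def certainty(t, i, f):
    return mean_span(t) / 2

n1 = ([(0.3, 0.4), (0.4, 0.5)], [(0.1, 0.2)], [(0.3, 0.4)])   # Example 4
print(round(score(*n1), 2), round(accuracy(*n1), 2), round(certainty(*n1), 2))
# → 0.63 0.05 0.4
```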
Definition 10.
Let $ {\tilde{n}_{1}}=\langle {\tilde{t}_{1}},{\tilde{i}_{1}},{\tilde{f}_{1}}\rangle $ and $ {\tilde{n}_{2}}=\langle {\tilde{t}_{2}},{\tilde{i}_{2}},{\tilde{f}_{2}}\rangle $ be two INHFEs. Then the following rules can be defined to compare INHFEs:
  • 1. If $ S({\tilde{n}_{1}})>S({\tilde{n}_{2}})$, then $ {\tilde{n}_{1}}$ is greater than $ {\tilde{n}_{2}}$, denoted by $ {\tilde{n}_{1}}\succ {\tilde{n}_{2}}$.
  • 2. If $ S({\tilde{n}_{1}})=S({\tilde{n}_{2}})$ and $ A({\tilde{n}_{1}})>A({\tilde{n}_{2}})$, then $ {\tilde{n}_{1}}$ is greater than $ {\tilde{n}_{2}}$, denoted by $ {\tilde{n}_{1}}\succ {\tilde{n}_{2}}$.
  • 3. If $ S({\tilde{n}_{1}})=S({\tilde{n}_{2}})$, $ A({\tilde{n}_{1}})=A({\tilde{n}_{2}})$ and $ C({\tilde{n}_{1}})>C({\tilde{n}_{2}})$, then $ {\tilde{n}_{1}}$ is greater than $ {\tilde{n}_{2}}$, denoted by $ {\tilde{n}_{1}}\succ {\tilde{n}_{2}}$.
  • 4. If $ S({\tilde{n}_{1}})=S({\tilde{n}_{2}})$, $ A({\tilde{n}_{1}})=A({\tilde{n}_{2}})$ and $ C({\tilde{n}_{1}})=C({\tilde{n}_{2}})$, then $ {\tilde{n}_{1}}$ is equal to $ {\tilde{n}_{2}}$, denoted by $ {\tilde{n}_{1}}\sim {\tilde{n}_{2}}$.
Example 5.
Let $ {\tilde{n}_{1}}=\langle \{[0.3,0.4],[0.4,0.5]\},\{[0.1,0.2]\},\{[0.3,0.4]\}\rangle $ and $ {\tilde{n}_{2}}=\langle \{[0.5,0.6]\},\{[0.1,0.2],[0.2,0.3]\},\{[0.2,0.3]\}\rangle $ be two INHFEs, then by Definition 9, we have
\[\begin{array}{l}\displaystyle S({\tilde{n}_{1}})=0.63,\hspace{2em}A({\tilde{n}_{1}})=0.05,\hspace{2em}C({\tilde{n}_{1}})=0.40;\\ {} \displaystyle S({\tilde{n}_{2}})=0.70,\hspace{2em}A({\tilde{n}_{2}})=0.30,\hspace{2em}C({\tilde{n}_{2}})=0.55.\end{array}\]
Following Definition 10 and the relation $ S({\tilde{n}_{2}})>S({\tilde{n}_{1}})$, we can say $ {\tilde{n}_{2}}\succ {\tilde{n}_{1}}$.
Definition 11 (See Biswas et al., 2018).
Let $ {n_{1}}=\langle {t_{1}},{i_{1}},{f_{1}}\rangle $ and $ {n_{2}}=\langle {t_{2}},{i_{2}},{f_{2}}\rangle $ be two SVNHFEs. Then the normalized Hamming distance between $ {n_{1}}$ and $ {n_{2}}$ is defined as follows:
(6)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle D({n_{1}},{n_{2}})& \displaystyle =& \displaystyle \frac{1}{3}\bigg(\bigg|\frac{1}{{l_{{t_{1}}}}}\sum \limits_{{\gamma _{1}}\in {t_{1}}}{\gamma _{1}}-\frac{1}{{l_{{t_{2}}}}}\sum \limits_{{\gamma _{2}}\in {t_{2}}}{\gamma _{2}}\bigg|+\bigg|\frac{1}{{l_{{i_{1}}}}}\sum \limits_{{\delta _{1}}\in {i_{1}}}{\delta _{1}}-\frac{1}{{l_{{i_{2}}}}}\sum \limits_{{\delta _{2}}\in {i_{2}}}{\delta _{2}}\bigg|\\ {} & & \displaystyle +\bigg|\frac{1}{{l_{{f_{1}}}}}\sum \limits_{{\eta _{1}}\in {f_{1}}}{\eta _{1}}-\frac{1}{{l_{{f_{2}}}}}\sum \limits_{{\eta _{2}}\in {f_{2}}}{\eta _{2}}\bigg|\bigg),\end{array}\]
where $ {l_{{t_{k}}}}$, $ {l_{{i_{k}}}}$ and $ {l_{{f_{k}}}}$ are numbers of possible membership values in $ {n_{k}}$ for $ k=1,2$.
Example 6.
Let $ {n_{1}}=\langle \{0.3,0.4,0.5\},\{0.1\},\{0.3,0.4\}\rangle $ and $ {n_{2}}=\langle \{0.6,0.7\},\{0.1,0.2\},\{0.2,0.3\}\rangle $ be two SVNHFEs, then by the above definition, we have the distance measure between $ {n_{1}}$ and $ {n_{2}}$
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle D({n_{1}},{n_{2}})& \displaystyle =& \displaystyle \frac{1}{3}\bigg(\bigg|\frac{1}{3}(0.3+0.4+0.5)-\frac{1}{2}(0.6+0.7)\bigg|+\bigg|0.1-\frac{1}{2}(0.1+0.2)\bigg|\\ {} & & \displaystyle +\bigg|\frac{1}{2}(0.3+0.4)-\frac{1}{2}(0.2+0.3)\bigg|\bigg)\\ {} & \displaystyle =& \displaystyle 0.1333.\end{array}\]
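Eq. (6) averages each hesitant component before taking absolute differences. A minimal Python sketch (the representation and the function name are ours) reproduces the value of Example 6:

```python
# Normalized Hamming distance between two SVNHFEs (Definition 11), each
# represented as a triple (t, i, f) of lists of membership degrees.

def mean(xs):
    return sum(xs) / len(xs)

def hamming(n1, n2):
    (t1, i1, f1), (t2, i2, f2) = n1, n2
    return (abs(mean(t1) - mean(t2))
            + abs(mean(i1) - mean(i2))
            + abs(mean(f1) - mean(f2))) / 3

n1 = ([0.3, 0.4, 0.5], [0.1], [0.3, 0.4])   # the SVNHFEs of Example 6
n2 = ([0.6, 0.7], [0.1, 0.2], [0.2, 0.3])
print(round(hamming(n1, n2), 4))             # → 0.1333
```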
Definition 12 (See Biswas et al., 2018).
Let $ {\tilde{n}_{1}}=\langle {\tilde{t}_{1}},{\tilde{i}_{1}},{\tilde{f}_{1}}\rangle $ and $ {\tilde{n}_{2}}=\langle {\tilde{t}_{2}},{\tilde{i}_{2}},{\tilde{f}_{2}}\rangle $ be two INHFEs. Then the normalized Hamming distance between $ {\tilde{n}_{1}}$ and $ {\tilde{n}_{2}}$ is defined as follows:
(7)
\[\begin{array}{l}\displaystyle \tilde{D}({\tilde{n}_{1}},{\tilde{n}_{2}})\\ {} \displaystyle \hspace{1em}=\frac{1}{6}\left(\hspace{-0.1667em}\hspace{-0.1667em}\begin{array}{l}\big|\frac{1}{{l_{{\tilde{t}_{1}}}}}{\textstyle\sum _{{\gamma _{1}}\in {\tilde{t}_{1}}}}{\gamma _{1}^{L}}-\frac{1}{{l_{{\tilde{t}_{2}}}}}{\textstyle\sum _{{\gamma _{2}}\in {\tilde{t}_{2}}}}{\gamma _{2}^{L}}\big|+\big|\frac{1}{{l_{{\tilde{t}_{1}}}}}{\textstyle\sum _{{\gamma _{1}}\in {\tilde{t}_{1}}}}{\gamma _{1}^{U}}-\frac{1}{{l_{{\tilde{t}_{2}}}}}{\textstyle\sum _{{\gamma _{2}}\in {\tilde{t}_{2}}}}{\gamma _{2}^{U}}\big|\\ {} \hspace{2.5pt}\hspace{2.5pt}+\big|\frac{1}{{l_{{\tilde{i}_{1}}}}}{\textstyle\sum _{{\delta _{1}}\in {\tilde{i}_{1}}}}{\delta _{1}^{L}}-\frac{1}{{l_{{\tilde{i}_{2}}}}}{\textstyle\sum _{{\delta _{2}}\in {\tilde{i}_{2}}}}{\delta _{2}^{L}}\big|+\big|\frac{1}{{l_{{\tilde{i}_{1}}}}}{\textstyle\sum _{{\delta _{1}}\in {\tilde{i}_{1}}}}{\delta _{1}^{U}}-\frac{1}{{l_{{\tilde{i}_{2}}}}}{\textstyle\sum _{{\delta _{2}}\in {\tilde{i}_{2}}}}{\delta _{2}^{U}}\big|\\ {} \hspace{2.5pt}\hspace{2.5pt}+\big|\frac{1}{{l_{{\tilde{f}_{1}}}}}{\textstyle\sum _{{\eta _{1}}\in {\tilde{f}_{1}}}}{\eta _{1}^{L}}-\frac{1}{{l_{{\tilde{f}_{2}}}}}{\textstyle\sum _{{\eta _{2}}\in {\tilde{f}_{2}}}}{\eta _{2}^{L}}\big|+\big|\frac{1}{{l_{{\tilde{f}_{1}}}}}{\textstyle\sum _{{\eta _{1}}\in {\tilde{f}_{1}}}}{\eta _{1}^{U}}-\frac{1}{{l_{{\tilde{f}_{2}}}}}{\textstyle\sum _{{\eta _{2}}\in {\tilde{f}_{2}}}}{\eta _{2}^{U}}\big|\end{array}\hspace{-0.1667em}\hspace{-0.1667em}\right),\end{array}\]
where $ {l_{{\tilde{t}_{k}}}}$, $ {l_{{\tilde{i}_{k}}}}$ and $ {l_{{\tilde{f}_{k}}}}$ are the numbers of possible membership values in $ {\tilde{n}_{k}}$ for $ k=1,2$.
Example 7.
Let $ {\tilde{n}_{1}}=\langle \{[0.3,0.4],[0.4,0.5]\},\{[0.1,0.2]\},\{[0.3,0.4]\}\rangle $ and $ {\tilde{n}_{2}}=\langle \{[0.5,0.6]\},\{[0.1,0.2],[0.2,0.3]\},\{[0.2,0.3]\}\rangle $ be two INHFEs. Using the above definition, we have the distance measure between $ {\tilde{n}_{1}}$ and $ {\tilde{n}_{2}}$
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle \tilde{D}({\tilde{n}_{1}},{\tilde{n}_{2}})& \displaystyle =& \displaystyle \frac{1}{6}\left(\begin{array}{l}|\frac{1}{2}(0.3+0.4)-0.5|+|\frac{1}{2}(0.4+0.5)-0.6|\\ {} \hspace{1em}+|0.1-\frac{1}{2}(0.1+0.2)|+|0.2-\frac{1}{2}(0.2+0.3)|\\ {} \hspace{1em}+|0.3-0.2|+|0.4-0.3|\end{array}\right)\\ {} & \displaystyle =& \displaystyle \frac{1}{6}(0.15+0.15+0.05+0.05+0.10+0.10)=0.10.\end{array}\]
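Eq. (7) applies the same averaging separately to the lower and upper interval limits. A Python sketch (the layout and names are ours) reproduces the value of Example 7:

```python
# Normalized Hamming distance between two INHFEs (Definition 12), with
# each membership set a list of (lo, hi) intervals.

def mean_at(intervals, k):
    """Average of the k-th endpoint (0 = lower, 1 = upper)."""
    return sum(iv[k] for iv in intervals) / len(intervals)

def hamming(n1, n2):
    total = 0.0
    for s1, s2 in zip(n1, n2):      # the t, i, f components in turn
        for k in (0, 1):             # lower and upper limits
            total += abs(mean_at(s1, k) - mean_at(s2, k))
    return total / 6

n1 = ([(0.3, 0.4), (0.4, 0.5)], [(0.1, 0.2)], [(0.3, 0.4)])   # Example 7
n2 = ([(0.5, 0.6)], [(0.1, 0.2), (0.2, 0.3)], [(0.2, 0.3)])
print(round(hamming(n1, n2), 2))                               # → 0.1
```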

3 TOPSIS Method for MADM with SVNHFS Information

In this section, we propose a TOPSIS method to find the best alternative in MADM with SVNHFSs. Let $ A=\{{A_{1}},{A_{2}},\dots ,{A_{m}}\}$ be the discrete set of m alternatives and $ C=\{{C_{1}},{C_{2}},\dots ,{C_{n}}\}$ be the set of n attributes of an SVNHFS based multi-attribute decision making problem. Also, assume that the rating value of the i-th alternative $ {A_{i}}$ $ (i=1,2,\dots ,m)$ over the attribute $ {C_{j}}$ $ (j=1,2,\dots ,n)$ is given by the SVNHFE $ {x_{ij}}=({t_{ij}},{i_{ij}},{f_{ij}})$, where $ {t_{ij}}=\{{\gamma _{ij}}\mid {\gamma _{ij}}\in {t_{ij}},0\leqslant {\gamma _{ij}}\leqslant 1\}$, $ {i_{ij}}=\{{\delta _{ij}}\mid {\delta _{ij}}\in {i_{ij}},0\leqslant {\delta _{ij}}\leqslant 1\}$ and $ {f_{ij}}=\{{\eta _{ij}}\mid {\eta _{ij}}\in {f_{ij}},0\leqslant {\eta _{ij}}\leqslant 1\}$ indicate the possible truth, indeterminacy and falsity membership degrees of the i-th alternative $ {A_{i}}$ over the j-th attribute $ {C_{j}}$ for $ i=1,2,\dots ,m$ and $ j=1,2,\dots ,n$. Then we can construct an SVNHFS based decision matrix $ X={({x_{ij}})_{m\times n}}$, whose entries are the SVNHFEs $ {x_{ij}}$, written as
(8)
\[ X=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}{x_{11}}\hspace{1em}& {x_{12}}\hspace{1em}& \cdots \hspace{1em}& {x_{1n}}\\ {} {x_{21}}\hspace{1em}& {x_{22}}\hspace{1em}& \cdots \hspace{1em}& {x_{2n}}\\ {} \vdots \hspace{1em}& \vdots \hspace{1em}& \ddots \hspace{1em}& \vdots \\ {} {x_{m1}}\hspace{1em}& {x_{m2}}\hspace{1em}& \cdots \hspace{1em}& {x_{mn}}\end{array}\right].\]
Now, we extend the TOPSIS method for MADM in the single-valued neutrosophic hesitant fuzzy environment. Before discussing the details, we briefly mention some important steps of the proposed model. First, we consider the weights of the attributes, which may be known, incompletely known or completely unknown. When the weights are known, we can directly employ them in the TOPSIS method with SVNHFSs. A problem arises in the latter two cases, because we cannot employ incomplete or unknown weights directly in the TOPSIS method under the neutrosophic hesitant fuzzy environment. To deal with this issue, we develop optimization models to determine the exact weights of the attributes using the maximum deviation method (Yingming, 1997). Following the TOPSIS method, we then determine the Hamming distance of each alternative from the positive and negative ideal solutions. Finally, we obtain the relative closeness coefficient of each alternative to determine the most preferred alternative.
We elaborate the following steps used in the proposed model.
Step 1.
Determine the weights of attributes.
Case 1a.
If the information of attribute weights is completely known and is given as $ w={({w_{1}},{w_{2}},\dots ,{w_{n}})^{T}}$, with $ {w_{j}}\in [0,1]$ and $ {\textstyle\sum _{j=1}^{n}}{w_{j}}=1$, then go to Step 2.
However, in real decision making, due to time pressure, lack of knowledge, or the decision makers’ limited expertise in the problem domain, the information about the attribute weights is often incompletely known or completely unknown. In this situation, when the attribute weights are partially known or completely unknown, we use the maximizing deviation method proposed by Yingming (1997) to deal with MADM problems. For an MADM problem, Yingming suggested that an attribute with a larger deviation among the alternatives should be assigned a larger weight, an attribute with a smaller deviation should be assigned a smaller weight, and an attribute with no deviation should be assigned zero weight.
Now, we develop an optimization model based on maximizing deviation method to determine the optimal relative weights of attributes under SVNHF environment. For the attribute $ {C_{j}}\in C$, the deviation of alternative $ {A_{i}}$ to all the other alternatives can be defined as
(9)
\[ {D_{ij}}(w)={\sum \limits_{k=1}^{m}}{w_{j}}D({x_{ij}},{x_{kj}}),\hspace{1em}\text{for}\hspace{5pt}i=1,2,\dots ,m;\hspace{2.5pt}j=1,2,\dots ,n.\]
Following Eq. (6), the Hamming distance $ D({x_{ij}},{x_{kj}})$ in Eq. (9) is given by
(10)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle D({x_{ij}},{x_{kj}})& \displaystyle =& \displaystyle \frac{1}{3}\left(\begin{array}{l}\big|\frac{1}{{l_{{t_{ij}}}}}{\textstyle\sum _{{\gamma _{ij}}\in {t_{ij}}}}{\gamma _{ij}}-\frac{1}{{l_{{t_{kj}}}}}{\textstyle\sum _{{\gamma _{kj}}\in {t_{kj}}}}{\gamma _{kj}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{i_{ij}}}}}{\textstyle\sum _{{\delta _{ij}}\in {i_{ij}}}}{\delta _{ij}}-\frac{1}{{l_{{i_{kj}}}}}{\textstyle\sum _{{\delta _{kj}}\in {i_{kj}}}}{\delta _{kj}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{f_{ij}}}}}{\textstyle\sum _{{\eta _{ij}}\in {f_{ij}}}}{\eta _{ij}}-\frac{1}{{l_{{f_{kj}}}}}{\textstyle\sum _{{\eta _{kj}}\in {f_{kj}}}}{\eta _{kj}}\big|\end{array}\right)\\ {} & \displaystyle =& \displaystyle \frac{1}{3}\big(\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})+\Delta F({x_{ij}},{x_{kj}})\big),\end{array}\]
where
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle \Delta T({x_{ij}},{x_{kj}})& \displaystyle =& \displaystyle \bigg|\frac{1}{{l_{{t_{ij}}}}}\sum \limits_{{\gamma _{ij}}\in {t_{ij}}}{\gamma _{ij}}-\frac{1}{{l_{{t_{kj}}}}}\sum \limits_{{\gamma _{kj}}\in {t_{kj}}}{\gamma _{kj}}\bigg|,\\ {} \displaystyle \Delta I({x_{ij}},{x_{kj}})& \displaystyle =& \displaystyle \bigg|\frac{1}{{l_{{i_{ij}}}}}\sum \limits_{{\delta _{ij}}\in {i_{ij}}}{\delta _{ij}}-\frac{1}{{l_{{i_{kj}}}}}\sum \limits_{{\delta _{kj}}\in {i_{kj}}}{\delta _{kj}}\bigg|,\\ {} \displaystyle \Delta F({x_{ij}},{x_{kj}})& \displaystyle =& \displaystyle \bigg|\frac{1}{{l_{{f_{ij}}}}}\sum \limits_{{\eta _{ij}}\in {f_{ij}}}{\eta _{ij}}-\frac{1}{{l_{{f_{kj}}}}}\sum \limits_{{\eta _{kj}}\in {f_{kj}}}{\eta _{kj}}\bigg|,\end{array}\]
and $ {l_{{t_{ij}}}}$, $ {l_{{i_{ij}}}}$ and $ {l_{{f_{ij}}}}$ denote the numbers of possible membership values in $ {x_{ij}}$ ($ {l_{{t_{kj}}}}$, $ {l_{{i_{kj}}}}$ and $ {l_{{f_{kj}}}}$ being defined analogously for $ {x_{kj}}$).
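Since Eq. (10) compares only the averages of the hesitant components, it is straightforward to compute. The following Python sketch illustrates it (the function names are ours, not from the paper):

```python
def mean(values):
    """Average of the possible membership values in a hesitant component."""
    return sum(values) / len(values)

def svnhfe_distance(x, y):
    """Hamming distance of Eq. (10) between two SVNHFEs.

    Each element is a triple (t, i, f) of lists of possible truth,
    indeterminacy and falsity membership values.
    """
    (t1, i1, f1), (t2, i2, f2) = x, y
    dT = abs(mean(t1) - mean(t2))   # deviation of truth components
    dI = abs(mean(i1) - mean(i2))   # deviation of indeterminacy components
    dF = abs(mean(f1) - mean(f2))   # deviation of falsity components
    return (dT + dI + dF) / 3.0

# Two SVNHFEs taken from Table 1 (A1 and A2 under C1):
x11 = ([0.3, 0.4, 0.5], [0.1], [0.3, 0.4])
x21 = ([0.6, 0.7], [0.1, 0.2], [0.2, 0.3])
d = svnhfe_distance(x11, x21)   # ΔT = 0.25, ΔI = 0.05, ΔF = 0.10
```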
We now consider the deviation values of all alternatives to the other alternatives for the attribute $ {C_{j}}\in C$ $ (j=1,2,\dots ,n)$:
(11)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {D_{j}}(w)& \displaystyle =& \displaystyle {\sum \limits_{i=1}^{m}}{D_{ij}}({w_{j}})\\ {} & \displaystyle =& \displaystyle {\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\frac{{w_{j}}}{3}\big(\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})+\Delta F({x_{ij}},{x_{kj}})\big).\end{array}\]
Case 2a:
The information about the attribute weights is incomplete.
In this case, we develop some model to determine the attribute weights. Suppose that the attribute’s incomplete weight information H is given by
  • 1. A weak ranking: $ \{{w_{i}}\geqslant {w_{j}}\}$, $ i\ne j$;
  • 2. A strict ranking: $ \{{w_{i}}-{w_{j}}\geqslant {\epsilon _{i}}(>0)\}$, $ i\ne j$;
  • 3. A ranking of difference: $ \{{w_{i}}-{w_{j}}\geqslant {w_{k}}-{w_{p}}\}$, $ i\ne j\ne k\ne p$;
  • 4. A ranking with multiples: $ \{{w_{i}}\geqslant {\alpha _{i}}{w_{j}}\}$, $ 0\leqslant {\alpha _{i}}\leqslant 1,i\ne j$;
  • 5. An interval form: $ \{{\beta _{i}}\leqslant {w_{i}}\leqslant {\beta _{i}}+{\epsilon _{i}}(>0)\}$, $ 0\leqslant {\beta _{i}}\leqslant {\beta _{i}}+{\epsilon _{i}}\leqslant 1$.
For these cases, we construct the following constrained optimization model based on the set of known weight information H:
(12)
\[ \text{M--1.}\hspace{2.5pt}\hspace{1em}\left\{\begin{array}{l}\max D(w)={\textstyle\sum \limits_{j=1}^{n}}{\textstyle\sum \limits_{i=1}^{m}}{\textstyle\sum \limits_{k=1}^{m}}\frac{{w_{j}}}{3}\left(\begin{array}{l}\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})\\ {} \phantom{\Delta T({x_{ij}},{x_{kj}})}+\Delta F({x_{ij}},{x_{kj}})\end{array}\right),\\ {} \text{subject to}\hspace{2.5pt}\hspace{1em}w\in H,\hspace{2.5pt}{w_{j}}\geqslant 0,\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{w_{j}}=1,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
By solving Model-1, we can obtain the optimal solution $ w={({w_{1}},{w_{2}},\dots ,{w_{n}})^{T}}$, which can be used as the weight vector of the attributes to proceed to Step 2.
Case 3a:
The information about the attribute weights is completely unknown.
In this case, we develop the following non-linear programming model to determine the weight vector w that maximizes all deviation values for all the attributes:
(13)
\[ \text{M--2.}\hspace{2.5pt}\hspace{1em}\left\{\begin{array}{l}\max D(w)={\textstyle\sum \limits_{j=1}^{n}}{\textstyle\sum \limits_{i=1}^{m}}{\textstyle\sum \limits_{k=1}^{m}}\frac{{w_{j}}}{3}\left(\begin{array}{l}\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})\\ {} \phantom{\Delta T({x_{ij}},{x_{kj}})}+\Delta F({x_{ij}},{x_{kj}})\end{array}\right),\\ {} \text{s.t.}\hspace{2.5pt}\hspace{1em}{w_{j}}\geqslant 0,\hspace{2.5pt}j=1,2,\dots ,n;\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{w_{j}^{2}}=1.\end{array}\right.\]
The Lagrange function corresponding to the above constrained optimization problem is given by
(14)
\[ L(w,\lambda )={\sum \limits_{j=1}^{n}}{\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\frac{{w_{j}}}{3}\left(\begin{array}{l}\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})\\ {} \phantom{\Delta T({x_{ij}},{x_{kj}})}+\Delta F({x_{ij}},{x_{kj}})\end{array}\right)+\frac{\lambda }{6}\Bigg({\sum \limits_{j=1}^{n}}{w_{j}^{2}}-1\Bigg),\]
where λ is a real number denoting the Lagrange multiplier. The partial derivatives of L with respect to $ {w_{j}}$ and λ are given by
(15)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle \frac{\partial L}{\partial {w_{j}}}& \displaystyle =& \displaystyle \frac{1}{3}{\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\left(\begin{array}{l}\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})\\ {} \phantom{\Delta T({x_{ij}},{x_{kj}})}+\Delta F({x_{ij}},{x_{kj}})\end{array}\right)+\frac{\lambda }{3}{w_{j}}=0,\end{array}\]
(16)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle \frac{\partial L}{\partial \lambda }& \displaystyle =& \displaystyle \frac{1}{6}\Bigg({\sum \limits_{j=1}^{n}}{w_{j}^{2}}-1\Bigg)=0.\end{array}\]
It follows from Eq. (15) that
(17)
\[ {w_{j}}=-\Bigg({\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\big(\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})+\Delta F({x_{ij}},{x_{kj}})\big)\Bigg)\Big/\lambda ,\]
for $ j=1,2,\dots ,n$.
Putting this value of $ {w_{j}}$ in (16), we get
(18)
\[ {\lambda ^{2}}={\sum \limits_{j=1}^{n}}{\Bigg({\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\big(\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})+\Delta F({x_{ij}},{x_{kj}})\big)\Bigg)^{2}}\]
or
(19)
\[ \lambda =-\sqrt{{\sum \limits_{j=1}^{n}}{\Bigg({\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\big(\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})+\Delta F({x_{ij}},{x_{kj}})\big)\Bigg)^{2}}},\]
where $ \lambda <0$ and
\[ {\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}(\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})+\Delta F({x_{ij}},{x_{kj}}))\]
represents the sum of deviations of all the alternatives under the j-th attribute and
\[ {\sum \limits_{j=1}^{n}}{({\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}(\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})+\Delta F({x_{ij}},{x_{kj}})))^{2}}\]
represents the sum of squared deviations of all the alternatives over all the attributes.
Then by combining equations (17) and (19), we obtain weight $ {w_{j}}$ for $ j=1,2,\dots ,n$ as
(20)
\[ {w_{j}}=\frac{{\textstyle\textstyle\sum _{i=1}^{m}}{\textstyle\textstyle\sum _{k=1}^{m}}(\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})+\Delta F({x_{ij}},{x_{kj}}))}{\sqrt{{\textstyle\textstyle\sum _{j=1}^{n}}{({\textstyle\textstyle\sum _{i=1}^{m}}{\textstyle\textstyle\sum _{k=1}^{m}}(\Delta T({x_{ij}},{x_{kj}})+\Delta I({x_{ij}},{x_{kj}})+\Delta F({x_{ij}},{x_{kj}})))^{2}}}}.\]
Finally, we normalize the weights $ {w_{j}}$ $ (j=1,2,\dots ,n)$ so that they sum to unity:
(21)
\[ {w_{j}^{N}}=\frac{{w_{j}}}{{\textstyle\textstyle\sum _{j=1}^{n}}{w_{j}}},\hspace{1em}j=1,2,\dots ,n;\]
and consequently, we obtain the weight vector of the attribute as
\[ W=\big({w_{1}^{N}},{w_{2}^{N}},\dots ,{w_{n}^{N}}\big)\]
for proceeding to Step-2.
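Equations (20)–(21) can be condensed into a few lines: the unnormalized weight of an attribute is its total deviation divided by the Euclidean norm of all the totals, and the subsequent normalization makes the final weights simply proportional to the deviation totals. A minimal sketch (the function name and the toy deviation sums are ours):

```python
def max_deviation_weights(dev_sums):
    """Weights from Eqs. (20)-(21): w_j is proportional to the total
    deviation of attribute j, then normalized to sum to one.

    dev_sums[j] = sum over i, k of ΔT + ΔI + ΔF for attribute j.
    """
    norm = sum(s * s for s in dev_sums) ** 0.5   # denominator of Eq. (20)
    w = [s / norm for s in dev_sums]             # Eq. (20)
    total = sum(w)
    return [wj / total for wj in w]              # Eq. (21)

# Toy deviation sums for three attributes:
wN = max_deviation_weights([5.4, 3.5, 5.9])
```

After normalization, the result equals `dev_sums / sum(dev_sums)`, so the square root in Eq. (20) cancels out; it matters only for the intermediate, unnormalized weights.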
Step 2.
Determine the positive ideal alternative and negative ideal alternative.
From decision matrix $ X={({x_{ij}})_{m\times n}}$, we can determine the single valued neutrosophic hesitant fuzzy positive ideal solution (SVNHFPIS) $ {A^{+}}$ and the single valued neutrosophic hesitant fuzzy negative ideal solution (SVNHFNIS) $ {A^{-}}$ of alternatives as follows:
(22)
\[\begin{array}{l}\displaystyle {A^{+}}=\big({A_{1}^{+}},{A_{2}^{+}},\dots ,{A_{n}^{+}}\big)\\ {} \displaystyle \phantom{{A^{+}}}=\Big\{\Big\langle \underset{i}{\max }\big\{{\gamma _{ij}^{\sigma (p)}}\big\},\underset{i}{\min }\big\{{\delta _{ij}^{\sigma (q)}}\big\},\underset{i}{\min }\big\{{\eta _{ij}^{\sigma (r)}}\big\}\Big\rangle |i=1,2,\dots ,m;j=1,2,\dots ,n\Big\},\end{array}\]
(23)
\[\begin{array}{l}\displaystyle {A^{-}}=\big({A_{1}^{-}},{A_{2}^{-}},\dots ,{A_{n}^{-}}\big)\\ {} \displaystyle \phantom{{A^{-}}}=\Big\{\Big\langle \underset{i}{\min }\big\{{\gamma _{ij}^{\sigma (p)}}\big\},\underset{i}{\max }\big\{{\delta _{ij}^{\sigma (q)}}\big\},\underset{i}{\max }\big\{{\eta _{ij}^{\sigma (r)}}\big\}\Big\rangle |i=1,2,\dots ,m\hspace{2.5pt}\text{and}\hspace{5pt}j=1,2,\dots ,n\Big\}.\end{array}\]
Here, we compare the attribute values $ {x_{ij}}$ by using score, accuracy and certainty values of SVNHFEs defined in Definition 7.
Step 3.
Determine the distance measure from the ideal alternatives to each alternative.
In order to determine the distance measure between the positive ideal alternative $ {A^{+}}$ and the alternative $ {A_{i}}$, we use the following equation:
(24)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {D_{i}^{+}}& \displaystyle =& \displaystyle {\sum \limits_{j=1}^{n}}{w_{j}}D\big({x_{ij}},{x_{j}^{+}}\big)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{j=1}^{n}}\frac{{w_{j}}}{3}\left(\begin{array}{l}\big|\frac{1}{{l_{{t_{ij}}}}}{\textstyle\sum _{{\gamma _{ij}}\in {t_{ij}}}}{\gamma _{ij}}-\frac{1}{{l_{{t_{j}^{+}}}}}{\textstyle\sum _{{\gamma _{j}^{+}}\in {t_{j}^{+}}}}{\gamma _{j}^{+}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{i_{ij}}}}}{\textstyle\sum _{{\delta _{ij}}\in {i_{ij}}}}{\delta _{ij}}-\frac{1}{{l_{{i_{j}^{+}}}}}{\textstyle\sum _{{\delta _{j}^{+}}\in {i_{j}^{+}}}}{\delta _{j}^{+}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{f_{ij}}}}}{\textstyle\sum _{{\eta _{ij}}\in {f_{ij}}}}{\eta _{ij}}-\frac{1}{{l_{{f_{j}^{+}}}}}{\textstyle\sum _{{\eta _{j}^{+}}\in {f_{j}^{+}}}}{\eta _{j}^{+}}\big|\end{array}\right)\end{array}\]
for $ i=1,2,\dots ,m$. Similarly, we can determine the distance measure between the negative ideal alternative $ {A^{-}}$ and the alternative $ {A_{i}}$ $ (i=1,2,\dots ,m)$ by the following equation:
(25)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {D_{i}^{-}}& \displaystyle =& \displaystyle {\sum \limits_{j=1}^{n}}{w_{j}}D\big({x_{ij}},{x_{j}^{-}}\big)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{j=1}^{n}}\frac{{w_{j}}}{3}\left(\begin{array}{l}\big|\frac{1}{{l_{{t_{ij}}}}}{\textstyle\sum _{{\gamma _{ij}}\in {t_{ij}}}}{\gamma _{ij}}-\frac{1}{{l_{{t_{j}^{-}}}}}{\textstyle\sum _{{\gamma _{j}^{-}}\in {t_{j}^{-}}}}{\gamma _{j}^{-}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{i_{ij}}}}}{\textstyle\sum _{{\delta _{ij}}\in {i_{ij}}}}{\delta _{ij}}-\frac{1}{{l_{{i_{j}^{-}}}}}{\textstyle\sum _{{\delta _{j}^{-}}\in {i_{j}^{-}}}}{\delta _{j}^{-}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{f_{ij}}}}}{\textstyle\sum _{{\eta _{ij}}\in {f_{ij}}}}{\eta _{ij}}-\frac{1}{{l_{{f_{j}^{-}}}}}{\textstyle\sum _{{\eta _{j}^{-}}\in {f_{j}^{-}}}}{\eta _{j}^{-}}\big|\end{array}\right)\end{array}\]
for $ i=1,2,\dots ,m$.
Step 4.
Determine the relative closeness coefficient.
We determine the relative closeness coefficient $ R{C_{i}}$ for each alternative $ {A_{i}}$ $ (i=1,2,\dots ,m)$ with respect to SVNHFPIS $ {A^{+}}$ by using the following equation:
(26)
\[ R{C_{i}}=\frac{{D_{i}^{-}}}{{D_{i}^{+}}+{D_{i}^{-}}}\hspace{1em}\text{for}\hspace{5pt}i=1,2,\dots ,m,\]
where $ 0\leqslant R{C_{i}}\leqslant 1$ $ (i=1,2,\dots ,m)$. We observe that an alternative $ {A_{i}}$ is closer to the SVNHFPIS $ {A^{+}}$ and farther from the SVNHFNIS $ {A^{-}}$ as $ R{C_{i}}$ approaches unity.
Step 5.
Rank the alternatives.
We can rank the alternatives according to the descending order of relative closeness coefficient values of alternatives to determine the best alternative from a set of feasible alternatives.
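Steps 4 and 5 can be sketched together: compute the relative closeness of Eq. (26) and sort the alternatives by descending closeness value (the function name and the toy distance values below are ours):

```python
def rank_by_closeness(d_plus, d_minus):
    """Relative closeness RC_i = D_i^- / (D_i^+ + D_i^-) of Eq. (26);
    alternatives are ranked by descending RC."""
    rc = [dm / (dp + dm) for dp, dm in zip(d_plus, d_minus)]
    order = sorted(range(len(rc)), key=lambda i: rc[i], reverse=True)
    return rc, order

# Toy distances for four alternatives:
rc, order = rank_by_closeness([0.30, 0.10, 0.25, 0.15],
                              [0.10, 0.35, 0.20, 0.30])
# order[0] is the index of the best alternative.
```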
Step 6.
End.

4 TOPSIS Method for MADM with INHFS Information

In this section, we further extend the proposed model into interval neutrosophic hesitant fuzzy environment.
For an MADM problem, let $ A=({A_{1}},{A_{2}},\dots ,{A_{m}})$ be a set of alternatives, $ C=({C_{1}},{C_{2}},\dots ,{C_{n}})$ be a set of attributes, and $ \tilde{W}={({\tilde{w}_{1}},{\tilde{w}_{2}},\dots ,{\tilde{w}_{n}})^{T}}$ be the weight vector of the attributes such that $ {\tilde{w}_{j}}\in [0,1]$ and $ {\textstyle\sum _{j=1}^{n}}{\tilde{w}_{j}}=1$.
Suppose that $ \tilde{X}={({\tilde{x}_{ij}})_{m\times n}}$ is the decision matrix, where $ {\tilde{x}_{ij}}=({\tilde{t}_{ij}},{\tilde{i}_{ij}},{\tilde{f}_{ij}})$ is the INHFS value of the alternative $ {A_{i}}$ with respect to the attribute $ {C_{j}}$, and $ {\tilde{t}_{ij}}$, $ {\tilde{i}_{ij}}$, and $ {\tilde{f}_{ij}}$ are the truth, indeterminacy and falsity membership degrees, respectively. The decision matrix is given by
(27)
\[ \tilde{X}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}{\tilde{x}_{11}}\hspace{1em}& {\tilde{x}_{12}}\hspace{1em}& \cdots \hspace{1em}& {\tilde{x}_{1n}}\\ {} {\tilde{x}_{21}}\hspace{1em}& {\tilde{x}_{22}}\hspace{1em}& \cdots \hspace{1em}& {\tilde{x}_{2n}}\\ {} \vdots \hspace{1em}& \vdots \hspace{1em}& \ddots \hspace{1em}& \vdots \\ {} {\tilde{x}_{m1}}\hspace{1em}& {\tilde{x}_{m2}}\hspace{1em}& \cdots \hspace{1em}& {\tilde{x}_{mn}}\end{array}\right].\]
Now, we develop TOPSIS method based on INHFS when the attribute weights are completely known, partially known or completely unknown.
Step 1.
Determine the weights of the attributes.
We suppose that attribute weights are completely known, partially known or completely unknown. We use the maximum deviation method when the attribute weights are partially known or completely unknown.
Case 1b.
The information about the attribute weights is completely known.
Assume the attribute weights as $ \tilde{w}$ = $ {({\tilde{w}_{1}},{\tilde{w}_{2}},\dots ,{\tilde{w}_{n}})^{T}}$ with $ {\tilde{w}_{j}}\in [0,1]$ and $ {\textstyle\sum _{j=1}^{n}}{\tilde{w}_{j}}=1$ and then go to Step 2.
For partially known or completely unknown attribute weights, we calculate the deviation values of the alternative $ {A_{i}}$ to other alternatives under the attribute $ {C_{j}}$ defined as follows:
(28)
\[ {\tilde{D}_{ij}}(\tilde{w})={\sum \limits_{k=1}^{m}}{\tilde{w}_{j}}D({\tilde{x}_{ij}},{\tilde{x}_{kj}}),\hspace{1em}\text{for}\hspace{5pt}i=1,2,\dots ,m;j=1,2,\dots ,n.\]
Using (7), the Hamming distance $ \tilde{D}({\tilde{x}_{ij}},{\tilde{x}_{kj}})$ is obtained as
(29)
\[\begin{array}{l}\displaystyle \tilde{D}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\\ {} \displaystyle \hspace{1em}=\displaystyle \frac{1}{6}\left(\hspace{-0.1667em}\hspace{-0.1667em}\begin{array}{l}\big|\frac{1}{{l_{{\tilde{t}_{ij}}}}}{\textstyle\sum _{{\tilde{\gamma }_{ij}}\in {\tilde{t}_{ij}}}}{\gamma _{ij}^{L}}-\frac{1}{{l_{{\tilde{t}_{kj}}}}}{\textstyle\sum _{{\tilde{\gamma }_{kj}}\in {\tilde{t}_{kj}}}}{\gamma _{kj}^{L}}\big|+\big|\frac{1}{{l_{{\tilde{t}_{ij}}}}}{\textstyle\sum _{{\tilde{\gamma }_{ij}}\in {\tilde{t}_{ij}}}}{\gamma _{ij}^{U}}-\frac{1}{{l_{{\tilde{t}_{kj}}}}}{\textstyle\sum _{{\tilde{\gamma }_{kj}}\in {\tilde{t}_{kj}}}}{\gamma _{kj}^{U}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{\tilde{i}_{ij}}}}}{\textstyle\sum _{{\tilde{\delta }_{ij}}\in {\tilde{i}_{ij}}}}{\delta _{ij}^{L}}-\frac{1}{{l_{{\tilde{i}_{kj}}}}}{\textstyle\sum _{{\tilde{\delta }_{kj}}\in {\tilde{i}_{kj}}}}{\delta _{kj}^{L}}\big|+\big|\frac{1}{{l_{{\tilde{i}_{ij}}}}}{\textstyle\sum _{{\tilde{\delta }_{ij}}\in {\tilde{i}_{ij}}}}{\delta _{ij}^{U}}-\frac{1}{{l_{{\tilde{i}_{kj}}}}}{\textstyle\sum _{{\tilde{\delta }_{kj}}\in {\tilde{i}_{kj}}}}{\delta _{kj}^{U}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{\tilde{f}_{ij}}}}}{\textstyle\sum _{{\tilde{\eta }_{ij}}\in {\tilde{f}_{ij}}}}{\eta _{ij}^{L}}-\frac{1}{{l_{{\tilde{f}_{kj}}}}}{\textstyle\sum _{{\tilde{\eta }_{kj}}\in {\tilde{f}_{kj}}}}{\eta _{kj}^{L}}\big|+\big|\frac{1}{{l_{{\tilde{f}_{ij}}}}}{\textstyle\sum _{{\tilde{\eta }_{ij}}\in {\tilde{f}_{ij}}}}{\eta _{ij}^{U}}-\frac{1}{{l_{{\tilde{f}_{kj}}}}}{\textstyle\sum _{{\tilde{\eta }_{kj}}\in {\tilde{f}_{kj}}}}{\eta _{kj}^{U}}\big|\end{array}\hspace{-0.1667em}\hspace{-0.1667em}\right)\\ {} \displaystyle \hspace{1em}=\frac{1}{6}\big(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\big),\end{array}\]
where
\[\begin{array}{l}\displaystyle \Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\\ {} \displaystyle \hspace{1em}=\bigg|\frac{1}{{l_{{\tilde{t}_{ij}}}}}\sum \limits_{{\tilde{\gamma }_{ij}}\in {\tilde{t}_{ij}}}{\gamma _{ij}^{L}}-\frac{1}{{l_{{\tilde{t}_{kj}}}}}\sum \limits_{{\tilde{\gamma }_{kj}}\in {\tilde{t}_{kj}}}{\gamma _{kj}^{L}}\bigg|+\bigg|\frac{1}{{l_{{\tilde{t}_{ij}}}}}\sum \limits_{{\tilde{\gamma }_{ij}}\in {\tilde{t}_{ij}}}{\gamma _{ij}^{U}}-\frac{1}{{l_{{\tilde{t}_{kj}}}}}\sum \limits_{{\tilde{\gamma }_{kj}}\in {\tilde{t}_{kj}}}{\gamma _{kj}^{U}}\bigg|;\\ {} \displaystyle \Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\\ {} \displaystyle \hspace{1em}=\bigg|\frac{1}{{l_{{\tilde{i}_{ij}}}}}\sum \limits_{{\tilde{\delta }_{ij}}\in {\tilde{i}_{ij}}}{\delta _{ij}^{L}}-\frac{1}{{l_{{\tilde{i}_{kj}}}}}\sum \limits_{{\tilde{\delta }_{kj}}\in {\tilde{i}_{kj}}}{\delta _{kj}^{L}}\bigg|+\bigg|\frac{1}{{l_{{\tilde{i}_{ij}}}}}\sum \limits_{{\tilde{\delta }_{ij}}\in {\tilde{i}_{ij}}}{\delta _{ij}^{U}}-\frac{1}{{l_{{\tilde{i}_{kj}}}}}\sum \limits_{{\tilde{\delta }_{kj}}\in {\tilde{i}_{kj}}}{\delta _{kj}^{U}}\bigg|;\\ {} \displaystyle \Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\\ {} \displaystyle \hspace{1em}=\bigg|\frac{1}{{l_{{\tilde{f}_{ij}}}}}\sum \limits_{{\tilde{\eta }_{ij}}\in {\tilde{f}_{ij}}}{\eta _{ij}^{L}}-\frac{1}{{l_{{\tilde{f}_{kj}}}}}\sum \limits_{{\tilde{\eta }_{kj}}\in {\tilde{f}_{kj}}}{\eta _{kj}^{L}}\bigg|+\bigg|\frac{1}{{l_{{\tilde{f}_{ij}}}}}\sum \limits_{{\tilde{\eta }_{ij}}\in {\tilde{f}_{ij}}}{\eta _{ij}^{U}}-\frac{1}{{l_{{\tilde{f}_{kj}}}}}\sum \limits_{{\tilde{\eta }_{kj}}\in {\tilde{f}_{kj}}}{\eta _{kj}^{U}}\bigg|,\end{array}\]
and $ {l_{{\tilde{t}_{ij}}}}$, $ {l_{{\tilde{i}_{ij}}}}$ and $ {l_{{\tilde{f}_{ij}}}}$ are the numbers of possible interval membership values in $ {\tilde{x}_{ij}}$ ($ {l_{{\tilde{t}_{kj}}}}$, $ {l_{{\tilde{i}_{kj}}}}$ and $ {l_{{\tilde{f}_{kj}}}}$ being defined analogously for $ {\tilde{x}_{kj}}$).
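Eq. (29) works like Eq. (10), but it averages the lower and upper interval bounds separately and divides by six. A small sketch under the same representation assumptions as before (all names are ours):

```python
def imean(intervals, end):
    """Average of the lower (end=0) or upper (end=1) bounds of the
    possible interval membership values."""
    return sum(iv[end] for iv in intervals) / len(intervals)

def inhfe_distance(x, y):
    """Hamming distance of Eq. (29) between two INHFEs, where each
    element is a triple (t, i, f) of lists of [lower, upper] intervals."""
    total = 0.0
    for cx, cy in zip(x, y):            # truth, indeterminacy, falsity
        for end in (0, 1):              # lower and upper bounds
            total += abs(imean(cx, end) - imean(cy, end))
    return total / 6.0

# Toy INHFEs:
x = ([[0.3, 0.4]], [[0.1, 0.2]], [[0.2, 0.3]])
y = ([[0.5, 0.6], [0.6, 0.7]], [[0.1, 0.1]], [[0.1, 0.2]])
d = inhfe_distance(x, y)
```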
The deviation values of all the alternatives to the other alternatives for the attribute $ {C_{j}}$ $ (j=1,2,\dots ,n)$ can be obtained from the following:
(30)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {\tilde{D}_{j}}(\tilde{w})& \displaystyle =& \displaystyle {\sum \limits_{i=1}^{m}}{\tilde{D}_{ij}}({\tilde{w}_{j}})\\ {} & \displaystyle =& \displaystyle {\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\frac{{\tilde{w}_{j}}}{6}\big(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\big).\end{array}\]
Case 2b.
The information about the attribute weights is partially known.
In this case, we construct the following optimization model to calculate the attribute weights:
(31)
\[ \text{M-3.}\hspace{2.5pt}\hspace{1em}\left\{\begin{array}{l}\max \tilde{D}(w)={\textstyle\sum \limits_{j=1}^{n}}{\textstyle\sum \limits_{i=1}^{m}}{\textstyle\sum \limits_{k=1}^{m}}\displaystyle \frac{{\tilde{w}_{j}}}{6}\left(\begin{array}{l}\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\\ {} \phantom{\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})}+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\end{array}\right),\\ {} \text{subject to}\hspace{5pt}\tilde{w}\in \tilde{H},\hspace{2.5pt}{\tilde{w}_{j}}\geqslant 0,\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{\tilde{w}_{j}}=1,\hspace{2.5pt}j=1,2,\dots ,n,\end{array}\right.\]
where $ \tilde{H}$ is a set of partially known weight information.
Solving Model-3, we can get the optimal attribute weight vector.
Case 3b.
The information about the attribute weights is completely unknown.
In this case, we consider the following model:
(32)
\[ \text{M-4.}\hspace{2.5pt}\hspace{1em}\left\{\begin{array}{l}\max \tilde{D}(\tilde{w})={\textstyle\sum \limits_{j=1}^{n}}{\textstyle\sum \limits_{i=1}^{m}}{\textstyle\sum \limits_{k=1}^{m}}\displaystyle \frac{{\tilde{w}_{j}}}{6}\left(\begin{array}{l}\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\\ {} \phantom{\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})}+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\end{array}\right),\\ {} \text{s.t.}\hspace{2.5pt}\hspace{1em}{\tilde{w}_{j}}\geqslant 0,\hspace{2.5pt}{\textstyle\sum \limits_{j=1}^{n}}{\tilde{w}_{j}^{2}}=1,\hspace{2.5pt}j=1,2,\dots ,n.\end{array}\right.\]
The Lagrangian function corresponding to the above nonlinear programming problem is given by
(33)
\[ \tilde{L}(\tilde{w},\tilde{\lambda })={\sum \limits_{j=1}^{n}}{\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\frac{{\tilde{w}_{j}}}{6}\left(\begin{array}{l}\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\\ {} \phantom{\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})}+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\end{array}\right)+\frac{\tilde{\lambda }}{12}\Bigg({\sum \limits_{j=1}^{n}}{\tilde{w}_{j}^{2}}-1\Bigg),\]
where $ \tilde{\lambda }$ is the Lagrange multiplier. Then the partial derivatives of $ \tilde{L}$ are computed as
(34)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle \frac{\partial \tilde{L}}{\partial {\tilde{w}_{j}}}& \displaystyle =& \displaystyle \frac{1}{6}{\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\left(\begin{array}{l}\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\\ {} \phantom{\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})}+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\end{array}\right)+\frac{\tilde{\lambda }}{6}{\tilde{w}_{j}}=0,\end{array}\]
(35)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle \frac{\partial \tilde{L}}{\partial \tilde{\lambda }}& \displaystyle =& \displaystyle \frac{1}{12}\Bigg({\sum \limits_{j=1}^{n}}{\tilde{w}_{j}^{2}}-1\Bigg)=0.\end{array}\]
It follows from Eq. (34) that the weight $ {\tilde{w}_{j}}$ for $ j=1,2,\dots ,n$ is
(36)
\[ {\tilde{w}_{j}}=-\Bigg({\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\big(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\big)\Bigg)/\tilde{\lambda }.\]
Putting this value of $ {\tilde{w}_{j}}$ in Eq. (35), we get
(37)
\[ {\tilde{\lambda }^{2}}={\sum \limits_{j=1}^{n}}{\Bigg({\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\big(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\big)\Bigg)^{2}}\]
or
(38)
\[ \tilde{\lambda }=-\sqrt{{\sum \limits_{j=1}^{n}}{\Bigg({\sum \limits_{i=1}^{m}}{\sum \limits_{k=1}^{m}}\big(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})\big)\Bigg)^{2}}},\]
where $ \tilde{\lambda }<0$ and $ {\textstyle\sum _{i=1}^{m}}{\textstyle\sum _{k=1}^{m}}(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}}))$ represents the sum of deviations of all the alternatives under the j-th attribute, while $ {\textstyle\sum _{j=1}^{n}}{({\textstyle\sum _{i=1}^{m}}{\textstyle\sum _{k=1}^{m}}(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})))^{2}}$ represents the sum of squared deviations over all the attributes.
Then combining Eqs. (36) and (38), we obtain the weight $ {\tilde{w}_{j}}$ $ (j=1,2,\dots ,n)$ as
(39)
\[ {\tilde{w}_{j}}=\frac{{\textstyle\textstyle\sum _{i=1}^{m}}{\textstyle\textstyle\sum _{k=1}^{m}}(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}}))}{\sqrt{{\textstyle\textstyle\sum _{j=1}^{n}}{({\textstyle\textstyle\sum _{i=1}^{m}}{\textstyle\textstyle\sum _{k=1}^{m}}(\Delta \tilde{T}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{I}({\tilde{x}_{ij}},{\tilde{x}_{kj}})+\Delta \tilde{F}({\tilde{x}_{ij}},{\tilde{x}_{kj}})))^{2}}}}.\]
Finally, we normalize the weights $ {\tilde{w}_{j}}$ $ (j=1,2,\dots ,n)$ so that they sum to unity:
(40)
\[ {\tilde{w}_{j}^{N}}=\frac{{\tilde{w}_{j}}}{{\textstyle\textstyle\sum _{j=1}^{n}}{\tilde{w}_{j}}},\hspace{1em}j=1,2,\dots ,n;\]
and consequently, we obtain the weight vector of the attribute as
\[ \tilde{W}=({\tilde{w}_{1}^{N}},{\tilde{w}_{2}^{N}},\dots ,{\tilde{w}_{n}^{N}})\]
for proceeding to Step-2.
Step 2.
Determine the positive ideal alternative and the negative ideal alternative.
From decision matrix $ \tilde{X}={({\tilde{x}_{ij}})_{m\times n}}$, we determine the interval neutrosophic hesitant fuzzy positive ideal solution (INHFPIS) $ {\tilde{A}^{+}}$ and the interval neutrosophic hesitant fuzzy negative ideal solution (INHFNIS) $ {\tilde{A}^{-}}$ of alternatives as follows:
(41)
\[\begin{array}{l}\displaystyle {\tilde{A}^{+}}=\big({\tilde{A}_{1}^{+}},{\tilde{A}_{2}^{+}},\dots ,{\tilde{A}_{n}^{+}}\big)\\ {} \displaystyle \phantom{{\tilde{A}^{+}}}=\Big\{\Big\langle \underset{i}{\max }\big\{{\tilde{\gamma }_{ij}^{\sigma (p)}}\big\},\underset{i}{\min }\big\{{\tilde{\delta }_{ij}^{\sigma (q)}}\big\},\underset{i}{\min }\big\{{\tilde{\eta }_{ij}^{\sigma (r)}}\big\}\Big\rangle |i=1,2,\dots ,m\hspace{2.5pt}\text{and}\hspace{5pt}j=1,2,\dots ,n\Big\},\end{array}\]
(42)
\[\begin{array}{l}\displaystyle {\tilde{A}^{-}}=\big({\tilde{A}_{1}^{-}},{\tilde{A}_{2}^{-}},\dots ,{\tilde{A}_{n}^{-}}\big)\\ {} \displaystyle \phantom{{\tilde{A}^{-}}}=\Big\{\Big\langle \underset{i}{\min }\big\{{\tilde{\gamma }_{ij}^{\sigma (p)}}\big\},\underset{i}{\max }\big\{{\tilde{\delta }_{ij}^{\sigma (q)}}\big\},\underset{i}{\max }\big\{{\tilde{\eta }_{ij}^{\sigma (r)}}\big\}\Big\rangle |i=1,2,\dots ,m\hspace{2.5pt}\text{and}\hspace{5pt}j=1,2,\dots ,n\Big\}.\end{array}\]
Here, we compare the attribute values $ {\tilde{x}_{ij}}$ by using score, accuracy and certainty values of INHFSs defined in Definition 9.
Step 3.
Determine the distance measure from the ideal alternatives to each alternative.
We determine the distance measure between the positive ideal alternative $ {\tilde{A}^{+}}$ and the alternative $ {A_{i}}$ $ (i=1,2,\dots ,m)$ as follows:
(43)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {\tilde{D}_{i}^{+}}& \displaystyle =& \displaystyle {\sum \limits_{j=1}^{n}}{\tilde{w}_{j}}\tilde{D}\big({\tilde{x}_{ij}},{\tilde{x}_{j}^{+}}\big)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{j=1}^{n}}\displaystyle \frac{{\tilde{w}_{j}}}{6}\left(\begin{array}{l}\big|\frac{1}{{l_{{\tilde{t}_{ij}}}}}{\textstyle\sum _{{\tilde{\gamma }_{ij}}\in {\tilde{t}_{ij}}}}{\gamma _{ij}^{L}}-\frac{1}{{l_{{\tilde{t}_{j}^{+}}}}}{\textstyle\sum _{{\tilde{\gamma }_{j}^{+}}\in {\tilde{t}_{j}^{+}}}}{\gamma _{j}^{L+}}\big|+\big|\frac{1}{{l_{{\tilde{t}_{ij}}}}}{\textstyle\sum _{{\tilde{\gamma }_{ij}}\in {\tilde{t}_{ij}}}}{\gamma _{ij}^{U}}-\frac{1}{{l_{{\tilde{t}_{j}^{+}}}}}{\textstyle\sum _{{\tilde{\gamma }_{j}^{+}}\in {\tilde{t}_{j}^{+}}}}{\gamma _{j}^{U+}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{\tilde{i}_{ij}}}}}{\textstyle\sum _{{\tilde{\delta }_{ij}}\in {\tilde{i}_{ij}}}}{\delta _{ij}^{L}}-\frac{1}{{l_{{\tilde{i}_{j}^{+}}}}}{\textstyle\sum _{{\tilde{\delta }_{j}^{+}}\in {\tilde{i}_{j}^{+}}}}{\delta _{j}^{L+}}\big|+\big|\frac{1}{{l_{{\tilde{i}_{ij}}}}}{\textstyle\sum _{{\tilde{\delta }_{ij}}\in {\tilde{i}_{ij}}}}{\delta _{ij}^{U}}-\frac{1}{{l_{{\tilde{i}_{j}^{+}}}}}{\textstyle\sum _{{\tilde{\delta }_{j}^{+}}\in {\tilde{i}_{j}^{+}}}}{\delta _{j}^{U+}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{\tilde{f}_{ij}}}}}{\textstyle\sum _{{\tilde{\eta }_{ij}}\in {\tilde{f}_{ij}}}}{\eta _{ij}^{L}}-\frac{1}{{l_{{\tilde{f}_{j}^{+}}}}}{\textstyle\sum _{{\tilde{\eta }_{j}^{+}}\in {\tilde{f}_{j}^{+}}}}{\eta _{j}^{L+}}\big|+\big|\frac{1}{{l_{{\tilde{f}_{ij}}}}}{\textstyle\sum _{{\tilde{\eta }_{ij}}\in {\tilde{f}_{ij}}}}{\eta _{ij}^{U}}-\frac{1}{{l_{{\tilde{f}_{j}^{+}}}}}{\textstyle\sum _{{\tilde{\eta }_{j}^{+}}\in {\tilde{f}_{j}^{+}}}}{\eta _{j}^{U+}}\big|\end{array}\right)\end{array}\]
for $ i=1,2,\dots ,m$. Similarly, we determine the distance measure between the negative ideal alternative $ {\tilde{A}^{-}}$ and the alternative $ {A_{i}}$ $ (i=1,2,\dots ,m)$ as follows:
(44)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {\tilde{D}_{i}^{-}}& \displaystyle =& \displaystyle {\sum \limits_{j=1}^{n}}{\tilde{w}_{j}}\tilde{D}\big({\tilde{x}_{ij}},{\tilde{x}_{j}^{-}}\big)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{j=1}^{n}}\displaystyle \frac{{\tilde{w}_{j}}}{6}\left(\begin{array}{l}\big|\frac{1}{{l_{{\tilde{t}_{ij}}}}}{\textstyle\sum _{{\tilde{\gamma }_{ij}}\in {\tilde{t}_{ij}}}}{\gamma _{ij}^{L}}-\frac{1}{{l_{{\tilde{t}_{j}^{-}}}}}{\textstyle\sum _{{\tilde{\gamma }_{j}^{-}}\in {\tilde{t}_{j}^{-}}}}{\gamma _{j}^{L-}}\big|+\big|\frac{1}{{l_{{\tilde{t}_{ij}}}}}{\textstyle\sum _{{\tilde{\gamma }_{ij}}\in {\tilde{t}_{ij}}}}{\gamma _{ij}^{U}}-\frac{1}{{l_{{\tilde{t}_{j}^{-}}}}}{\textstyle\sum _{{\tilde{\gamma }_{j}^{-}}\in {\tilde{t}_{j}^{-}}}}{\gamma _{j}^{U-}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{\tilde{i}_{ij}}}}}{\textstyle\sum _{{\tilde{\delta }_{ij}}\in {\tilde{i}_{ij}}}}{\delta _{ij}^{L}}-\frac{1}{{l_{{\tilde{i}_{j}^{-}}}}}{\textstyle\sum _{{\tilde{\delta }_{j}^{-}}\in {\tilde{i}_{j}^{-}}}}{\delta _{j}^{L-}}\big|+\big|\frac{1}{{l_{{\tilde{i}_{ij}}}}}{\textstyle\sum _{{\tilde{\delta }_{ij}}\in {\tilde{i}_{ij}}}}{\delta _{ij}^{U}}-\frac{1}{{l_{{\tilde{i}_{j}^{-}}}}}{\textstyle\sum _{{\tilde{\delta }_{j}^{-}}\in {\tilde{i}_{j}^{-}}}}{\delta _{j}^{U-}}\big|\\ {} \hspace{1em}+\big|\frac{1}{{l_{{\tilde{f}_{ij}}}}}{\textstyle\sum _{{\tilde{\eta }_{ij}}\in {\tilde{f}_{ij}}}}{\eta _{ij}^{L}}-\frac{1}{{l_{{\tilde{f}_{j}^{-}}}}}{\textstyle\sum _{{\tilde{\eta }_{j}^{-}}\in {\tilde{f}_{j}^{-}}}}{\eta _{j}^{L-}}\big|+\big|\frac{1}{{l_{{\tilde{f}_{ij}}}}}{\textstyle\sum _{{\tilde{\eta }_{ij}}\in {\tilde{f}_{ij}}}}{\eta _{ij}^{U}}-\frac{1}{{l_{{\tilde{f}_{j}^{-}}}}}{\textstyle\sum _{{\tilde{\eta }_{j}^{-}}\in {\tilde{f}_{j}^{-}}}}{\eta _{j}^{U-}}\big|\end{array}\right).\end{array}\]
Step 4.
Determine the closeness coefficient.
In this step, we calculate the relative closeness coefficient $ \tilde{R}{C_{i}}$ for each alternative $ {A_{i}}$ $ (i=1,2,\dots ,m)$ with respect to INHFPIS $ {\tilde{A}^{+}}$ as given below:
(45)
\[ \tilde{R}{C_{i}}=\frac{{\tilde{D}_{i}^{-}}}{{\tilde{D}_{i}^{+}}+{\tilde{D}_{i}^{-}}}\hspace{1em}\text{for}\hspace{5pt}i=1,2,\dots ,m,\]
where $ 0\leqslant \tilde{R}{C_{i}}\leqslant 1$ $ (i=1,2,\dots ,m)$. We observe that the alternative $ {A_{i}}$ is closer to the INHFPIS $ {\tilde{A}^{+}}$ and farther from the INHFNIS $ {\tilde{A}^{-}}$ as $ \tilde{R}{C_{i}}$ approaches unity.
Step 5.
Rank the alternatives.
Finally, we can rank the alternatives according to the descending order of relative closeness coefficient values of alternatives to choose the best alternative from a set of feasible alternatives.
Step 6.
End.
We briefly present the steps of the proposed method in Fig. 1.
Fig. 1.
The schematic diagram of the proposed method.

5 Numerical Examples

In this section, we consider two examples to illustrate the utility of the proposed method for single valued neutrosophic hesitant fuzzy sets (SVNHFSs) and interval neutrosophic hesitant fuzzy sets (INHFSs).

5.1 Example for SVNHFS

Suppose that an investment company wants to invest a sum of money in the following four alternatives:
  • • car company $ ({A_{1}})$;
  • • food company $ ({A_{2}})$;
  • • computer company $ ({A_{3}})$;
  • • arms company $ ({A_{4}})$.
The company considers the following three attributes to make the decision:
  • • risk analysis $ ({C_{1}})$;
  • • growth analysis $ ({C_{2}})$;
  • • environment impact analysis $ ({C_{3}})$.
The rating values of the alternatives $ {A_{i}}$, $ i=1,2,3,4$ with respect to the attributes $ {C_{j}}$, $ j=1,2,3$ form the SVNHFS decision matrix presented in Table 1. The steps to find the best alternative are as follows:
Table 1
Single valued neutrosophic hesitant fuzzy decision matrix.
$ {C_{1}}$ $ {C_{2}}$ $ {C_{3}}$
$ {A_{1}}$ $ \langle \{0.3,0.4,0.5\},\{0.1\},\{0.3,0.4\}\rangle $ $ \langle \{0.5,0.6\},\{0.2,0.3\},\{0.3,0.4\}\rangle $ $ \langle \{0.2,0.3\},\{0.1,0.2\},\{0.5,0.6\}\rangle $
$ {A_{2}}$ $ \langle \{0.6,0.7\},\{0.1,0.2\},\{0.2,0.3\}\rangle $ $ \langle \{0.6,0.7\},\{0.1\},\{0.3\}\rangle $ $ \langle \{0.6,0.7\},\{0.1,0.2\},\{0.1,0.2\}\rangle $
$ {A_{3}}$ $ \langle \{0.5,0.6\},\{0.4\},\{0.2,0.3\}\rangle $ $ \langle \{0.6\},\{0.3\},\{0.4\}\rangle $ $ \langle \{0.5,0.6\},\{0.1\},\{0.3\}\rangle $
$ {A_{4}}$ $ \langle \{0.7,0.8\},\{0.1\},\{0.1,0.2\}\rangle $ $ \langle \{0.6,0.7\},\{0.1\},\{0.2\}\rangle $ $ \langle \{0.3,0.5\},\{0.2\},\{0.1,0.2,0.3\}\rangle $
Step 1: Determine the weights of attributes.
There are three cases for attribute weights:
Case 1: When the attribute weights are completely known, let the weight vector be $ {w^{N}}=(0.35,0.25,0.40)$.
Case 2: When the attribute weights are partially known, the weight information is as follows:
\[ H=\left\{\begin{array}{l}0.30\leqslant {w_{1}}\leqslant 0.40,\\ {} 0.20\leqslant {w_{2}}\leqslant 0.30,\\ {} 0.35\leqslant {w_{3}}\leqslant 0.45,\\ {} \text{and}\hspace{2.5pt}\hspace{2.5pt}{w_{1}}+{w_{2}}+{w_{3}}=1.\end{array}\right.\]
Using Model-1, we get the single objective programming problem as
\[ \left\{\begin{array}{l}\max (D)=1.796{w_{1}}+1.164{w_{2}}+1.962{w_{3}},\\ {} \text{subject to}\hspace{5pt}w\in H\hspace{2.5pt}\text{and}\hspace{5pt}{\textstyle\textstyle\sum _{j=1}^{3}}{w_{j}}=1,\hspace{2.5pt}{w_{j}}>0\hspace{2.5pt}\text{for}\hspace{5pt}j=1,2,3.\end{array}\right.\]
Solving this problem with optimization software LINGO 11, we get the optimal weight vector as $ {w^{N}}=(0.35,0.20,0.45)$.
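For readers without access to LINGO, this particular model — a linear objective with a single sum-to-one equality and box bounds on the weights — can be solved exactly by a short greedy routine. This is a sketch of ours, not the paper's code; a general-purpose LP solver would be needed for other constraint sets:

```python
def max_deviation_weights(coeffs, bounds):
    """Maximize sum(c_j * w_j) subject to sum(w_j) = 1 and
    l_j <= w_j <= u_j. With one equality constraint and box bounds,
    the optimum is greedy: start every weight at its lower bound and
    pour the remaining mass into attributes in decreasing order of c_j."""
    w = [lo for lo, hi in bounds]
    rest = 1.0 - sum(w)
    for j in sorted(range(len(coeffs)), key=lambda j: coeffs[j], reverse=True):
        add = min(rest, bounds[j][1] - w[j])
        w[j] += add
        rest -= add
    return w

# Case 2 of the SVNHFS example (objective of Model-1 above)
w = max_deviation_weights([1.796, 1.164, 1.962],
                          [(0.30, 0.40), (0.20, 0.30), (0.35, 0.45)])
# w is (0.35, 0.20, 0.45), matching the LINGO solution
```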
Case 3: When the attribute weights are completely unknown, using Model-2 and Eqs. (20) and (21), we obtain the following weight vector:
\[ {w^{N}}=(0.351,0.265,0.384).\]
Step 2: Determine the positive ideal alternative and the negative ideal alternative.
In this step, we calculate the positive and the negative ideal solutions from Eqs. (22) and (23), respectively.
(46)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {A^{+}}& \displaystyle =& \displaystyle \big({A_{1}^{+}},{A_{2}^{+}},{A_{3}^{+}}\big)\\ {} & \displaystyle =& \displaystyle \left(\begin{array}{c}\langle \{0.7,0.8\},\{0.1\},\{0.1,0.2\}\rangle ,\langle \{0.6,0.7\},\{0.1\},\{0.2\}\rangle ,\\ {} \langle \{0.6,0.7\},\{0.1,0.2\},\{0.1,0.2\}\rangle \end{array}\right),\end{array}\]
(47)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {A^{-}}& \displaystyle =& \displaystyle \big({A_{1}^{-}},{A_{2}^{-}},{A_{3}^{-}}\big),\\ {} & \displaystyle =& \displaystyle \left(\begin{array}{c}\langle \{0.5,0.6\},\{0.4\},\{0.2,0.3\}\rangle ,\langle \{0.6\},\{0.3\},\{0.4\}\rangle ,\\ {} \langle \{0.2,0.3\},\{0.1,0.2\},\{0.5,0.6\}\rangle \end{array}\right).\end{array}\]
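Eqs. (22) and (23) themselves appear earlier in the paper and are not reproduced here. As an illustrative sketch, we assume the SVNHFE score function $ s=\frac{1}{3}\big(\bar{t}+(1-\bar{i})+(1-\bar{f})\big)$ (the form used by Ye, 2015b, with $ \bar{t},\bar{i},\bar{f}$ the means of the hesitant components); under this assumption, picking the per-attribute best and worst entries of Table 1 reproduces $ {A^{+}}$ and $ {A^{-}}$:

```python
def score(t, i, f):
    """Score of an SVNHF element <T, I, F> (form used by Ye, 2015b):
    s = (mean(T) + 1 - mean(I) + 1 - mean(F)) / 3."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(t) + 1 - mean(i) + 1 - mean(f)) / 3

# Table 1: rows A1..A4, columns C1..C3; each cell is (T, I, F)
M = [
    [([0.3, 0.4, 0.5], [0.1], [0.3, 0.4]), ([0.5, 0.6], [0.2, 0.3], [0.3, 0.4]), ([0.2, 0.3], [0.1, 0.2], [0.5, 0.6])],
    [([0.6, 0.7], [0.1, 0.2], [0.2, 0.3]), ([0.6, 0.7], [0.1], [0.3]),           ([0.6, 0.7], [0.1, 0.2], [0.1, 0.2])],
    [([0.5, 0.6], [0.4], [0.2, 0.3]),      ([0.6], [0.3], [0.4]),                ([0.5, 0.6], [0.1], [0.3])],
    [([0.7, 0.8], [0.1], [0.1, 0.2]),      ([0.6, 0.7], [0.1], [0.2]),           ([0.3, 0.5], [0.2], [0.1, 0.2, 0.3])],
]
# per attribute, the alternative with the largest (smallest) score
pis = [max(range(4), key=lambda i: score(*M[i][j])) for j in range(3)]
nis = [min(range(4), key=lambda i: score(*M[i][j])) for j in range(3)]
# pis == [3, 3, 1] (A4, A4, A2) and nis == [2, 2, 0] (A3, A3, A1),
# matching A^+ and A^- in Eqs. (46) and (47)
```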
Step 3: Determine the distance measure from the ideal alternatives to each alternative.
In this step, we determine the distance measure from the positive and negative ideal solutions from Eqs. (24) and (25) as given in Tables 2 and 3.
Table 2
Distance measure from the positive ideal solution.
$ {D^{+}}({A_{i}})$ Case 1 Case 2 Case 3
$ {D_{1}^{+}}$ 0.210 0.210 0.201
$ {D_{2}^{+}}$ 0.037 0.035 0.037
$ {D_{3}^{+}}$ 0.140 0.145 0.148
$ {D_{4}^{+}}$ 0.046 0.052 0.044
Table 3
Distance measure from the negative ideal solution.
$ {D^{-}}({A_{i}})$ Case 1 Case 2 Case 3
$ {D_{1}^{-}}$ 0.180 0.164 0.198
$ {D_{2}^{-}}$ 0.176 0.183 0.173
$ {D_{3}^{-}}$ 0.120 0.115 0.102
$ {D_{4}^{-}}$ 0.181 0.182 0.180
Step 4: Determine the relative closeness coefficient.
We now calculate the relative closeness coefficients from Eq. (26) and the results are shown in Table 4.
Table 4
Relative closeness coefficient.
$ RC({A_{i}})$ Case 1 Case 2 Case 3
$ RC({A_{1}})$ 0.461 0.438 0.496
$ RC({A_{2}})$ 0.826 0.839 0.823
$ RC({A_{3}})$ 0.462 0.451 0.408
$ RC({A_{4}})$ 0.796 0.778 0.800
Step 5: Rank the alternatives.
From Table 4, ranks of the alternatives are determined as follows:
\[\begin{array}{l}\displaystyle \textbf{Case 1:}\hspace{2.5pt}\hspace{2.5pt}{A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}},\\ {} \displaystyle \textbf{Case 2:}\hspace{2.5pt}\hspace{2.5pt}{A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}},\\ {} \displaystyle \textbf{Case 3:}\hspace{2.5pt}\hspace{2.5pt}{A_{2}}\succ {A_{4}}\succ {A_{1}}\succ {A_{3}}.\end{array}\]
The above shows that $ {A_{2}}$ is the best alternative for all cases.
Step 6: End.
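As a quick cross-check of Steps 3–5, the Case 3 closeness coefficients can be recomputed from the rounded distances in Tables 2 and 3. The recomputed values differ from Table 4 in the third decimal place (the paper works with unrounded distances), but the ranking is unchanged:

```python
# Case 3 distances to the ideal solutions, from Tables 2 and 3 (A1..A4)
d_plus = [0.201, 0.037, 0.148, 0.044]
d_minus = [0.198, 0.173, 0.102, 0.180]
rc = [dm / (dp + dm) for dp, dm in zip(d_plus, d_minus)]
order = sorted(range(4), key=lambda i: rc[i], reverse=True)
# order == [1, 3, 0, 2], i.e. A2 > A4 > A1 > A3, as in Case 3 above
```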

5.2 Example for INHFS

In order to demonstrate the proposed method for INHFS, we consider the same numerical example as for SVNHFS, but now the rating values of the alternatives are INHFSs. The INHFS-based decision matrix is presented in Table 5.
Step 1: Determine the weights of attributes.
Table 5
Interval neutrosophic hesitant fuzzy decision matrix.
$ {C_{1}}$ $ {C_{2}}$ $ {C_{3}}$
$ {A_{1}}$ $ \left\{\begin{array}{c}\{[0.3,0.4],[0.4,0.5]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.3,0.4]\}\end{array}\right\}$ $ \left\{\begin{array}{c}\{[0.4,0.5],[0.5,0.6]\}\\ {} \{[0.2,0.3]\}\\ {} \{[0.3,0.3],[0.3,0.4]\}\end{array}\right\}$ $ \left\{\begin{array}{c}\{[0.3,0.5]\}\\ {} \{[0.2,0.3]\}\\ {} \{[0.1,0.2],[0.3,0.3]\}\end{array}\right\}$
$ {A_{2}}$ $ \left\{\begin{array}{c}\{[0.6,0.7]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.1,0.2],[0.2,0.3]\}\end{array}\right\}$ $ \left\{\begin{array}{c}\{[0.6,0.7]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.2,0.3]\}\end{array}\right\}$ $ \left\{\begin{array}{c}\{[0.6,0.7]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.1,0.2]\}\end{array}\right\}$
$ {A_{3}}$ $ \left\{\begin{array}{c}\{[0.3,0.4],[0.5,0.6]\}\\ {} \{[0.2,0.4]\}\\ {} \{[0.2,0.3]\}\end{array}\right\}$ $ \left\{\begin{array}{c}\{[0.6,0.7]\}\\ {} \{[0.0,0.1]\}\\ {} \{[0.2,0.3]\}\end{array}\right\}$ $ \left\{\begin{array}{c}\{[0.5,0.6]\}\\ {} \{[0.1,0.2],[0.2,0.3]\}\\ {} \{[0.2,0.3]\}\end{array}\right\}$
$ {A_{4}}$ $ \left\{\begin{array}{c}\{[0.7,0.8]\}\\ {} \{[0.0,0.1]\}\\ {} \{[0.1,0.2]\}\end{array}\right\}$ $ \left\{\begin{array}{c}\{[0.5,0.6]\}\\ {} \{[0.2,0.3]\}\\ {} \{[0.3,0.4]\}\end{array}\right\}$ $ \left\{\begin{array}{c}\{[0.2,0.3]\}\\ {} \{[0.1,0.2]\}\\ {} \{[0.4,0.5],[0.5,0.6]\}\end{array}\right\}$
Here, we consider completely known, partially known and completely unknown attribute weights in three cases.
Case 1: When the attribute weights are known in advance, let the weight vector be
\[ {\bar{w}^{N}}=(0.30,0.25,0.45).\]
Case 2: When the attribute weights are partially known, the weight information is as follows:
\[ \tilde{H}=\left\{\begin{array}{l}0.30\leqslant {\tilde{w}_{1}}\leqslant 0.40,\\ {} 0.20\leqslant {\tilde{w}_{2}}\leqslant 0.30,\\ {} 0.35\leqslant {\tilde{w}_{3}}\leqslant 0.45,\\ {} \text{and}\hspace{5pt}{\tilde{w}_{1}}+{\tilde{w}_{2}}+{\tilde{w}_{3}}=1.\end{array}\right.\]
Now, with the help of Model-3, we consider the following optimization problem:
\[ \left\{\begin{array}{l}\max (D)=1.7626{\tilde{w}_{1}}+1.526{\tilde{w}_{2}}+1.848{\tilde{w}_{3}},\\ {} \text{subject to}\hspace{5pt}\tilde{w}\in \tilde{H}\hspace{2.5pt}\text{and}\hspace{5pt}{\textstyle\textstyle\sum _{j=1}^{3}}{\tilde{w}_{j}}=1,\hspace{2.5pt}{\tilde{w}_{j}}>0\hspace{2.5pt}\text{for}\hspace{5pt}j=1,2,3.\end{array}\right.\]
Solving this problem with optimization software LINGO 11, we get the optimal weight vector as
\[ {\bar{w}^{N}}=(0.35,0.20,0.45).\]
Case 3: When the attribute weights are completely unknown, using Model-4 and Eqs. (39) and (40), we obtain the following weight vector:
\[ {\bar{w}^{N}}=(0.343,0.297,0.360).\]
Step 2: Determine the positive ideal alternative and the negative ideal alternative.
In this step, we calculate the positive and the negative ideal solutions, where the positive ideal solution is the best solution and the negative ideal solution is the worst solution. From Eqs. (22) and (23), we get
(48)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {\tilde{A}^{+}}& \displaystyle =& \displaystyle \big({\tilde{A}_{1}^{+}},{\tilde{A}_{2}^{+}},{\tilde{A}_{3}^{+}}\big)\\ {} & \displaystyle =& \displaystyle \left(\begin{array}{c}\langle \{[0.7,0.8]\},\{[0.0,0.1]\},\{[0.1,0.2]\}\rangle ,\\ {} \langle \{[0.6,0.7]\},\{[0.0,0.1]\},\{[0.2,0.3]\}\rangle ,\\ {} \langle \{[0.6,0.7]\},\{[0.1,0.2]\},\{[0.1,0.2]\}\rangle \end{array}\right),\end{array}\]
(49)
\[\begin{array}{r@{\hskip4.0pt}c@{\hskip4.0pt}l}\displaystyle {\tilde{A}^{-}}& \displaystyle =& \displaystyle \big({\tilde{A}_{1}^{-}},{\tilde{A}_{2}^{-}},{\tilde{A}_{3}^{-}}\big)\\ {} & \displaystyle =& \displaystyle \left(\begin{array}{c}\langle \{[0.3,0.4],[0.4,0.5]\},\{[0.1,0.2]\},\{[0.3,0.4]\}\rangle ,\\ {} \langle \{[0.4,0.5],[0.5,0.6]\},\{[0.2,0.3]\},\{[0.3,0.3],[0.3,0.4]\}\rangle ,\\ {} \langle \{[0.2,0.3]\},\{[0.1,0.2]\},\{[0.4,0.5],[0.5,0.6]\}\rangle \end{array}\right).\end{array}\]
Step 3: Determine the distance measure from the ideal alternatives to each alternative.
In this step, using Eqs. (43) and (44), we determine the distance measure from the positive ideal solution and the negative ideal solution as given in Tables 6 and 7, respectively.
Table 6
Distance measure from the positive ideal solution.
$ {\tilde{D}^{+}}({A_{i}})$ Case 1 Case 2 Case 3
$ {\tilde{D}_{1}^{+}}$ 0.164 0.168 0.167
$ {\tilde{D}_{2}^{+}}$ 0.032 0.035 0.037
$ {\tilde{D}_{3}^{+}}$ 0.102 0.113 0.104
$ {\tilde{D}_{4}^{+}}$ 0.146 0.139 0.129
Step 4: Determine the relative closeness coefficient.
Table 7
Distance measure from the negative ideal solution.
$ {\tilde{D}^{-}}({A_{i}})$ Case 1 Case 2 Case 3
$ {\tilde{D}_{1}^{-}}$ 0.078 0.079 0.063
$ {\tilde{D}_{2}^{-}}$ 0.179 0.180 0.168
$ {\tilde{D}_{3}^{-}}$ 0.155 0.153 0.148
$ {\tilde{D}_{4}^{-}}$ 0.071 0.080 0.082
We now calculate the relative closeness coefficient from Eq. (45). The results are shown in Table 8.
Step 5: Rank the alternatives.
Table 8
Relative closeness coefficient.
$ \tilde{RC}({A_{i}})$ Case 1 Case 2 Case 3
$ \tilde{RC}({A_{1}})$ 0.322 0.312 0.273
$ \tilde{RC}({A_{2}})$ 0.848 0.837 0.819
$ \tilde{RC}({A_{3}})$ 0.603 0.576 0.587
$ \tilde{RC}({A_{4}})$ 0.327 0.365 0.389
From Table 8, we obtain the ranks of the alternatives as follows:
\[\begin{array}{l}\displaystyle \textbf{Case 1:}\hspace{2.5pt}\hspace{2.5pt}{A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}},\\ {} \displaystyle \textbf{Case 2:}\hspace{2.5pt}\hspace{2.5pt}{A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}},\\ {} \displaystyle \textbf{Case 3:}\hspace{2.5pt}\hspace{2.5pt}{A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}.\end{array}\]
The above shows that $ {A_{2}}$ is the best alternative for all cases.
Step 6: End.
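Here the rounded distances in Tables 6 and 7 reproduce Table 8 (Case 1) exactly, so the same cross-check also recovers the tabulated coefficients:

```python
# Case 1 distances to the ideal solutions, from Tables 6 and 7 (A1..A4)
d_plus = [0.164, 0.032, 0.102, 0.146]
d_minus = [0.078, 0.179, 0.155, 0.071]
rc = [round(dm / (dp + dm), 3) for dp, dm in zip(d_plus, d_minus)]
order = sorted(range(4), key=lambda i: rc[i], reverse=True)
# rc == [0.322, 0.848, 0.603, 0.327] as in Table 8 (Case 1),
# and order == [1, 2, 3, 0], i.e. A2 > A3 > A4 > A1
```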

5.3 Comparative Analysis and Discussion

We divide this section into two parts. Firstly, we compare our proposed method with the existing methods for multi-attribute decision making under SVNHFS and then for INHFS.

5.3.1 SVNHFS

Ye (2015b) developed a method to find the best alternative under a single valued neutrosophic hesitant fuzzy environment, and Sahin and Liu (2017) proposed a correlation coefficient of single valued neutrosophic hesitant fuzzy sets for MADM. Rankings of the alternatives obtained by these existing methods and by our proposed method are shown in Table 9. When the attribute weights are known in advance, the three methods produce the same ranking. However, when the attribute weights are partially known or completely unknown, the two existing methods are not applicable.
Table 9
A comparison of the results under SVNHFS.
Methods Type of weight information Ranking result
Ye’s (2015b) method Completely known $ {A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}}$
Sahin and Liu’s (2017) method Completely known $ {A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}}$
Proposed method Completely known $ {A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}}$
Ye’s (2015b) method Partially known Not applicable
Sahin and Liu’s (2017) method Partially known Not applicable
Proposed method Partially known $ {A_{2}}\succ {A_{4}}\succ {A_{3}}\succ {A_{1}}$
Ye’s (2015b) method Completely unknown Not applicable
Sahin and Liu’s (2017) method Completely unknown Not applicable
Proposed method Completely unknown $ {A_{2}}\succ {A_{4}}\succ {A_{1}}\succ {A_{3}}$

5.3.2 INHFS

Liu and Shi (2015) proposed an MADM method for finding the best alternative under an interval neutrosophic hesitant fuzzy environment. Table 10 shows a comparison between Liu and Shi’s (2015) method and our proposed method.
Table 10
A comparison of the results under INHFS.
Methods Type of weight information Ranking result
Liu and Shi’s method (2015) Completely known $ {A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$
Proposed method Completely known $ {A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$
Liu and Shi’s method (2015) Partially known Not applicable
Proposed method Partially known $ {A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$
Liu and Shi’s method (2015) Completely unknown Not applicable
Proposed method Completely unknown $ {A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$
The advantages of the proposed method for SVNHFS and INHFS are as follows:
  • The existing methods are based on aggregation operators, correlation coefficients and hybrid weighted operators, whereas our proposed method is based on the maximizing deviation method.
  • The proposed method offers a more flexible choice of weight information because it remains applicable when the weight information is partially known or completely unknown.

6 Conclusion

Neutrosophic hesitant fuzzy set encompasses single valued neutrosophic set, interval neutrosophic set, hesitant fuzzy set, intuitionistic fuzzy set and fuzzy set. The neutrosophic set has three components: truth membership, indeterminacy membership and falsity membership functions. Therefore, the neutrosophic hesitant fuzzy set is flexible enough to deal with imprecise, indeterminate and incomplete information in MADM problems. In this study, we have extended the TOPSIS method for solving MADM problems under SVNHFS and INHFS environments. We have considered three types of attribute weight information: completely known, partially known and completely unknown. With the help of the maximizing deviation method, we have developed optimization models for calculating attribute weights when the weight information is partially known or completely unknown. Finally, numerical examples have been given to illustrate the validity and efficiency of the proposed method. The proposed strategy can be extended to multi-attribute group decision making problems. The developed model can be applied to many real decision making problems such as pattern recognition, supply chain management, data mining, etc. For future research, the proposed method can be extended to MADM problems with plithogenic sets (Smarandache, 2017).

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions on an earlier version of this paper.

References

 
Atanassov, K.T. (1986). Intuitionistic fuzzy sets. Fuzzy Sets and Systems, 20(1), 87–96.
 
Beg, I., Rashid, T. (2013). TOPSIS for hesitant fuzzy linguistic term sets. International Journal of Intelligent Systems, 28(12), 1162–1171.
 
Biswas, P., Pramanik, S., Giri, B.C. (2016a). TOPSIS method for multi-attribute group decision-making under single-valued neutrosophic environment. Neural Computing and Applications, 27(3), 727–737.
 
Biswas, P., Pramanik, S., Giri, B.C. (2016b). GRA method of multiple attribute decision making with single valued neutrosophic hesitant fuzzy set information. New Trends in Neutrosophic Theory and Applications, Brussells, Pons Editions, 55–63.
 
Biswas, P., Pramanik, S., Giri, B.C. (2018). TOPSIS strategy for multi-attribute decision making with trapezoidal neutrosophic numbers. Neutrosophic Sets and Systems, 19, 29–39.
 
Biswas, P., Pramanik, S., Giri, B.C. (2019a). Non-linear programming approach for single-valued neutrosophic TOPSIS method. New Mathematics and Natural Computation. https://doi.org/10.1142/S1793005719500169.
 
Biswas, P., Pramanik, S., Giri, B.C. (2019b). NH-MADM strategy in neutrosophic hesitant fuzzy set environment based on extended GRA. Informatica, 30(2), 213–242.
 
Boran, F.E., Genç, S., Kurt, M., Akay, D. (2009). A multi-criteria intuitionistic fuzzy group decision making for supplier selection with TOPSIS method. Expert Systems with Applications, 36(8), 11363–11368.
 
Brans, J.P., Vincke, P., Mareschal, B. (1986). How to select and how to rank projects: the PROMETHEE method. European Journal of Operational Research, 24(2), 228–238.
 
Brauers, W.K., Zavadskas, E.K. (2006). The MOORA method and its application to privatization in a transition economy. Control and Cybernetics, 35, 445–469.
 
Brauers, W.K.M., Zavadskas, E.K. (2010). Project management by MULTIMOORA as an instrument for transition economies. Technological and Economic Development of Economy, 16(1), 5–24.
 
Chen, C.T. (2000). Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets and Systems, 114(1), 1–9.
 
Chen, N., Xu, Z., Xia, M. (2013). Correlation coefficients of hesitant fuzzy sets and their applications to clustering analysis. Applied Mathematical Modelling, 37(4), 2197–2211.
 
Chi, P., Liu, P. (2013). An extended TOPSIS method for the multiple attribute decision making problems based on interval neutrosophic set. Neutrosophic Sets and Systems, 1(1), 63–70.
 
Fu, Z., Liao, H. (2019). Unbalanced double hierarchy linguistic term set: the TOPSIS method for multi-expert qualitative decision making involving green mine selection. Information Fusion, 51, 271–286.
 
Gabus, A., Fontela, E. (1972). World problems. In: An Invitation to Further Thought Within the Framework of DEMATEL, Battelle Geneva Research Center, Geneva, Switzerland, pp. 1–8.
 
Giri, B.C., Molla, M.U., Biswas, P. (2018). TOPSIS method for MADM based on interval trapezoidal neutrosophic number. Neutrosophic Sets and Systems, 22, 151.
 
Gomes, L.F.A.M., Lima, M.M.P.P. (1992a). TODIM: basics and application to multi–criteria ranking of projects with environmental impacts. Foundations of Computing and Decision Sciences, 16(4), 113–127.
 
Gomes, L.F.A.M., Lima, M.M.P.P. (1992b). From modeling individual preferences to multi–criteria ranking of discrete alternatives: a look at prospect theory and the additive difference model. Foundations of Computing and Decision Sciences, 17(3), 171–184.
 
Hafezalkotob, A., Hafezalkotob, A., Liao, H., Herrera, F. (2019). An overview of MULTIMOORA for multi-criteria decision-making: theory, developments, applications, and challenges. Information Fusion, 51, 145–177.
 
Haibin, W., Smarandache, F., Zhang, Y., Sunderraman, R. (2010). Single valued neutrosophic sets. Multispace and Multi-Structure, 4, 410–413.
 
Hwang, C.L., Yoon, K. (1981). Methods for multiple attribute decision making. In: Multiple Attribute Decision Making. Springer, Berlin, Heidelberg, 48–191.
 
Ji, P., Zhang, H.Y., Wang, J.Q. (2018). A projection-based TODIM method under multi-valued neutrosophic environments and its application in personnel selection. Neural Computing and Applications, 29(1), 221–234.
 
Joshi, D., Kumar, S. (2016). Interval-valued intuitionistic hesitant fuzzy Choquet integral based TOPSIS method for multi-criteria group decision making. European Journal of Operational Research, 248(1), 183–191.
 
Kahraman, C., Otay, İ. (Eds.) (2019). Fuzzy Multi-criteria Decision-Making Using Neutrosophic Sets. Springer.
 
Kanapeckiene, L., Kaklauskas, A., Zavadskas, E.K., Raslanas, S. (2011). Method and system for multi-attribute market value assessment in analysis of construction and retrofit projects. Expert Systems with Applications, 38(11), 14196–14207.
 
Keshavarz Ghorabaee, M., Zavadskas, E.K., Olfat, L., Turskis, Z. (2015). Multi-criteria inventory classification using a new method of evaluation based on distance from average solution (EDAS). Informatica, 26(3), 435–451.
 
Liu, P., Shi, L. (2015). The generalized hybrid weighted average operator based on interval neutrosophic hesitant set and its application to multiple attribute decision making. Neural Computing and Applications, 26(2), 457–471.
 
Liao, H., Xu, Z. (2015). Approaches to manage hesitant fuzzy linguistic information based on the cosine distance and similarity measures for HFLTSs and their application in qualitative decision making. Expert Systems with Applications, 42(12), 5328–5336.
 
Liao, H., Wu, X. (2019). DNMA: a double normalization-based multiple aggregation method for multi-expert multi-criteria decision making. Omega. https://doi.org/10.1016/j.omega.2019.04.001.
 
Mi, X., Tang, M., Liao, H., Shen, W., Lev, B. (2019). The state-of-the-art survey on integrations and applications of the best worst method in decision making: why, what, what for and what’s next? Omega. https://doi.org/10.1016/j.omega.2019.01.009.
 
Opricovic, S., Tzeng, G.H. (2004). Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS. European Journal of Operational Research, 156(2), 445–455.
 
Peng, J.J., Wang, J.Q., Zhang, H.Y., Chen, X.H. (2014). An outranking approach for multi-criteria decision-making problems with simplified neutrosophic sets. Applied Soft Computing, 25, 336–346.
 
Rodriguez, R.M., Martinez, L., Herrera, F. (2012). Hesitant fuzzy linguistic term sets for decision making. IEEE Transactions on Fuzzy Systems, 20(1), 109–119.
 
Roy, B. (1990). The outranking approach and the foundations of ELECTRE methods. In: Readings in Multiple Criteria Decision Aid. Springer, Berlin, Heidelberg, 155–183.
 
Sahin, R., Liu, P. (2017). Correlation coefficient of single-valued neutrosophic hesitant fuzzy sets and its applications in decision making. Neural Computing and Applications, 28(6), 1387–1395.
 
Saaty, T.L. (1980). The Analytic Hierarchy Process. McGraw-Hill, New York.
 
Smarandache, F. (1998). In: Neutrosophy. Neutrosophic Probability, Set, and Logic, ProQuest Information & Learning, Ann Arbor, Michigan, USA (Vol. 105), pp. 118–123.
 
Smarandache, F. (2017). Plithogeny, Plithogenic Set, Logic, Probability, and Statistics. Infinite Study.
 
Stanujkic, D., Zavadskas, E.K., Smarandache, F., Brauers, W.K., Karabasevic, D. (2017). A neutrosophic extension of the MULTIMOORA method. Informatica, 28(1), 181–192.
 
Torra, V. (2010). Hesitant fuzzy sets. International Journal of Intelligent Systems, 25(6), 529–539.
 
Wang, H., Smarandache, F., Sunderraman, R., Zhang, Y.Q. (2005). Interval Neutrosophic Sets and Logic: Theory and Applications in Computing. Hexis, Phoenix, AZ.
 
Wei, G. (2012). Hesitant fuzzy prioritized operators and their application to multiple attribute decision making. Knowledge-Based Systems, 31, 176–182.
 
Wu, X., Liao, H. (2019). A consensus-based probabilistic linguistic gained and lost dominance score method. European Journal of Operational Research, 272(3), 1017–1027.
 
Xia, M., Xu, Z. (2011). Hesitant fuzzy information aggregation in decision making. International Journal of Approximate Reasoning, 52(3), 395–407.
 
Xu, Z., Xia, M. (2011a). Distance and similarity measures for hesitant fuzzy sets. Information Sciences, 181(11), 2128–2138.
 
Xu, Z., Xia, M. (2011b). On distance and correlation measures of hesitant fuzzy information. International Journal of Intelligent Systems, 26(5), 410–425.
 
Xu, Z., Zhang, X. (2013). Hesitant fuzzy multi-attribute decision making based on TOPSIS with incomplete weight information. Knowledge-Based Systems, 52, 53–64.
 
Ye, F. (2010). An extended TOPSIS method with interval-valued intuitionistic fuzzy numbers for virtual enterprise partner selection. Expert Systems with Applications, 37(10), 7050–7055.
 
Ye, J. (2014). Single valued neutrosophic cross-entropy for multi-criteria decision making problems. Applied Mathematical Modelling, 38(3), 1170–1175.
 
Ye, J. (2015a). An extended TOPSIS method for multiple attribute group decision making based on single valued neutrosophic linguistic numbers. Journal of Intelligent and Fuzzy Systems, 28(1), 247–255.
 
Ye, J. (2015b). Multiple-attribute decision-making method under a single-valued neutrosophic hesitant fuzzy environment. Journal of Intelligent Systems, 24(1), 23–36.
 
Ye, J. (2016). Correlation coefficients of interval neutrosophic hesitant fuzzy sets and its application in a multiple attribute decision making method. Informatica, 27(1), 179–202.
 
Yingming, W. (1997). Using the method of maximizing deviation to make decision for multi-indices. Journal of Systems Engineering and Electronics, 8(3), 21–26.
 
Zadeh, L.A. (1965). Fuzzy sets. Information and Control, 8(3), 338–355.
 
Zhang, N., Wei, G. (2013). Extension of VIKOR method for decision making problem based on hesitant fuzzy set. Applied Mathematical Modelling, 37(7), 4938–4947.
 
Zavadskas, E.K., Kaklauskas, A., Sarka, V. (1994). The new method of multicriteria complex proportional assessment of projects. Technological and Economic Development of Economy, 1(3), 131–139.
 
Zavadskas, E.K., Antucheviciene, J., Hajiagha, S.H.R., Hashemi, S.S. (2014). Extension of weighted aggregated sum product assessment with interval-valued intuitionistic fuzzy numbers (WASPAS-IVIF). Applied Soft Computing, 24, 1013–1021.
 
Zavadskas, E.K., Mardani, A., Turskis, Z., Jusoh, A., Nor, K.M. (2016). Development of TOPSIS method to solve complicated decision-making problems—an overview on developments from 2000 to 2015. International Journal of Information Technology & Decision Making, 15(03), 645–682.

Biographies

Bibhas C. Giri
bcgiri.jumath@gmail.com

B.C. Giri is a professor at the Department of Mathematics, Jadavpur University, Kolkata, India. He has published many high-level papers in international peer-reviewed journals. His current research interests include supply chain management, inventory theory, multiple criteria decision making, soft computing, and optimization.

Mahatab Uddin Molla
mahatab.jumath@gmail.com

M.U. Molla is a junior research fellow at the Department of Mathematics, Jadavpur University, Kolkata. He obtained his BSc and MSc in Mathematics from University of Calcutta. His research interests include multiple criteria decision making, fuzzy set, intuitionistic fuzzy set, and neutrosophic set.

Pranab Biswas
prabiswas.jdvu@gmail.com

P. Biswas obtained his bachelor degree in mathematics and master degree in applied mathematics from University of Kalyani, India. He obtained his PhD degree from Jadavpur University, India. His research interests include multiple criteria decision making, aggregation operators, soft computing, optimization, fuzzy set, intuitionistic fuzzy set, and neutrosophic set.


Copyright
© 2020 Vilnius University
Open access article under the CC BY license.

Keywords
hesitant fuzzy set, neutrosophic set, single valued neutrosophic hesitant fuzzy set, interval neutrosophic hesitant fuzzy set, multi-attribute decision making, TOPSIS

Funding
This research was supported by the Council of Scientific and Industrial Research (CSIR), Govt. of India, File No. 09/096(0945)/2018-EMR-I.


