
Some Maclaurin Symmetric Mean Operators Based on Neutrosophic Linguistic Numbers for Multi-Attribute Group Decision Making
Volume 29, Issue 4 (2018), pp. 711–732
Peide Liu, Xinli You

https://doi.org/10.15388/Informatica.2018.189
Pub. online: 1 January 2018      Type: Research Article      Open Access

Received: 1 September 2017
Accepted: 1 April 2018
Published: 1 January 2018

Abstract

Neutrosophic linguistic numbers (NLNs) can depict uncertain and imperfect information by linguistic variables (LVs). As a classical aggregation operator, the Maclaurin symmetric mean (MSM) operator has the prominent characteristic of reflecting the interactions among multiple attributes. Considering multiple attribute group decision making (MAGDM) problems in which the attributes take the form of NLNs, are interrelated, and have fully unknown weights, we propose a novel MAGDM method with NLNs. Firstly, the MSM is extended to NLNs; that is, neutrosophic linguistic information is aggregated by two new operators, the NLN Maclaurin symmetric mean (NLNMSM) operator and the weighted NLN Maclaurin symmetric mean (WNLNMSM) operator. Then, we discuss some characteristics and detail some special cases of the developed operators. Further, we develop an information entropy measure under NLNs to assign the objective weights of the attributes. Based on the entropy weights and the proposed operators, an approach to MAGDM problems with NLNs is introduced. Finally, a manufacturing industry example is given to demonstrate the effectiveness and superiority of the proposed method.

1 Introduction

Multi-attribute decision making (MADM) and multi-attribute group decision making (MAGDM) have drawn great attention and been widely applied in various industries (Mulliner et al., 2015; Ou, 2016; Stanujkic et al., 2017; Zavadskas et al., 2017). Choosing an effective way to describe the attributes is therefore an important step of the decision process. In real-world decision processes, reality is uncertain and human cognition is fuzzy and ambiguous, so decision makers (DMs) often prefer linguistic terms (LTs), such as “good”, “better” or “bad”, to express qualitative evaluations. Zadeh (1975) first presented the notion of LVs. Since their appearance, research on MADM and MAGDM problems based on LVs has received great attention (Guan et al., 2017; Meng, 2017; Morente-Molinera et al., 2017). Herrera and Herrera-Viedma (2000) built a linguistic model to cope with MAGDM problems. Cabrerizo et al. (2014) proposed utilizing information granularity to reach consensus and solve MAGDM problems. Herrera and Herrera-Viedma (1996) presented the linguistic ordered weighted averaging (LOWA) operators. Xu (2006a) developed a linguistic hybrid arithmetic average operator to aggregate LVs. Further, some new ideas such as uncertain linguistic variables (ULVs) (Xu, 2004), 2-tuple linguistic information (Herrera and Martínez, 2000; Li et al., 2017) and its extended forms (Ju et al., 2014) were raised by some scholars to deal with MAGDM problems.
However, such cases arise in real life: during a voting process, 30 percent of electors may vote in favour, 20 percent vote against, 10 percent abstain and 40 percent be neutral, absent or in other uncertain situations. In order to better express such uncertain, inconsistent and imperfect information, Smarandache (1998) presented the neutrosophic set (NS), a general conceptual framework that extends the fuzzy set and the intuitionistic fuzzy set. On the other hand, Smarandache (1998, 2013, 2014) introduced the neutrosophic number (NN), denoted by $A=u+vI$, where u represents a determinate part and $vI$ represents an indeterminate part. When there is no indeterminacy related to A ($A=u$), this is the best situation; when $A=vI$, it is the worst situation. As is known, the decision-making methods based on the NS (Liu and Shi, 2015; Peng et al., 2014; Ye, 2014), including its subclasses of simplified NSs (SNSs), single-valued NSs (SVNSs) and interval NSs (INSs), cannot solve problems under the NN environment, because the NN and the NS are two different subclasses of neutrosophy (Smarandache, 1998) and take different forms to express information. NSs have now been applied in various fields such as decision making (Liu and Shi, 2015; Peng et al., 2014), clustering analysis (Ye, 2014) and medical diagnosis (Ye, 2016a), while little research has been done on handling indeterminate problems by NNs. To further develop the application of NNs, Kong et al. (2015) defined the cosine similarity measure between NNs, Ye (2016b) developed a possibility degree ranking method under the NN environment, and Ye (2017) proposed a method that utilizes a bidirectional projection model to handle MAGDM problems with NNs.
Because DMs have vague recognition of complex objective things, linguistic evaluation may express fuzzy information more easily than crisp numbers or fuzzy numbers. However, LVs cannot depict uncertain, inconsistent and imperfect information, so Smarandache (2015) presented the notion of neutrosophic linguistic numbers (NLNs) by combining LVs and NNs. Then, Ye (2016c) put forward the operational laws of NLNs and presented the corresponding operators, the NLN weighted arithmetic average (NLNWAA) and NLN weighted geometric average (NLNWGA) operators.
Usually, the evaluation information given by DMs is fuzzy, uncertain and imperfect, but so far there are few studies that use NLNs to handle uncertain and fuzzy problems. For this reason, we develop a new method for MAGDM problems under the neutrosophic linguistic environment. Information aggregation operators perform well in information fusion and have received increasing attention. Different aggregation operators have distinctive characteristics; for example, the power average (PA) operator can assign weights based on the support degree between the integrated arguments, while some aggregation operators can reflect the interactions between integrated arguments, such as the Bonferroni mean (BM) (Liu et al., 2017a), the Heronian mean (HM) (Liu and Chen, 2017) and the Maclaurin symmetric mean (MSM) (Qin, 2017). Xu and Yager (2011) compared the MSM with the BM: the unique advantage of the MSM is that it can reflect the interactions among multiple attributes (including, of course, the case of two attributes), while the BM or HM can only consider the interaction between two attributes. Therefore, the MSM demonstrates more flexibility and robustness in the process of information integration than the BM and HM. Qin (2017) extended the MSM to Pythagorean fuzzy numbers, and Yu et al. (2017) extended the MSM to hesitant fuzzy linguistic numbers.
Obviously, DMs have limited judgement due to the fuzziness and complexity of practical MAGDM problems. To select an optimal alternative in a decision problem, we need to consider the following requirements simultaneously: (1) the attribute weights given by DMs may be unreasonable because of their own bias or limited knowledge (Liu et al., 2017b); in order to obtain objective weights and relieve such unreasonable influences, we can use an objective entropy model to derive the attribute weights; (2) in real decision problems, interrelationships between attributes are common, and the BM or MSM can capture them. Based on the above analysis, the MSM operator has the advantage over the BM because the former considers the interrelationships among multiple attributes whereas the latter can only reflect the interrelationship between two attributes. Therefore, we extend the MSM to NLNs.
Thus, in this paper we concentrate on applying the MSM to the neutrosophic linguistic environment and develop a novel MAGDM method with NLNs. The purposes of this paper are: (1) to establish a weight model utilizing an entropy measure under NLNs; (2) to propose the weighted neutrosophic linguistic MSM (WNLNMSM) operators and to investigate their characteristics; and (3) to develop a new MAGDM method based on the WNLNMSM operators under the neutrosophic linguistic environment. The main contributions of the proposed method are that it can depict uncertain and imperfect linguistic information in qualitative decision environments and can consider the interrelationships among the attributes.
The rest of this paper is organized as follows. In Section 2, we take a brief look at some basic concepts, including LVs, NNs, NLNs and the MSM operator. In Section 3, we develop the NLNMSM and WNLNMSM operators, then we investigate some characteristics and detail some special examples. In Section 4, we develop a method of determining objective entropy weights. In Section 5, we detail a novel method based on WNLNMSM operator for the MAGDM problems that use NLNs to describe evaluation values. In Section 6, a practical example of manufacturing is demonstrated to show the practicality, effectiveness and advantages of the proposed approach. In Section 7, we summarize this paper.

2 Preliminaries

2.1 The Linguistic Variables

Zadeh (1975) first proposed the LV to describe linguistic information; the definition is as follows:
(1)
\[ {S_{[0,\tau ]}}=\{{s_{0}},{s_{1}},{s_{2}},\dots ,{s_{\tau -1}}\},\]
where τ is odd, e.g. 3, 5, 7 or 9. For example, for $\tau =5$, a set ${S_{[0,4]}}$ is given as follows: ${S_{[0,4]}}=\{{s_{0}}=\mathrm{worse},{s_{1}}=\mathrm{poor},{s_{2}}=\mathrm{good},{s_{3}}=\mathrm{better},{s_{4}}=\mathrm{excellent}\}$.
In general, the LVs can meet:
  • (1) ${S_{m}}\succ {S_{n}}$, if $m>n$;
  • (2) The negation operator is: $\mathit{neg}({S_{m}})={S_{n}}$ where $m+n=\tau -1$.
There exists a loss of information if the form of LVs is discrete, so Xu (2006a) proposed a continuous LV:
(2)
\[ \overline{S}=\big\{{S_{\alpha }}\hspace{0.1667em}|\hspace{0.1667em}\alpha \in {R^{+}}\big\}.\]
Then Xu (2006a, 2006b) put forward the operational rules of LVs as follows:
(3)
\[ (1)\hspace{2.5pt}\lambda {S_{m}}={S_{\lambda \times m}},\]
(4)
\[ (2)\hspace{2.5pt}{S_{m}}+{S_{n}}={S_{m+n}},\]
(5)
\[ (3)\hspace{2.5pt}{S_{m}}\times {S_{n}}={S_{m\times n}},\]
(6)
\[ (4)\hspace{2.5pt}{S_{m}}/{S_{n}}={S_{m/n}}\hspace{1em}(n\ne 0),\]
(7)
\[ (5)\hspace{2.5pt}{({S_{m}})^{\lambda }}={S_{{m^{\lambda }}}},\hspace{1em}\lambda \geqslant 0.\]

2.2 Neutrosophic Numbers and Neutrosophic Linguistic Numbers

Smarandache (1998, 2013, 2014) proposed the NN, which takes the form $A=u+vI$, where $u,v\in R$ and I depicts indeterminacy, satisfying ${I^{n}}=I$ for $n>0$ and $0\times I=0$, while $bI/nI$ is undefined.
An NN can be shown graphically as in Fig. 1.
Fig. 1
A neutrosophic number.
For example, suppose there is an NN $A=4+3I$. Its determinate part is 4 and its indeterminate part is $3I$. If $I\in [0.4,0.6]$, this is equivalent to $A\in [5.2,5.8]$, so it is certain that $A\geqslant 5$: when the indeterminacy is $I\in [0.4,0.6]$, the possible values of A lie within the interval $[5.2,5.8]$.
Definition 1 (See Smarandache, 1998, 2013, 2014).
Let ${A_{1}}={u_{1}}+{v_{1}}I$ and ${A_{2}}={u_{2}}+{v_{2}}I$ be two NNs with real ${u_{1}},{v_{1}},{u_{2}},{v_{2}}\in R$. The operational rules for ${A_{1}}$ and ${A_{2}}$ are as follows:
(8)
\[ (1)\hspace{2.5pt}{A_{1}}+{A_{2}}={u_{1}}+{u_{2}}+({v_{1}}+{v_{2}})I,\]
(9)
\[ (2)\hspace{2.5pt}{A_{1}}-{A_{2}}={u_{1}}-{u_{2}}+({v_{1}}-{v_{2}})I,\]
(10)
\[ (3)\hspace{2.5pt}{A_{1}}\times {A_{2}}={u_{1}}{u_{2}}+({u_{1}}{v_{2}}+{v_{1}}{u_{2}}+{v_{1}}{v_{2}})I,\]
(11)
\[ (4)\hspace{2.5pt}{A_{1}^{2}}={({u_{1}}+{v_{1}}I)^{2}}={u_{1}^{2}}+({({u_{1}}+{v_{1}})^{2}}-{u_{1}^{2}})I,\]
(12)
\[ (5)\hspace{2.5pt}\frac{{A_{1}}}{{A_{2}}}=\frac{{u_{1}}+{v_{1}}I}{{u_{2}}+{v_{2}}I}=\frac{{u_{1}}}{{u_{2}}}+\frac{{u_{2}}{v_{1}}-{u_{1}}{v_{2}}}{{u_{2}}({u_{2}}+{v_{2}})}I\hspace{1em}\text{for}\hspace{2.5pt}{u_{2}}\ne 0\hspace{2.5pt}\text{and}\hspace{2.5pt}{u_{2}}\ne -{v_{2}}.\]
Let $A=u+vI$ be an NN. We call A a positive NN when $u,v\geqslant 0$; throughout this paper, all NNs are assumed to be positive unless stated otherwise.
Definition 2.
Let $A=u+vI$ be a NN and $I\in [{I^{L}},{I^{H}}]$. Then, we define the expected value of the NN A as
(13)
\[ EX(A)=(u+v\times {I^{L}})+(u+v\times {I^{H}}).\]
Based on Definition 2, we can compare two NNs by their expected values. For example, let $A=2+4I$ and $B=3+2I$ be two NNs for $I\in [0.2,0.4]$; the expected values of A and B are $EX(A)=(2+4\times 0.2)+(2+4\times 0.4)=6.4$ and $EX(B)=(3+2\times 0.2)+(3+2\times 0.4)=7.2$, so we get $A\prec B$ because $EX(A)<EX(B)$. That is, $\min (A,B)=A$, $\max (A,B)=B$.
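To make these rules concrete, here is a minimal Python sketch of Definitions 1 and 2 that reproduces the comparison of $A=2+4I$ and $B=3+2I$ above; the names NN and ex are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class NN:
    """A neutrosophic number A = u + v*I."""
    u: float  # determinate part
    v: float  # coefficient of the indeterminate part v*I

    def __add__(self, o):      # rule (8)
        return NN(self.u + o.u, self.v + o.v)

    def __sub__(self, o):      # rule (9)
        return NN(self.u - o.u, self.v - o.v)

    def __mul__(self, o):      # rule (10)
        return NN(self.u * o.u, self.u * o.v + self.v * o.u + self.v * o.v)

    def __truediv__(self, o):  # rule (12); requires u2 != 0 and u2 != -v2
        return NN(self.u / o.u,
                  (o.u * self.v - self.u * o.v) / (o.u * (o.u + o.v)))

def ex(a: NN, il: float, ih: float) -> float:
    """Expected value of Definition 2 for I in [il, ih]."""
    return (a.u + a.v * il) + (a.u + a.v * ih)

A, B = NN(2, 4), NN(3, 2)
print(ex(A, 0.2, 0.4), ex(B, 0.2, 0.4))  # 6.4 7.2, hence A < B
```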
To better depict uncertain and incomplete information, Smarandache (2015) presented the concept of the NLN, denoted ${s_{u+vI}}$, where $u+vI$ is an NN. Then Ye (2016c) gave the operational laws and the expected value of NLNs.
Definition 3 (See Ye, 2016c).
Assume that ${\tilde{s}_{1}}={s_{{u_{1}}+{v_{1}}I}}$ and ${\tilde{s}_{2}}={s_{{u_{2}}+{v_{2}}I}}$ are two NLNs, then the operational rules are defined as follows:
(14)
\[ (1)\hspace{2.5pt}{\tilde{s}_{1}}+{\tilde{s}_{2}}={s_{{u_{1}}+{u_{2}}+({v_{1}}+{v_{2}})I}},\]
(15)
\[ (2)\hspace{2.5pt}{\tilde{s}_{1}}-{\tilde{s}_{2}}={s_{{u_{1}}-{u_{2}}+({v_{1}}-{v_{2}})I}},\]
(16)
\[ (3)\hspace{2.5pt}{\tilde{s}_{1}}\times {\tilde{s}_{2}}={s_{{u_{1}}{u_{2}}+({u_{1}}{v_{2}}+{u_{2}}{v_{1}}+{v_{1}}{v_{2}})I}},\]
(17)
\[ (4)\hspace{2.5pt}\frac{{\tilde{s}_{1}}}{{\tilde{s}_{2}}}={s_{\frac{{u_{1}}}{{u_{2}}}+\frac{{u_{2}}{v_{1}}-{u_{1}}{v_{2}}}{{u_{2}}({u_{2}}+{v_{2}})}I}},\]
(18)
\[ (5)\hspace{2.5pt}\lambda {\tilde{s}_{1}}={s_{\lambda {u_{1}}+\lambda {v_{1}}I}},\hspace{1em}\lambda \geqslant 0,\]
(19)
\[ (6)\hspace{2.5pt}{\tilde{s}_{1}^{\lambda }}={s_{{u_{1}^{\lambda }}+[{({u_{1}}+{v_{1}})^{\lambda }}-{u_{1}^{\lambda }}]I}},\hspace{1em}\lambda \geqslant 0.\]
Obviously, the above results are still NLNs.
Definition 4 (See Ye, 2016c).
Let $S=\{{s_{0}},{s_{1}},\dots ,{s_{t-1}}\}$ be a finite linguistic term set (LTS) and $\tilde{s}={s_{u+vI}}$ be an NLN on S with $I\in [{I^{L}},{I^{H}}]$. Then, the expected value of the NLN $\tilde{s}$ is defined as
(20)
\[ EX(\tilde{s})=\frac{(u+v\times {I^{L}})+(u+v\times {I^{H}})}{2(t-1)}.\]
Based on Definition 4, the bigger the value of $EX(\tilde{s})$ is, the greater the NLN $\tilde{s}$ is. So, a comparison method for NLNs is defined below.
Definition 5 (See Ye, 2016c).
Let ${\tilde{s}_{1}}$ and ${\tilde{s}_{2}}$ be two NLNs. Then,
  • 1. If $EX({\tilde{s}_{1}})\succ EX({\tilde{s}_{2}})$, then ${\tilde{s}_{1}}\succ {\tilde{s}_{2}}$;
  • 2. If $EX({\tilde{s}_{1}})=EX({\tilde{s}_{2}})$, then ${\tilde{s}_{1}}={\tilde{s}_{2}}$.
Example 1.
Let ${\tilde{s}_{1}}={s_{2+3I}}$ and ${\tilde{s}_{2}}={s_{4+I}}$ be two NLNs, where $I\in [0.1,0.3]$ and $t=6$. Then we have $EX({\tilde{s}_{1}})=0.52\prec EX({\tilde{s}_{2}})=0.84$ according to Eq. (20), so the ranking order is ${\tilde{s}_{1}}\prec {\tilde{s}_{2}}$.
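As a quick numerical check of Example 1, a minimal sketch of the expected value (20); nln_ex is an illustrative name, not from the paper.

```python
def nln_ex(u, v, il, ih, t):
    """Expected value (20) of the NLN s_{u+v*I} for I in [il, ih] and a
    linguistic term set of size t."""
    return ((u + v * il) + (u + v * ih)) / (2 * (t - 1))

print(nln_ex(2, 3, 0.1, 0.3, 6))  # ~0.52, i.e. EX(s1)
print(nln_ex(4, 1, 0.1, 0.3, 6))  # ~0.84, i.e. EX(s2), so s1 < s2
```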
To better depict and compare the differences and correlation between NLNs and NNs, we briefly summarize the advantages and disadvantages of NLNs and NNs in Table 1.
Table 1
The advantages and disadvantages of NLNs and NNs.
NNs
Advantages: NNs can express uncertain, inconsistent and imperfect information by a determinate part u and an indeterminate part $vI$; the indeterminate degree I can be assigned by DMs according to their preference or real requirements.
Disadvantages: It is difficult to express a complex determinate part; for example, the results of a voting process may include a determinate part (five votes in favour and two votes against) and an indeterminate part (one absent vote).
NLNs
Advantages: NLNs combine LVs and NNs, so they can depict incompleteness, indeterminacy and inconsistency more easily than crisp numbers or fuzzy numbers; the indeterminate degree I can be assigned by DMs.
Disadvantages: The indeterminate part cannot distinguish the falsity-membership degree.

2.3 MSM Operator

The MSM, originally presented by Maclaurin (1729), is a well-known and useful mean-type operator that can reflect the interactions among multiple attributes.
Definition 6 (See Maclaurin, 1729).
Let ${z_{i}}$ $(i=1,2,\dots ,n)$ be a collection of non-negative real numbers. The MSM is defined as
(21)
\[ {\mathit{MSM}^{(k)}}({z_{1}},{z_{2}},\dots ,{z_{n}})={\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}{z_{{i_{j}}}}}{{C_{n}^{k}}}\bigg)^{1/k}},\]
where k is a parameter, $k=1,2,\dots ,n$, and ${i_{1}},{i_{2}},\dots ,{i_{k}}$ are k integer values taken from the set $\{1,2,\dots ,n\}$ with $1\leqslant {i_{1}}<{i_{2}}<\cdots <{i_{k}}\leqslant n$; ${C_{n}^{k}}$ denotes the binomial coefficient, ${C_{n}^{k}}=\frac{n!}{k!(n-k)!}$.
Remarkably, the MSM has the following characteristics:
  • (1) ${\mathit{MSM}^{(k)}}(0,0,\cdots \hspace{0.1667em},0)=0$, ${\mathit{MSM}^{(k)}}(z,z,\dots ,z)=z$;
  • (2) ${\mathit{MSM}^{(k)}}({z_{1}},{z_{2}},\dots ,{z_{n}})\leqslant {\mathit{MSM}^{(k)}}({y_{1}},{y_{2}},\dots ,{y_{n}})$, if ${z_{i}}\leqslant {y_{i}}$ for all i;
  • (3) $\min \{{z_{i}}\}\leqslant {\mathit{MSM}^{(k)}}({z_{1}},{z_{2}},\dots ,{z_{n}})\leqslant \max \{{z_{i}}\}$.
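To make Definition 6 concrete, a minimal Python sketch of Eq. (21); msm is an illustrative name, not from the paper. For $k=1$ it gives the arithmetic mean, for $k=n$ the geometric mean, and the boundedness characteristic (3) can be checked directly.

```python
from itertools import combinations
from math import comb, prod

def msm(z, k):
    """MSM of Eq. (21) for a list z of non-negative reals, 1 <= k <= len(z)."""
    n = len(z)
    total = sum(prod(z[i] for i in idx) for idx in combinations(range(n), k))
    return (total / comb(n, k)) ** (1 / k)

z = [1.0, 2.0, 3.0, 4.0]
print(msm(z, 1))       # 2.5, the arithmetic mean
print(msm(z, len(z)))  # ~2.213, the geometric mean (24 ** (1/4))
print(all(min(z) <= msm(z, k) <= max(z) for k in (1, 2, 3, 4)))  # True
```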

3 Neutrosophic Linguistic MSM Aggregation Operators

In this section, we develop the NLNMSM and WNLNMSM operators, and then investigate their characteristics and some special cases.

3.1 NLNMSM Operator

Definition 7.
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ be NLNs. The NLNMSM operator of the NLNs ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ is defined as follows:
(22)
\[ {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})={\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}{\tilde{s}_{{i_{j}}}}}{{C_{n}^{k}}}\bigg)^{\frac{1}{k}}}.\]
Theorem 1.
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ be NLNs, where ${\tilde{s}_{i}}={s_{{u_{i}}+{v_{i}}I}}$ $(i=1,2,\dots ,n)$, then the aggregated result of NLNs ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ can be denoted as
(23)
\[\begin{array}{l}\displaystyle {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\\ {} \displaystyle \hspace{1em}={s_{{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}({u_{{i_{j}}}}+{v_{{i_{j}}}})}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}\Big)I}}.\end{array}\]
Proof.
On the basis of the Eqs. (14), (16), (18), (19), we have
\[ {\prod \limits_{j=1}^{k}}{\tilde{s}_{{i_{j}}}}={s_{{\textstyle\textstyle\prod _{j=1}^{k}}{u_{{i_{j}}}}+\Big({\textstyle\textstyle\prod _{j=1}^{k}}({u_{{i_{j}}}}+{v_{{i_{j}}}})-{\textstyle\textstyle\prod _{j=1}^{k}}{u_{{i_{j}}}}\Big)I}}\]
and
\[\begin{array}{l}\displaystyle \sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\prod \limits_{j=1}^{k}}{\tilde{s}_{{i_{j}}}}\\ {} \displaystyle \hspace{1em}={s_{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}+\Big(\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}({u_{{i_{j}}}}+{v_{{i_{j}}}})-\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}\Big)I}}.\end{array}\]
Then we obtain
\[\begin{array}{l}\displaystyle \frac{1}{{C_{n}^{k}}}\bigg(\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\prod \limits_{j=1}^{k}}{\tilde{s}_{{i_{j}}}}\bigg)\\ {} \displaystyle \hspace{1em}={s_{\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}+\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}({u_{{i_{j}}}}+{v_{{i_{j}}}})-\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)I}},\end{array}\]
\[\begin{array}{l}\displaystyle {\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}{\tilde{s}_{{i_{j}}}}}{{C_{n}^{k}}}\bigg)^{\frac{1}{k}}}\\ {} \displaystyle \hspace{1em}={s_{{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}({u_{{i_{j}}}}+{v_{{i_{j}}}})}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}\Big)I}}.\end{array}\]
Therefore,
\[\begin{array}{l}\displaystyle {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\\ {} \displaystyle \hspace{1em}={s_{{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}({u_{{i_{j}}}}+{v_{{i_{j}}}})}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}})I}}.\end{array}\]
This completes the proof.  □
Next, we investigate the desirable properties of the NLNMSM operator.
Property 1 (Idempotency).
If ${\tilde{s}_{i}}=\tilde{s}={s_{u+vI}}$ $(i=1,2,\dots ,n)$ all are equal, then
(24)
\[ {\mathit{NLNMSM}^{(k)}}(\tilde{s},\tilde{s},\dots ,\tilde{s})={s_{u+vI}}.\]
Proof.
Since $\tilde{s}={s_{u+vI}}$, based on Theorem 1, we have
\[\begin{array}{l}\displaystyle {\mathit{NLNMSM}^{(k)}}(\tilde{s},\tilde{s},\dots ,\tilde{s})\\ {} \displaystyle \hspace{1em}={s_{{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}u}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}(u+v)}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}u}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}\Big)I}}\\ {} \displaystyle \hspace{1em}={s_{{({u^{k}})^{\frac{1}{k}}}+({(u+v)^{k\times \frac{1}{k}}}-{u^{k\times \frac{1}{k}}})I}}\\ {} \displaystyle \hspace{1em}={s_{u+vI}}.\end{array}\]
 □
Property 2 (Commutativity).
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ be NLNs, and ${\tilde{s}^{\prime }_{1}},{\tilde{s}^{\prime }_{2}},\dots $ , and ${\tilde{s}^{\prime }_{n}}$ is any permutation of ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$, then
(25)
\[ {\mathit{NLNMSM}^{(k)}}({\tilde{s}^{\prime }_{1}},{\tilde{s}^{\prime }_{2}},\dots ,{\tilde{s}^{\prime }_{n}})={\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}}).\]
Proof.
Based on Definition 7, the conclusion is obvious:
\[\begin{array}{l}\displaystyle {\mathit{NLNMSM}^{(k)}}({\tilde{s}^{\prime }_{1}},{\tilde{s}^{\prime }_{2}},\dots ,{\tilde{s}^{\prime }_{n}})\\ {} \displaystyle \hspace{1em}={\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}{\tilde{s}^{\prime }_{{i_{j}}}}}{{C_{n}^{k}}}\bigg)^{\frac{1}{k}}}={\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}{\tilde{s}_{{i_{j}}}}}{{C_{n}^{k}}}\bigg)^{\frac{1}{k}}}\\ {} \displaystyle \hspace{1em}={\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}}).\end{array}\]
 □
Property 3 (Monotonicity).
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ be NLNs, where ${\tilde{s}_{i}}={s_{{u_{i}}+{v_{i}}I}}$, and $i=1,2,\dots $ , n, and let ${\tilde{g}_{1}},{\tilde{g}_{2}},\dots $ , and ${\tilde{g}_{n}}$ be NLNs, where ${\tilde{g}_{i}}={s_{{t_{i}}+{f_{i}}I}}$ and $i=1,2,\dots ,n$, which meet the condition ${u_{i}}\leqslant {t_{i}}$, ${v_{i}}\leqslant {f_{i}}$ for all i, then
(26)
\[ {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\leqslant {\mathit{NLNMSM}^{(k)}}({\tilde{g}_{1}},{\tilde{g}_{2}},\dots ,{\tilde{g}_{n}}).\]
Proof.
Since ${u_{i}}\leqslant {t_{i}}$, then
\[ {\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\bigg)^{\frac{1}{k}}}\leqslant {\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}{t_{{i_{j}}}}}{{C_{n}^{k}}}\bigg)^{\frac{1}{k}}}.\]
Since ${u_{i}}\leqslant {t_{i}}$ and ${v_{i}}\leqslant {f_{i}}$, then ${u_{i}}+{v_{i}}\leqslant {t_{i}}+{f_{i}}$ and
\[ {\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}({u_{{i_{j}}}}+{v_{{i_{j}}}})}{{C_{n}^{k}}}\bigg)^{\frac{1}{k}}}\leqslant {\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}({t_{{i_{j}}}}+{f_{{i_{j}}}})}{{C_{n}^{k}}}\bigg)^{\frac{1}{k}}}.\]
According to Definitions 3 and 4, we then get $EX({\tilde{s}_{i}})\leqslant EX({\tilde{g}_{i}})$, so finally ${\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\leqslant {\mathit{NLNMSM}^{(k)}}({\tilde{g}_{1}},{\tilde{g}_{2}},\dots ,{\tilde{g}_{n}})$.  □
Property 4 (Boundedness).
Suppose ${\tilde{s}^{-}}=\min ({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})$, ${\tilde{s}^{+}}=\max ({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})$ then
(27)
\[ {\tilde{s}^{-}}\leqslant {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\leqslant {\tilde{s}^{+}}.\]
Proof.
Based on Properties 1 and 3, we have
\[ {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\geqslant {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}^{-}},{\tilde{s}_{2}^{-}},\dots ,{\tilde{s}_{n}^{-}})={\tilde{s}^{-}},\]
\[ {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\leqslant {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}^{+}},{\tilde{s}_{2}^{+}},\dots ,{\tilde{s}_{n}^{+}})={\tilde{s}^{+}}.\]
Thus the property is proved.  □
In addition, we give some special cases of the NLNMSM operator for different values of the parameter k.
(1) When $k=1$, the NLNMSM operator in (23) reduces to the NLNA (NLN averaging) operator:
(28)
\[\begin{aligned}{}{\mathit{NLNMSM}^{(1)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})=& {\bigg(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{1}}{\tilde{s}_{{i_{j}}}}}{{C_{n}^{1}}}\bigg)^{\frac{1}{1}}}\\ {} =& {s_{\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}\leqslant n}{u_{{i_{j}}}}}{n}+\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}\leqslant n}({u_{{i_{j}}}}+{v_{{i_{j}}}})}{n}-\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}\leqslant n}{u_{{i_{j}}}}}{n}\Big)I}}\\ {} =& {s_{\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}\leqslant n}{u_{{i_{j}}}}}{n}+\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}\leqslant n}{v_{{i_{j}}}}}{n}\Big)I}}\hspace{1em}(\text{let}\hspace{2.5pt}{i_{1}}=i)\\ {} =& \frac{1}{n}{\sum \limits_{i=1}^{n}}{\tilde{s}_{i}}\\ {} =& \mathit{NLNA}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}}).\end{aligned}\]
(2) When $k=2$, the NLNMSM operator in (23) reduces to the NLNBM operator with $p=1$, $q=1$:
(29)
\[\begin{array}{l}\displaystyle {\mathit{NLNMSM}^{(2)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\\ {} \displaystyle \hspace{1em}={\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<{i_{2}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{2}}{\tilde{s}_{{i_{j}}}}}{{C_{n}^{2}}}\bigg)^{\frac{1}{2}}}\\ {} \displaystyle \hspace{1em}={s_{{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<{i_{2}}\leqslant n}{\textstyle\prod \limits_{j=1}^{2}}{u_{{i_{j}}}}}{{C_{n}^{2}}}\Big)^{\frac{1}{2}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<{i_{2}}\leqslant n}{\textstyle\prod \limits_{j=1}^{2}}({u_{{i_{j}}}}+{v_{{i_{j}}}})}{{C_{n}^{2}}}\Big)^{\frac{1}{2}}}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<{i_{2}}\leqslant n}{\textstyle\prod \limits_{j=1}^{2}}{u_{{i_{j}}}}}{{C_{n}^{2}}}\Big)^{\frac{1}{2}}}\Big)I}}\\ {} \displaystyle \hspace{1em}={\bigg(\frac{1}{n(n-1)}\sum \limits_{\genfrac{}{}{0pt}{}{i,j=1}{i\ne j}}^{n}{\tilde{s}_{i}}{\tilde{s}_{j}}\bigg)^{\frac{1}{2}}}={\mathit{NLNBM}^{(1,1)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}}).\end{array}\]
(3) When $k=n$, the NLNMSM operator in (23) reduces to the NLNG (NLN geometric) operator:
(30)
\[\begin{array}{l}\displaystyle {\mathit{NLNMSM}^{(n)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\\ {} \displaystyle \hspace{1em}={\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{n}}{\tilde{s}_{{i_{j}}}}}{{C_{n}^{n}}}\bigg)^{\frac{1}{n}}}\\ {} \displaystyle \hspace{1em}={s_{{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{n}}{u_{{i_{j}}}}}{{C_{n}^{n}}}\Big)^{\frac{1}{n}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{n}}({u_{{i_{j}}}}+{v_{{i_{j}}}})}{{C_{n}^{n}}}\Big)^{\frac{1}{n}}}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{n}}{u_{{i_{j}}}}}{{C_{n}^{n}}}\Big)^{\frac{1}{n}}}\Big)I}}\\ {} \displaystyle \hspace{1em}={s_{{\Big({\textstyle\prod \limits_{j=1}^{n}}{u_{{i_{j}}}}\Big)^{\frac{1}{n}}}+\Big({\Big({\textstyle\prod \limits_{j=1}^{n}}({u_{{i_{j}}}}+{v_{{i_{j}}}})\Big)^{\frac{1}{n}}}-{\Big({\textstyle\prod \limits_{j=1}^{n}}{u_{{i_{j}}}}\Big)^{\frac{1}{n}}}\Big)I}}\hspace{1em}(\text{let}\hspace{2.5pt}{i_{j}}=j)\\ {} \displaystyle \hspace{1em}={\prod \limits_{j=1}^{n}}{\tilde{s}_{j}^{\frac{1}{n}}}=\mathit{NLNG}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}}).\end{array}\]
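Theorem 1 reduces NLN aggregation to two real MSM computations, one on the u-parts and one on the (u+v)-parts, from which the indeterminate coefficient is recovered by subtraction. A minimal sketch reusing msm() above; nlnmsm is an illustrative name, and NLNs are represented as (u, v) pairs.

```python
def nlnmsm(nlns, k):
    """NLNMSM of Theorem 1; nlns is a list of (u, v) pairs for s_{u+v*I}."""
    mu = msm([u for u, _ in nlns], k)
    muv = msm([u + v for u, v in nlns], k)
    return mu, muv - mu  # the aggregated NLN s_{mu + (muv-mu)*I}

print(nlnmsm([(2, 1), (2, 1), (2, 1)], 2))  # (2.0, 1.0): idempotency
print(nlnmsm([(5, 0), (4, 1), (3, 1)], 1))  # k=1: the NLNA average, (4.0, ~0.667)
```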

3.2 WNLNMSM Operator

Definition 8.
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ be NLNs and $\omega ={({\omega _{1}},{\omega _{2}},\dots ,{\omega _{n}})^{T}}$ be the weight vector of ${\tilde{s}_{i}}$, satisfying ${\omega _{i}}\in [0,1]$ $(i=1,2,\dots ,n)$ and ${\textstyle\sum _{i=1}^{n}}{\omega _{i}}=1$. Then the WNLNMSM operator is defined as follows:
(31)
\[ {\mathit{WNLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})={\bigg(\frac{{\textstyle\sum _{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}}{\textstyle\textstyle\prod _{j=1}^{k}}{w_{{i_{j}}}}{\tilde{s}_{{i_{j}}}}}{{C_{n}^{k}}}\bigg)^{\frac{1}{k}}},\]
where $({i_{1}},{i_{2}},\dots ,{i_{k}})$ ranges over all k-tuples of indices from $(1,2,\dots ,n)$ with $1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n$, and ${C_{n}^{k}}$ denotes the binomial coefficient.
Theorem 2.
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ be NLNs, where ${\tilde{s}_{i}}={s_{{u_{i}}+{v_{i}}I}}$, and let $\omega ={({\omega _{1}},{\omega _{2}},\dots ,{\omega _{n}})^{T}}$ be the weight vector of ${\tilde{s}_{i}}$ satisfying ${\omega _{i}}\in [0,1]$ $(i=1,2,\dots ,n)$ and ${\textstyle\sum _{i=1}^{n}}{\omega _{i}}=1$. The aggregated result of the NLNs ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ obtained by the WNLNMSM operator in (31) is also an NLN, as follows:
(32)
\[\begin{array}{l}\displaystyle {\mathit{WNLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\\ {} \displaystyle \hspace{2.5pt}=s\hspace{-0.1667em}{\hspace{-0.1667em}_{{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{w_{{i_{j}}}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}({w_{{i_{j}}}}{u_{{i_{j}}}}+{w_{{i_{j}}}}{v_{{i_{j}}}})}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{w_{{i_{j}}}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}\Big)I}}.\end{array}\]
On the basis of Eqs. (14)–(19), the WNLNMSM operator has the following characteristics:
Property 5 (Idempotency).
If ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ are all equal to $\tilde{s}$, then
(33)
\[ {\mathit{WNLNMSM}^{(k)}}(\tilde{s},\tilde{s},\dots ,\tilde{s})=\tilde{s}.\]
Property 6 (Commutativity).
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ be NLNs, and ${\tilde{s}^{\prime }_{1}},{\tilde{s}^{\prime }_{2}},\dots $ , and ${\tilde{s}^{\prime }_{n}}$ is any permutation of ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots \hspace{0.1667em}$ , and ${\tilde{s}_{n}}$, then
(34)
\[ {\mathit{WNLNMSM}^{(k)}}({\tilde{s}^{\prime }_{1}},{\tilde{s}^{\prime }_{2}},\dots ,{\tilde{s}^{\prime }_{n}})={\mathit{WNLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}}).\]
Property 7 (Monotonicity).
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ be NLNs, where ${\tilde{s}_{i}}={s_{{u_{i}}+{v_{i}}I}}$, and $i=1,2,\dots ,n$, and let ${\tilde{g}_{1}},{\tilde{g}_{2}},\dots \hspace{0.1667em}$ , and ${\tilde{g}_{n}}$ be NLNs, where ${\tilde{g}_{i}}={s_{{t_{i}}+{f_{i}}I}}$ and $i=1,2,\dots ,n$, which meet the condition ${u_{i}}\leqslant {t_{i}}$, ${v_{i}}\leqslant {f_{i}}$ for all i, then
(35)
\[ {\mathit{WNLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\leqslant {\mathit{WNLNMSM}^{(k)}}({\tilde{g}_{1}},{\tilde{g}_{2}},\dots ,{\tilde{g}_{n}}).\]
Property 8 (Boundedness).
Suppose ${\tilde{s}^{-}}=\min ({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})$, ${\tilde{s}^{+}}=\max ({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})$ then
(36)
\[ {\tilde{s}^{-}}\leqslant {\mathit{WNLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\leqslant {\tilde{s}^{+}}.\]
The proofs of Properties 5–8 are similar to those of Properties 1–4 for the NLNMSM operator and are omitted.
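By Theorem 2, the WNLNMSM amounts to pre-scaling each argument by its weight ($u_{i},v_{i}\mapsto {\omega _{i}}{u_{i}},{\omega _{i}}{v_{i}}$) and then applying the unweighted NLNMSM. A minimal sketch building on nlnmsm() above; wnlnmsm is an illustrative name.

```python
def wnlnmsm(nlns, w, k):
    """WNLNMSM of Theorem 2; nlns: list of (u, v) pairs, w: weights summing to 1."""
    return nlnmsm([(wi * u, wi * v) for (u, v), wi in zip(nlns, w)], k)

print(wnlnmsm([(5, 0), (4, 1), (3, 1)], [0.41, 0.26, 0.33], 2))
```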

4 The Objective Weights Model Based on Entropy Measure

DMs usually give attribute weights from a subjective view, but such judgements often suffer from subjectivity and blindness due to limited knowledge, personal bias and so on. It is important to apply appropriate attribute weights during the decision process, so a weight determination model is necessary when the attribute weights are fully unknown. We therefore develop an objective weight model based on the entropy of NLNs. The steps are as follows.
Firstly, standardize the decision matrix $D={[{\tilde{s}_{ij}}]_{m\times n}}={[{s_{{u_{ij}}+{v_{ij}}I}}]_{m\times n}}$ into $R={[{r_{ij}}]_{m\times n}}={[{s_{{u^{\prime }_{ij}}+{v^{\prime }_{ij}}I}}]_{m\times n}}$ by
(37)
\[ {r_{ij}}=\left\{\begin{array}{l@{\hskip4.0pt}l}{\tilde{s}_{ij}}/{\tilde{s}_{j}^{+}}\hspace{1em}& (\text{for benefit attribute}),\\ {} {\tilde{s}_{j}^{-}}/{\tilde{s}_{ij}}\hspace{1em}& (\text{for cost attribute}),\end{array}\right.\]
where ${\tilde{s}_{j}^{+}}={\max _{i}}({\tilde{s}_{ij}})$, ${\tilde{s}_{j}^{-}}={\min _{i}}({\tilde{s}_{ij}})$.
Next, transform the standardized decision matrix $R={[{r_{ij}}]_{m\times n}}={[{s_{{u^{\prime }_{ij}}+{v^{\prime }_{ij}}I}}]_{m\times n}}$ into $X={[{\tilde{x}_{ij}}]_{m\times n}}$ by the expected value of the NLN:
(38)
\[ {\tilde{x}_{ij}}=\frac{({u^{\prime }_{ij}}+{v^{\prime }_{ij}}\times {I^{l}})+({u^{\prime }_{ij}}+{v^{\prime }_{ij}}\times {I^{u}})}{2(t-1)}.\]
Then, the entropy value of the jth attribute is calculated as
(39)
\[ {H_{j}}=-\frac{1}{\ln m}{\sum \limits_{i=1}^{m}}{\tilde{x}_{ij}}\ln {\tilde{x}_{ij}}.\]
Finally, the attribute weights are obtained as:
(40)
\[ {w_{j}}=\frac{1-{H_{j}}}{n-{\textstyle\textstyle\sum _{j=1}^{n}}{H_{j}}}.\]
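A minimal sketch of Eqs. (39)–(40) applied to a matrix X of expected values obtained via Eq. (38); entropy_weights is an illustrative name. Fed the rounded matrix X of Section 6.1 below, it comes close to the weights (0.41, 0.26, 0.33) reported there, with small deviations due to the rounding of X.

```python
from math import log

def entropy_weights(X):
    """X: m x n matrix of expected values in (0, 1]; returns n weights."""
    m, n = len(X), len(X[0])
    # Eq. (39): entropy of each attribute column.
    H = [-sum(X[i][j] * log(X[i][j]) for i in range(m)) / log(m)
         for j in range(n)]
    # Eq. (40): normalize the complements of the entropies.
    total = n - sum(H)
    return [(1 - H[j]) / total for j in range(n)]

X = [[0.15, 0.17, 0.17],
     [0.14, 0.17, 0.15],
     [0.17, 0.17, 0.13],
     [0.14, 0.24, 0.26]]
print(entropy_weights(X))  # close to [0.41, 0.26, 0.33] of Section 6.1
```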

5 An Approach to Group Decision Making with the WNLNMSM Operator

In this section, we will give decision steps for the MAGDM problems based on the WNLNMSM operators.
Let $R=\{{R_{1}},{R_{2}},\dots ,{R_{m}}\}$ be a set of alternatives, $M=\{{M_{1}},{M_{2}},\dots ,{M_{p}}\}$ be the set of DMs and $\lambda ={({\lambda _{1}},{\lambda _{2}},\dots ,{\lambda _{p}})^{T}}$ be the weight vector of the DMs ${M_{l}}$ $(l=1,2,\dots ,p)$. Let $U=\{{U_{1}},{U_{2}},\dots ,{U_{n}}\}$ be the set of attributes, whose weights are assumed to be unknown.
The lth $(l=1,2,\dots ,p)$ DM uses the NLN ${\tilde{s}_{ij}^{l}}={s_{{u_{ij}^{l}}+{v_{ij}^{l}}I}}$ (${u_{ij}^{l}},{v_{ij}^{l}}\in R$) to evaluate the attribute ${U_{j}}$ $(j=1,2,\dots ,n)$ of the alternative ${R_{i}}$ $(i=1,2,\dots ,m)$. Then the lth NLN decision matrix ${M^{l}}$ can be obtained:
\[ {M^{l}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}{\tilde{s}_{11}^{l}}& {\tilde{s}_{12}^{l}}& \cdots & {\tilde{s}_{1n}^{l}}\\ {} {\tilde{s}_{21}^{l}}& {\tilde{s}_{22}^{l}}& \cdots & {\tilde{s}_{2n}^{l}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {\tilde{s}_{m1}^{l}}& {\tilde{s}_{m2}^{l}}& \cdots & {\tilde{s}_{mn}^{l}}\end{array}\right].\]
Based on this information, the best selection should be given. Then we give specific decision steps as follows:
Step 1: Based on Eq. (32), aggregate the attribute values of the p decision matrices ${M^{l}}={({\tilde{s}_{ij}^{l}})_{m\times n}}$ $(l=1,2,\dots ,p)$ entry-wise by the WNLNMSM operator with the DM weight vector λ,
(41)
\[\begin{array}{l}\displaystyle {\tilde{s}_{ij}}={\mathit{WNLNMSM}^{(k)}}({\tilde{s}_{ij}^{1}},{\tilde{s}_{ij}^{2}},\dots ,{\tilde{s}_{ij}^{p}})\\ {} \displaystyle \hspace{1em}={s_{\begin{array}{l}{\Big(\frac{\textstyle\sum \limits_{1\leqslant {l_{1}}<\cdots <{l_{k}}\leqslant p}{\textstyle\prod \limits_{q=1}^{k}}{\lambda _{{l_{q}}}}{u_{ij}^{{l_{q}}}}}{{C_{p}^{k}}}\Big)^{\frac{1}{k}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {l_{1}}<\cdots <{l_{k}}\leqslant p}{\textstyle\prod \limits_{q=1}^{k}}({\lambda _{{l_{q}}}}{u_{ij}^{{l_{q}}}}+{\lambda _{{l_{q}}}}{v_{ij}^{{l_{q}}}})}{{C_{p}^{k}}}\Big)^{\frac{1}{k}}}\\ {} \hspace{1em}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {l_{1}}<\cdots <{l_{k}}\leqslant p}{\textstyle\prod \limits_{q=1}^{k}}{\lambda _{{l_{q}}}}{u_{ij}^{{l_{q}}}}}{{C_{p}^{k}}}\Big)^{\frac{1}{k}}}\Big)I.\end{array}}}\end{array}\]
Step 2: Obtain the attribute weight vector $\omega ={({\omega _{1}},{\omega _{2}},\dots ,{\omega _{n}})^{T}}$ of the attributes $\{{U_{1}},{U_{2}},\dots ,{U_{n}}\}$ by Eqs. (37)–(40).
Step 3: Based on Eq. (32), obtain the collective evaluation information of alternative ${R_{i}}$.
(42)
\[\begin{aligned}{}{\tilde{s}_{i}}=& {\mathit{WNLNMSM}^{(k)}}({\tilde{s}_{i1}},{\tilde{s}_{i2}},\dots ,{\tilde{s}_{in}})\\ {} =& {s_{\begin{array}{l}{\Big(\frac{\textstyle\sum \limits_{1\leqslant {j_{1}}<\cdots <{j_{k}}\leqslant n}{\textstyle\prod \limits_{q=1}^{k}}{w_{{j_{q}}}}{u_{i{j_{q}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {j_{1}}<\cdots <{j_{k}}\leqslant n}{\textstyle\prod \limits_{q=1}^{k}}({w_{{j_{q}}}}{u_{i{j_{q}}}}+{w_{{j_{q}}}}{v_{i{j_{q}}}})}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}\\ {} \hspace{1em}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {j_{1}}<\cdots <{j_{k}}\leqslant n}{\textstyle\prod \limits_{q=1}^{k}}{w_{{j_{q}}}}{u_{i{j_{q}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}\Big)I,\end{array}}}\end{aligned}\]
where $i=1,2,\dots ,m$.
Step 4: Transform each NLN ${\tilde{s}_{i}}$ $(i=1,2,\dots ,m)$ into its interval form ${\tilde{s}_{i}}={s_{{u_{i}}+{v_{i}}I}}\in {s_{[{u_{i}}+{v_{i}}\times {I^{L}},{u_{i}}+{v_{i}}\times {I^{H}}]}}$. Then, the expected value $\mathit{EX}({\tilde{s}_{i}})$ $(i=1,2,\dots ,m)$ is obtained by Eq. (20).
Step 5: Rank ${\tilde{s}_{i}}$ $(i=1,2,\dots ,m)$ in terms of $EX({\tilde{s}_{i}})$ of NLNs and the comparison method in Definition 5.
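A hedged end-to-end sketch of Steps 1, 3, 4 and 5, reusing wnlnmsm() and nln_ex() above; the attribute weight vector w is taken as given (e.g. produced by entropy_weights() after the standardization of Step 2, which is omitted here), and rank_alternatives is an illustrative name.

```python
def rank_alternatives(Ms, lam, w, k, il, ih, t):
    """Ms: p decision matrices of (u, v) pairs, each m x n; lam: DM weights;
    w: attribute weights; I in [il, ih]; t: size of the linguistic term set."""
    p, m, n = len(Ms), len(Ms[0]), len(Ms[0][0])
    scored = []
    for i in range(m):
        # Step 1: fuse the p DM evaluations of each attribute j, Eq. (41).
        row = [wnlnmsm([Ms[l][i][j] for l in range(p)], lam, k)
               for j in range(n)]
        # Step 3: collect the row into one NLN per alternative, Eq. (42).
        u, v = wnlnmsm(row, w, k)
        # Step 4: expected value of the collective NLN, Eq. (20).
        scored.append((nln_ex(u, v, il, ih, t), i + 1))
    # Step 5: rank the alternatives by expected value, best first.
    return sorted(scored, reverse=True)
```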

6 Numerical Examples

In this section, two practical examples (adapted from Ye, 2016c, 2017) are demonstrated to illustrate the proposed MAGDM method. In order to better show the superiority of the proposed method, we suppose the attribute weights are unknown and revise some data accordingly.
Example 2 (Adapted from Ye, 2016c).
Suppose there are four manufacturing alternatives $R=\{{R_{1}},{R_{2}},{R_{3}},{R_{4}}\}$ in a flexible manufacturing system, and there are three attributes to be considered: (1) ${U_{1}}$: quality; (2) ${U_{2}}$: the market prospect; (3) ${U_{3}}$: technique. A group of three DMs (DM${_{1}}$, DM${_{2}}$, DM${_{3}}$) gives evaluation information based on the LTs
\[\begin{aligned}{}S=& \{{s_{0}}=\text{worst},{s_{1}}=\text{worse},{s_{2}}=\text{bad},{s_{3}}=\text{medium},{s_{4}}=\text{good},{s_{5}}=\text{better},\\ {} & {s_{6}}=\text{excellent}\},\end{aligned}\]
and the weight vector of the three DMs is $\lambda =(0.30,0.36,0.34)$. During the decision process, we consider the variation range of indeterminacy I to reflect DMs’ preference. So we suppose the lower limit of I is ${I^{L}}=0$ and the upper limit of I is ${I^{H}}=0.1$. DM${_{p}}$ $(p=1,2,3)$ gives the attribute values on the alternative ${R_{i}}$ $(i=1,2,3,4)$ by NLNs, and three NLN decision matrices are listed as follows
\[\begin{array}{l}\displaystyle {M^{1}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{5}}& {s_{4+I}}& {s_{3+I}}\\ {} {s_{4}}& {s_{5}}& {s_{4+I}}\\ {} {s_{5+I}}& {s_{4+2I}}& {s_{4+I}}\\ {} {s_{5}}& {s_{4+I}}& {s_{5+2I}}\end{array}\right],\hspace{2em}{M^{2}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{4+I}}& {s_{5}}& {s_{3}}\\ {} {s_{5}}& {s_{4+I}}& {s_{3+I}}\\ {} {s_{5}}& {s_{4+I}}& {s_{4}}\\ {} {s_{4+I}}& {s_{5}}& {s_{5+I}}\end{array}\right],\\ {} \displaystyle {M^{3}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{5+I}}& {s_{4}}& {s_{3+I}}\\ {} {s_{4+I}}& {s_{4}}& {s_{3+I}}\\ {} {s_{5+I}}& {s_{5}}& {s_{4+I}}\\ {} {s_{4+2I}}& {s_{4+I}}& {s_{4}}\end{array}\right].\end{array}\]
The goal is to select the best flexible manufacturing system.

6.1 The Procedure of Decision-Making Based on the WNLNMSM Operator

To distinguish the most desirable alternative(s), the concrete steps are as follows:
Step 1: Based on Eq. (41), aggregate the attribute values from each DM by the WNLNMSM operator. We take $k=2$ here, and the aggregated matrix M is shown as follows:
\[ M=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{1.54+0.23I}}& {s_{1.44+0.11I}}& {s_{1.00+0.21I}}\\ {} {s_{1.44+0.11I}}& {s_{1.43+0.12I}}& {s_{1.10+0.33I}}\\ {} {s_{1.66+0.21I}}& {s_{1.44+0.33I}}& {s_{1.33+0.21I}}\\ {} {s_{1.43+0.34I}}& {s_{1.44+0.22I}}& {s_{1.55+0.31I}}\end{array}\right]\]
Step 2: Obtain the attribute weight vector $\omega ={({\omega _{1}},{\omega _{2}},{\omega _{3}})^{T}}$ of $\{{U_{1}},{U_{2}},{U_{3}}\}$ by Eqs. (37)–(40); we get
\[\begin{array}{l}\displaystyle R=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{0.93+0.02I}}& {s_{1.00-0.07I}}& {s_{1}}\\ {} {s_{0.87-0.04I}}& {s_{0.99-0.06I}}& {s_{0.91}}\\ {} {s_{1}}& {s_{1.00+0.06I}}& {s_{0.75+0.03I}}\\ {} {s_{0.86+0.08I}}& {s_{1}}& {s_{0.65+0.01I}}\end{array}\right],\\ {} \displaystyle X=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.15& 0.17& 0.17\\ {} 0.14& 0.17& 0.15\\ {} 0.17& 0.17& 0.13\\ {} 0.14& 0.24& 0.26\end{array}\right],\\ {} \displaystyle {H_{j}}={(0.83,0.89,0.86)^{T}},\\ {} \displaystyle {\omega _{j}}={(0.41,0.26,0.33)^{T}}.\end{array}\]
Step 3: Based on Eq. (42), we obtain the collective evaluation value of each alternative ($k=2$):
\[ {\tilde{s}_{1}}={s_{0.436+0.062I}},\hspace{1em}{\tilde{s}_{2}}={s_{0.436+0.064I}},\hspace{1em}{\tilde{s}_{3}}={s_{0.490+0.082I}},\hspace{1em}{\tilde{s}_{4}}={s_{0.487+0.097I}}.\]
Step 4: Calculate the expected value of each ${\tilde{s}_{i}}$ $(i=1,2,3,4)$ by applying Eq. (20):
\[\begin{array}{l}\displaystyle EX({\tilde{s}_{1}})=0.07315,\hspace{2em}EX({\tilde{s}_{2}})=0.07316,\\ {} \displaystyle EX({\tilde{s}_{3}})=0.08231,\hspace{2em}EX({\tilde{s}_{4}})=0.08200.\end{array}\]
Step 5: Rank the alternatives. Since $EX({\tilde{s}_{3}})\succ EX({\tilde{s}_{4}})\succ EX({\tilde{s}_{2}})\succ EX({\tilde{s}_{1}})$, the ranking result is ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$.
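As a hedged check, feeding the three matrices of Example 2 (as (u, v) pairs) to the rank_alternatives() sketch above, with $\lambda =(0.30,0.36,0.34)$, the entropy weights $(0.41,0.26,0.33)$ from Step 2, $k=2$, $I\in [0,0.1]$ and $t=7$, reproduces expected values close to those reported and the ranking ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$.

```python
M1 = [[(5, 0), (4, 1), (3, 1)],
      [(4, 0), (5, 0), (4, 1)],
      [(5, 1), (4, 2), (4, 1)],
      [(5, 0), (4, 1), (5, 2)]]
M2 = [[(4, 1), (5, 0), (3, 0)],
      [(5, 0), (4, 1), (3, 1)],
      [(5, 0), (4, 1), (4, 0)],
      [(4, 1), (5, 0), (5, 1)]]
M3 = [[(5, 1), (4, 0), (3, 1)],
      [(4, 1), (4, 0), (3, 1)],
      [(5, 1), (5, 0), (4, 1)],
      [(4, 2), (4, 1), (4, 0)]]

print(rank_alternatives([M1, M2, M3], [0.30, 0.36, 0.34],
                        [0.41, 0.26, 0.33], 2, 0.0, 0.1, 7))
# approx. [(0.0822, 3), (0.0819, 4), (0.0731, 2), (0.0731, 1)]
```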

6.2 Discussion of the Influence of Parameters

6.2.1 The Influence of the Indeterminate Ranges for I in NLNs

Because NLNs can express uncertain information by LVs, which copes with a difficulty of the existing linguistic format, it is necessary to consider how the ranking of the alternatives changes with different indeterminate ranges for I. For convenience, we take different variation ranges of the indeterminacy I and repeat Steps 3–5; the results are listed in Table 2.
Table 2
Ranking results by utilizing the different ranges for I in NLNs.
I $EX({\tilde{s}_{i}})$ Ranking
$I\in [-0.7,0]$ $EX({\tilde{s}_{1}})=0.0690$, $EX({\tilde{s}_{2}})=0.0689$, $EX({\tilde{s}_{3}})=0.0769$, $EX({\tilde{s}_{4}})=0.0755$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I\in [-0.5,0]$ $EX({\tilde{s}_{1}})=0.0700$, $EX({\tilde{s}_{2}})=0.0699$, $EX({\tilde{s}_{3}})=0.0782$, $EX({\tilde{s}_{4}})=0.0771$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I\in [-0.3,0]$ $EX({\tilde{s}_{1}})=0.0711$, $EX({\tilde{s}_{2}})=0.0710$, $EX({\tilde{s}_{3}})=0.0796$, $EX({\tilde{s}_{4}})=0.0788$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I\in [-0.1,0]$ $EX({\tilde{s}_{1}})=0.07211$, $EX({\tilde{s}_{2}})=0.07210$, $EX({\tilde{s}_{3}})=0.081$, $EX({\tilde{s}_{4}})=0.08$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I=0$ $EX({\tilde{s}_{1}})=0.072632$, $EX({\tilde{s}_{2}})=0.072631$, $EX({\tilde{s}_{3}})=0.0816$, $EX({\tilde{s}_{4}})=0.0812$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I\in [0,0.1]$ $EX({\tilde{s}_{1}})=0.07315$, $EX({\tilde{s}_{2}})=0.07316$, $EX({\tilde{s}_{3}})=0.0823$, $EX({\tilde{s}_{4}})=0.0820$ ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$
$I\in [0,0.3]$ $EX({\tilde{s}_{1}})=0.07419$, $EX({\tilde{s}_{2}})=0.07422$, $EX({\tilde{s}_{3}})=0.0837$, $EX({\tilde{s}_{4}})=0.0836$ ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$
$I\in [0,0.5]$ $EX({\tilde{s}_{1}})=0.0752$, $EX({\tilde{s}_{2}})=0.0753$, $EX({\tilde{s}_{3}})=0.0850$, $EX({\tilde{s}_{4}})=0.0852$ ${R_{4}}\succ {R_{3}}\succ {R_{2}}\succ {R_{1}}$
$I\in [0,0.7]$ $EX({\tilde{s}_{1}})=0.07625$, $EX({\tilde{s}_{2}})=0.07634$, $EX({\tilde{s}_{3}})=0.0864$, $EX({\tilde{s}_{4}})=0.0869$ ${R_{4}}\succ {R_{3}}\succ {R_{2}}\succ {R_{1}}$
From Table 2, it is obvious that the ranking orders differ: ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$ from $I\in [-0.7,0]$ to $I=0$, ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$ from $I\in [0,0.1]$ to $I\in [0,0.3]$, and ${R_{4}}\succ {R_{3}}\succ {R_{2}}\succ {R_{1}}$ for $I\in [0,0.5]$ and $I\in [0,0.7]$. The illustrative example shows that different variation ranges of the indeterminacy I of NLNs can lead to different ranking results. In addition, if there is no indeterminacy I in NLNs, i.e. $I=0$, the proposed MAGDM method reduces to a classical one with LVs. Therefore, a prominent advantage of the proposed method is that it can successfully solve decision problems with NLN information (uncertain linguistic information). In such cases, the proposed method gives a more general and more suitable way to express the DMs’ preference by assigning different ranges of the indeterminacy I during the decision process, so the ranking result obtained is more scientific and rational because it takes the DMs’ preference in real decision problems into account.

6.2.2 The Influence of the Parameter k in WNLNMSM Operator

Different values of the parameter k carry different meanings and lead to different results, so we rank the alternatives for different values of k to show its influence. The ranking results are given in Table 3.
Table 3
Ranking results by utilizing the different k.
$EX({\tilde{s}_{1}})$ $EX({\tilde{s}_{2}})$ $EX({\tilde{s}_{3}})$ $EX({\tilde{s}_{4}})$ Ranking
$k=1$ 0.07485 0.07421 0.08379 0.08268 ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$k=2$ 0.07315 0.07316 0.08231 0.08200 ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$
$k=3$ 0.07180 0.07231 0.08108 0.08125 ${R_{4}}\succ {R_{3}}\succ {R_{2}}\succ {R_{1}}$
As we can see from Table 3, the ranking of the alternatives changes as the parameter k changes, which shows the flexibility of the WNLNMSM operator. When $k=1$, no interrelationship among the attributes is considered and the proposed operator reduces to a simple arithmetic average operator; when $k=2$, the interrelationships between any two attributes are considered, similarly to the BM or HM operators; when $k=3$, the interrelationships among any three attributes are considered. So the WNLNMSM operator is more general than these operators. As Qin and Liu (2014) stated, the parameter k reflects the DMs’ subjective preferences: a risk-preferring DM will take a larger parameter, while a risk-averse one may select a smaller parameter. In other words, it is effective and necessary for DMs to adopt an appropriate parameter k based on their risk attitude. Because DMs are usually risk neutral and we need to fully consider the interactions of the individual arguments, we usually select $k=[n/2]$ in practical decision problems (Qin and Liu, 2014), where [ ] is the round function and n is the number of aggregated arguments.
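Using the sketches above, the influence of k can be inspected directly by sweeping it while holding the Step 2 entropy weights fixed (the paper does not state whether the weights are recomputed per k, so the numbers here may differ slightly from Table 3):

```python
for k in (1, 2, 3):
    print(k, rank_alternatives([M1, M2, M3], [0.30, 0.36, 0.34],
                               [0.41, 0.26, 0.33], k, 0.0, 0.1, 7))
```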

6.3 Comparative Analysis

This example is taken from Ye (2016c) with the same data and attribute weights, so we can compare our result with that in Ye (2016c). The initial information is as follows:
\[\begin{array}{l}\displaystyle {M^{1}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{5}}& {s_{4+I}}& {s_{3+I}}\\ {} {s_{4}}& {s_{5}}& {s_{4+I}}\\ {} {s_{4+I}}& {s_{4+I}}& {s_{4}}\\ {} {s_{5}}& {s_{4+I}}& {s_{4}}\end{array}\right],\hspace{2em}{M^{2}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{4+I}}& {s_{5}}& {s_{3}}\\ {} {s_{5}}& {s_{4}}& {s_{3+I}}\\ {} {s_{5}}& {s_{4+I}}& {s_{4}}\\ {} {s_{4+I}}& {s_{5}}& {s_{5+I}}\end{array}\right],\\ {} \displaystyle {M^{3}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{5+I}}& {s_{4}}& {s_{3+I}}\\ {} {s_{4+I}}& {s_{4}}& {s_{3}}\\ {} {s_{5}}& {s_{5}}& {s_{4+I}}\\ {} {s_{4}}& {s_{4+I}}& {s_{4}}\end{array}\right].\end{array}\]
The weight vector of the three DMs is $\lambda ={(0.30,0.36,0.34)^{T}}$ and the weight vector of the three attributes is $V={(0.20,0.50,0.30)^{T}}$.
Firstly, we use the proposed method based on the WNLNMSM operator to get the ranking result. We aggregate the evaluation information of the DMs by the WNLNMSM operator in formula (41) with $k=2$, and obtain the integrated matrix M from the above decision matrices:
\[ M=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{1.54+0.23I}}& {s_{1.44+0.11I}}& {s_{1.00+0.21I}}\\ {} {s_{1.44+0.11I}}& {s_{1.43}}& {s_{1.10+0.21I}}\\ {} {s_{1.56+0.11I}}& {s_{1.44+0.22I}}& {s_{1.33+0.11I}}\\ {} {s_{1.43+0.12I}}& {s_{1.44+0.22I}}& {s_{1.44+0.10I}}\end{array}\right].\]
Next, based on formula (42) with $k=2$, we obtain the collective evaluation values of the alternatives:
\[ {\tilde{s}_{1}}={s_{0.421+0.057I}},\hspace{1em}{\tilde{s}_{2}}={s_{0.423+0.034I}},\hspace{1em}{\tilde{s}_{3}}={s_{0.460+0.050I}},\hspace{1em}{\tilde{s}_{4}}={s_{0.463+0.049I}}.\]
Then, we calculate the expected value of each ${\tilde{s}_{i}}$ $(i=1,2,3,4)$ by applying Eq. (20):
\[\begin{array}{l}\displaystyle EX({\tilde{s}_{1}})=0.0706,\hspace{2em}EX({\tilde{s}_{2}})=0.0709,\\ {} \displaystyle EX({\tilde{s}_{3}})=0.0771,\hspace{2em}EX({\tilde{s}_{4}})=0.0775.\end{array}\]
Finally, we obtain the ranking result ${R_{4}}\succ {R_{3}}\succ {R_{2}}\succ {R_{1}}$, which is the same as the result in Ye (2016c). This shows the effectiveness of the proposed method.
The weights of attribute values play an important role in typical MADM methods. There exist two types of attribute weights: subjective weights and objective weights. The subjective weights in Ye (2016c) are determined on the basis of the DMs’ preferences or judgements, while the proposed method utilizes the information entropy model to determine the attribute weights, which can effectively relieve the subjective influence of the DMs. Because the objective weight method avoids the subjectivity caused by a DM’s personal bias, it is worthwhile to adopt the objective entropy weight method in such circumstances, and the result of the proposed method is accordingly more objective than that of the method based on the initial information in Ye (2016c).
With NLNs, whether a method is based on subjective weights or on objective entropy weights, it can make full use of the decision information, including both the determinate part and the indeterminate part. Furthermore, the WNLNMSM operator can consider the interactions among multiple attributes and thus produce more reasonable ranking results. In a word, both methods can well express uncertain linguistic information under linguistic decision-making environments, which leads to more reasonable decisions.
For the purpose of comparison, we also discuss the case of extending the MSM operator to plain LVs, called the WLMSM operator. That is, there is no indeterminacy I in the NLNs (i.e. $I=0$), so the MAGDM method reduces to a classical one with LVs. In this case, the decision matrices can be constructed as follows:
\[\begin{array}{l}\displaystyle {M^{1}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{5.0}}& {s_{4.0}}& {s_{3.0}}\\ {} {s_{4.0}}& {s_{5.0}}& {s_{4.0}}\\ {} {s_{4.0}}& {s_{4.0}}& {s_{4.0}}\\ {} {s_{5.0}}& {s_{4.0}}& {s_{4.0}}\end{array}\right],\hspace{2em}{M^{2}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{4.0}}& {s_{5.0}}& {s_{3.0}}\\ {} {s_{5.0}}& {s_{4.0}}& {s_{3.0}}\\ {} {s_{5.0}}& {s_{4.0}}& {s_{4.0}}\\ {} {s_{4.0}}& {s_{5.0}}& {s_{5.0}}\end{array}\right],\\ {} \displaystyle {M^{3}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}{s_{5.0}}& {s_{4.0}}& {s_{3.0}}\\ {} {s_{4.0}}& {s_{4.0}}& {s_{3.0}}\\ {} {s_{5.0}}& {s_{5.0}}& {s_{4.0}}\\ {} {s_{4.0}}& {s_{4.0}}& {s_{4.0}}\end{array}\right].\end{array}\]
The decision-making Steps 1–3 are similar to the procedure of the WNLNMSM operator, and we obtain ${\tilde{s}_{1}}={s_{0.444}}$, ${\tilde{s}_{2}}={s_{0.442}}$, ${\tilde{s}_{3}}={s_{0.478}}$, ${\tilde{s}_{4}}={s_{0.475}}$. Clearly, the ranking order is ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$ according to the comparison rule of LVs, which is close to that of the WNLNMSM operator, because the determinate part of the decision matrix is identical to the initial decision matrix when $I=0$. On the other hand, we can see that the indeterminacy I plays an important role in NLNs, particularly when we handle uncertain, inconsistent and imperfect information in the decision process: DMs can choose different indeterminate ranges for I on the basis of their preference in real decision-making situations. In a word, NLNs can depict more comprehensive information than LVs.
NLNs can depict uncertain and imperfect information by LVs. Therefore, to further highlight the superiority of NLNs, we use the proposed method to cope with an investment problem from Ye (2017).
Example 3.
A company plans to develop a new investment; there are four possible alternatives and three criteria (suppose the criterion weight vector is $W={(0.35,0.25,0.4)^{T}}$): the risk management (${C_{1}}$), the growth development (${C_{2}}$) and the social environment (${C_{3}}$). Three DMs (suppose the weight vector is $\lambda ={(0.37,0.33,0.3)^{T}}$) give the evaluation information of the alternatives $\{{R_{1}},{R_{2}},{R_{3}},{R_{4}}\}$ with respect to the criteria ${C_{j}}$ $(j=1,2,3)$ by NLNs based on the LTs: $S=\{{s_{0}}=\text{worst},{s_{1}}=\text{worse},{s_{2}}=\text{bad},{s_{3}}=\text{medium},{s_{4}}=\text{good},{s_{5}}=\text{better},{s_{6}}=\text{excellent}\}$. For simplicity, we omit the specific steps. We obtain the corresponding expected values of the collective evaluation value of each alternative ($k=2$): $EX({\tilde{s}_{1}})=0.0764$, $EX({\tilde{s}_{2}})=0.1049$, $EX({\tilde{s}_{3}})=0.0868$, $EX({\tilde{s}_{4}})=0.1039$, and the resulting ranking is ${R_{2}}\succ {R_{4}}\succ {R_{3}}\succ {R_{1}}$, while the ranking obtained by Ye’s (2017) method is ${R_{4}}\succ {R_{2}}\succ {R_{3}}\succ {R_{1}}$.
Compared with the proposed approach, the approach in Ye (2017) is a traditional decision approach: a bidirectional projection model, which selects the optimal alternative by calculating the bidirectional projection value between each alternative and the ideal solution. Although traditional decision approaches may capture the intuitive preferences of DMs by comparing the alternatives ${R_{i}}$, they do not consider the interactions among multiple attributes. Such interrelationships between attributes are common; in Example 3, for instance, both the risk and the growth of an investment are connected with its social environment. Therefore, our proposed approach based on aggregation operators is closer to reality, especially since it can consider the interactions among multiple attributes according to the DMs' preferences. In addition, decision information in the form of NLNs can depict fuzzier content than NNs. In conclusion, the proposed approach is more suitable in practice than approaches based on NNs.
Because the operations of NLNs are similar to those of NNs, we do not apply any transformation between NLNs and NNs in Example 3. In fact, the 2-tuple linguistic approach and linguistic scale functions can realize the interconversion between linguistic information and numerical information, as sketched below. In future research, we can improve the operations of NLNs and develop new transformations between NLNs and NNs that reflect the psychological processes of DMs and reduce information loss.
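As an illustration of the second route, here is a minimal sketch of a linear linguistic scale function; the paper does not commit to a particular function, so the linear form and the function names below are assumptions made for illustration only.

T = 3  # granularity of the LT set S = {s_0, ..., s_{2T}}

def lv_to_num(i, t=T):
    # One common (assumed) linear scale function: map the subscript i
    # of s_i, with 0 <= i <= 2t, onto the unit interval [0, 1].
    return i / (2 * t)

def num_to_lv(x, t=T):
    # Inverse map back to a (possibly virtual) linguistic subscript.
    return 2 * t * x

print(lv_to_num(4))    # s_4 ("good") -> about 0.667
print(num_to_lv(0.5))  # 0.5 -> s_3 ("medium")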
In summary, the proposed MAGDM method can consider the interactions among multiple attributes and obtains the attribute weights from an objective viewpoint, so it is more general and more effective than existing approaches in the neutrosophic linguistic environment. Furthermore, because NLNs can depict uncertain and imperfect information by LVs, the proposed approach is more suitable in practice than other linguistic representations.

7 Conclusion

NLNs can better depict uncertain, inconsistent and imperfect information by LVs, and the traditional MSM operator can consider the interactions among multiple attributes. Combining the advantages of both, we extended the traditional MSM operator to NLNs and made full use of their advantages in practical applications. Firstly, we developed the NLNMSM and WNLNMSM operators and discussed some of their characteristics and special cases when the parameter k takes different values. Then, a novel MAGDM method based on the WNLNMSM operator was established in the NLN setting. The main contribution of the developed method is that it can consider the interactions among multiple attributes expressed in the form of NLNs. In addition, the proposed MAGDM method under the neutrosophic linguistic environment is more suitable than other existing methods because it easily depicts and handles the uncertain and imperfect linguistic information that widely exists in reality. Finally, a practical example of manufacturing alternative selection was given to detail the process of the proposed method. In future research, other methods with NLNs, such as TOPSIS and ELECTRE, should be developed and applied to real MAGDM problems, especially when the uncertain information lies in specified ranges. We can also improve the basic theory of NLNs, such as the operational rules and the score function, and extend the proposed operators to other domains.

Acknowledgements

This paper is supported by the National Natural Science Foundation of China (Nos. 71771140, 71471172), the Special Funds of Taishan Scholars Project of Shandong Province (No. ts201511045), Shandong Provincial Social Science Planning Project (Nos. 17BGLJ04, 16CGLJ31 and 16CKJJ27), the Natural Science Foundation of Shandong Province (No. ZR2017MG007), and Key research and development program of Shandong Province (No. 2016GNC110016).

References

 
Cabrerizo, F.J., Ureña, M.R., Pedrycz, W., Herrera-Viedma, E. (2014). Building consensus in group decision making with an allocation of information granularity. Fuzzy Sets and Systems, 255, 115–127.
 
Guan, J., Zhou, D., Meng, F.Y. (2017). Distance measure and correlation coefficient for linguistic hesitant fuzzy sets and their application. Informatica, 28(2), 237–268.
 
Herrera, F., Herrera-Viedma, E. (1996). A model of consensus in group decision making under linguistic assessments. Fuzzy Sets and Systems, 78(1), 73–87.
 
Herrera, F., Herrera-Viedma, E. (2000). Linguistic decision analysis: steps for solving decision problems under linguistic information. Fuzzy Sets and Systems, 115(1), 67–82.
 
Herrera, F., Martínez, L. (2000). A 2-tuple fuzzy linguistic representation model for computing with words. IEEE Transactions on Fuzzy Systems, 8(6), 746–752.
 
Ju, Y., Liu, X., Yang, S. (2014). Trapezoid fuzzy 2-tuple linguistic aggregation operators and their applications to multiple attribute decision making. Journal of Intelligent & Fuzzy Systems, 27(3), 1219–1232.
 
Kong, L., Wu, Y., Ye, J. (2015). Misfire fault diagnosis method of gasoline engines using the cosine similarity measure of neutrosophic numbers. Neutrosophic Sets and Systems, 8, 42–45.
 
Li, C.C., Dong, Y., Herrera, F., Herrera-Viedma, E., Martínez, L. (2017). Personalized individual semantics in computing with words for supporting linguistic group decision making. An application on consensus reaching. Information Fusion, 33(1), 29–40.
 
Liu, P.D., Chen, S.M. (2017). Group decision making based on Heronian aggregation operators of intuitionistic fuzzy numbers. IEEE Transactions on Cybernetics, 47(9), 2514–2530.
 
Liu, P.D., Chen, S.M., Liu, J. (2017a). Some intuitionistic fuzzy interaction partitioned Bonferroni mean operators and their application to multi-attribute group decision making. Information Sciences, 411, 98–121.
 
Liu, W., Dong, Y., Chiclana, F., Cabrerizo, F.J., Herrera-Viedma, E. (2017b). Group decision-making based on heterogeneous preference relations with self-confidence. Fuzzy Optimization and Decision Making, 16(4), 429–447.
 
Liu, P.D., Shi, L. (2015). The generalized hybrid weighted average operator based on interval neutrosophic hesitant set and its application to multiple attribute decision making. Neural Computing and Applications, 26(2), 457–471.
 
Maclaurin, C. (1729). A second letter to Martin Folkes, Esq; concerning the roots of equations, with demonstration of other rules of algebra. Philosophical Transactions of the Royal Society of London A, 36, 59–96.
 
Meng, F. (2017). An approach to hesitant fuzzy group decision making with multi-granularity linguistic information. Informatica, 27(4).
 
Morente-Molinera, J.A., Mezei, J., Carlsson, C., Herrera-Viedma, E. (2017). Improving supervised learning classification methods using multi-granular linguistic modelling and fuzzy entropy. IEEE Transactions on Fuzzy Systems, 25(5), 1078–1089.
 
Mulliner, E., Malys, N., Maliene, V. (2015). Comparative analysis of MCDM methods for the assessment of sustainable housing affordability. Omega, 59, 146–156.
 
Ou, Y.C. (2016). Using a hybrid decision-making model to evaluate the sustainable development performance of high-tech listed companies. Journal of Business Economics and Management, 17(3), 331–346.
 
Peng, J.J., Wang, J.Q., Zhang, H.Y., Chen, X.H. (2014). An outranking approach for multi-criteria decision-making problems with simplified neutrosophic sets. Applied Soft Computing, 25, 336–346.
 
Qin, J.D. (2017). Generalized Pythagorean fuzzy Maclaurin symmetric means and its application to multiple attribute SIR group decision model. International Journal of Fuzzy Systems, 1, 1–15.
 
Qin, J.D., Liu, X.W. (2014). An approach to intuitionistic fuzzy multiple attribute decision making based on Maclaurin symmetric mean operators. Journal of Intelligent & Fuzzy Systems, 27(5), 2177–2190.
 
Smarandache, F. (1998). Neutrosophy: Neutrosophic Probability, Set, and Logic. American Research Press, Rehoboth, USA.
 
Smarandache, F. (2013). Introduction to neutrosophic measure, neutrosophic integral, and neutrosophic probability. Computer Science, 22(1), 13–25.
 
Smarandache, F. (2014). Introduction to Neutrosophic Statistics. Sitech and Education Publishing, Craiova-Columbus.
 
Smarandache, F. (2015). Symbolic Neutrosophic Theory. EuropaNova asbl, Brussels, Belgium.
 
Stanujkic, D., Zavadskas, E.K., Karabasevic, D., Turskis, Z., Keršulienė, V. (2017). New group decision-making ARCAS approach based on the integration of the SWARA and the ARAS methods adapted for negotiations. Journal of Business Economics and Management, 18(4), 599–618.
 
Xu, Z.S. (2004). Uncertain linguistic aggregation operators based approach to multiple attribute group decision making under uncertain linguistic environment. Information Sciences, 168(1), 171–184.
 
Xu, Z.S. (2006a). A note on linguistic hybrid arithmetic averaging operator in multiple attribute group decision making with linguistic information. Group Decision and Negotiation, 15(6), 593–604.
 
Xu, Z.S. (2006b). Goal programming models for multiple attribute decision making under linguistic setting. Journal of Management Sciences in China, 9(2), 9–17.
 
Xu, Z.S., Yager, R.R. (2011). Intuitionistic fuzzy Bonferroni means. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41(2), 568–578.
 
Ye, J. (2014). Clustering methods using distance-based similarity measures of single-valued neutrosophic sets. Journal of Intelligent Systems, 23(4), 379–389.
 
Ye, J. (2016a). Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses. Artificial Intelligence in Medicine, 63(3), 171–179.
 
Ye, J. (2016b). Multiple-attribute group decision-making method under a neutrosophic number environment. Journal of Intelligent Systems, 25(3), 377–386.
 
Ye, J. (2016c). Aggregation operators of neutrosophic linguistic numbers for multiple attribute group decision making. SpringerPlus, 5(1), 1691.
 
Ye, J. (2017). Bidirectional projection method for multiple attribute group decision making with neutrosophic numbers. Neural Computing and Applications, 28(5), 1021–1029.
 
Zadeh, L.A. (1975). The concept of a linguistic variable and its application to approximate reasoning. Part I. Information Sciences, 8(3), 199–249.
 
Zavadskas, E.K., Bausys, R., Juodagalviene, B., Garnyte-Sapranaviciene, I. (2017). Model for residential house element and material selection by neutrosophic MULTIMOORA method. Engineering Applications of Artificial Intelligence, 64, 315–324.

Biographies

Liu Peide
Peide.liu@gmail.com

P. Liu received the BS and MS degrees in signal and information processing from Southeast University, Nanjing, China, in 1988 and 1991, respectively, and the PhD degree in information management from Beijing Jiaotong University, Beijing, China, in 2010. He is currently a professor with the School of Management Science and Engineering, Shandong University of Finance and Economics, Shandong, China. He is an associate editor of the Journal of Intelligent and Fuzzy Systems, a member of the editorial board of the journal Technological and Economic Development of Economy, and a member of the editorial boards of 12 other journals. He has authored or co-authored more than 250 publications. His research interests include aggregation operators, fuzzy logic, fuzzy decision making, and their applications.

You Xinli

X. You received the BS degree in electronic commerce management from the Shandong University of Finance and Economics, China, in 2016. She is now studying for a master's degree in management science and engineering at the Shandong University of Finance and Economics, Shandong, China. She has authored or co-authored 4 publications. Her research interests include aggregation operators, fuzzy logic, fuzzy decision making, and their applications.



Copyright
© 2018 Vilnius University
Open access article under the CC BY license.

Keywords
neutrosophic linguistic numbers, Maclaurin symmetric mean (MSM) operator, MAGDM


Fig. 1
A neutrosophic number.
Table 1
The advantages and disadvantages of NLNs and NNs.
NNs
  Advantages: NNs can express uncertain, inconsistent, and imperfect information by a determinate part u and an indeterminate part $vI$; the indeterminate degree I can be assigned by DMs according to their preferences or real requirements.
  Disadvantages: It is difficult to express a complex determinate part. For example, in a voting process the results may include a determinate part (five votes in favour and two votes against) and an indeterminate part (one absent vote).
NLNs
  Advantages: NLNs combine LVs and NNs, so they can more easily depict incompleteness, indeterminacy, and inconsistency than crisp numbers or fuzzy numbers; the indeterminate degree I can be assigned by DMs.
  Disadvantages: The indeterminate part cannot distinguish the falsity-membership degree.
Table 2
Ranking results by utilizing the different ranges for I in NLNs.
Range of I $EX({\tilde{s}_{i}})$ Ranking
$I\in [-0.7,0]$ $EX({\tilde{s}_{1}})=0.0690$, $EX({\tilde{s}_{2}})=0.0689$, $EX({\tilde{s}_{3}})=0.0769$, $EX({\tilde{s}_{4}})=0.0755$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I\in [-0.5,0]$ $EX({\tilde{s}_{1}})=0.0700$, $EX({\tilde{s}_{2}})=0.0699$, $EX({\tilde{s}_{3}})=0.0782$, $EX({\tilde{s}_{4}})=0.0771$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I\in [-0.3,0]$ $EX({\tilde{s}_{1}})=0.0711$, $EX({\tilde{s}_{2}})=0.0710$, $EX({\tilde{s}_{3}})=0.0796$, $EX({\tilde{s}_{4}})=0.0788$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I\in [-0.1,0]$ $EX({\tilde{s}_{1}})=0.07211$, $EX({\tilde{s}_{2}})=0.07210$, $EX({\tilde{s}_{3}})=0.081$, $EX({\tilde{s}_{4}})=0.08$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I=0$ $EX({\tilde{s}_{1}})=0.072632$, $EX({\tilde{s}_{2}})=0.072631$, $EX({\tilde{s}_{3}})=0.0816$, $EX({\tilde{s}_{4}})=0.0812$ ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$I\in [0,0.1]$ $EX({\tilde{s}_{1}})=0.07315$, $EX({\tilde{s}_{2}})=0.07316$, $EX({\tilde{s}_{3}})=0.0823$, $EX({\tilde{s}_{4}})=0.0820$ ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$
$I\in [0,0.3]$ $EX({\tilde{s}_{1}})=0.07419$, $EX({\tilde{s}_{2}})=0.07422$, $EX({\tilde{s}_{3}})=0.0837$, $EX({\tilde{s}_{4}})=0.0836$ ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$
$I\in [0,0.5]$ $EX({\tilde{s}_{1}})=0.0752$, $EX({\tilde{s}_{2}})=0.0753$, $EX({\tilde{s}_{3}})=0.0850$, $EX({\tilde{s}_{4}})=0.0852$ ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$
$I\in [0,0.7]$ $EX({\tilde{s}_{1}})=0.07625$, $EX({\tilde{s}_{2}})=0.07634$, $EX({\tilde{s}_{3}})=0.0864$, $EX({\tilde{s}_{4}})=0.0869$ ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$
Table 3
Ranking results by utilizing the different k.
k $EX({\tilde{s}_{1}})$ $EX({\tilde{s}_{2}})$ $EX({\tilde{s}_{3}})$ $EX({\tilde{s}_{4}})$ Ranking
$k=1$ 0.07485 0.07421 0.08379 0.08268 ${R_{3}}\succ {R_{4}}\succ {R_{1}}\succ {R_{2}}$
$k=2$ 0.07315 0.07316 0.08231 0.08200 ${R_{3}}\succ {R_{4}}\succ {R_{2}}\succ {R_{1}}$
$k=3$ 0.07180 0.07231 0.08108 0.08125 ${R_{4}}\succ {R_{3}}\succ {R_{2}}\succ {R_{1}}$
Theorem 1.
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ be NLNs, where ${\tilde{s}_{i}}={s_{{u_{i}}+{v_{i}}I}}$ $(i=1,2,\dots ,n)$, then the aggregated result of NLNs ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots $ , and ${\tilde{s}_{n}}$ can be denoted as
(23)
\[\begin{array}{l}\displaystyle {\mathit{NLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\\ {} \displaystyle \hspace{1em}={s_{{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}({u_{{i_{j}}}}+{v_{{i_{j}}}})}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}\Big)I}}.\end{array}\]
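As a computational check of eq. (23), the following is a minimal Python sketch, assuming an NLN ${s_{u+vI}}$ is represented as a pair (u, v); it is an illustration, not the authors' implementation.

from itertools import combinations
from math import comb, prod

def nlnmsm(nlns, k):
    # NLN Maclaurin symmetric mean, eq. (23).
    # nlns: list of (u, v) pairs representing s_{u + v*I}.
    # Returns (U, V) such that the aggregated NLN is s_{U + V*I}.
    c = comb(len(nlns), k)
    # Mean over all k-subsets of the products of the determinate parts u.
    mu = sum(prod(u for u, _ in sub) for sub in combinations(nlns, k)) / c
    # The same mean with u + v in place of u.
    muv = sum(prod(u + v for u, v in sub) for sub in combinations(nlns, k)) / c
    U = mu ** (1 / k)
    return U, muv ** (1 / k) - U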
Theorem 2.
Let ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}}$ be NLNs, where ${\tilde{s}_{i}}={s_{{u_{i}}+{v_{i}}I}}$, and let $w={({w_{1}},{w_{2}},\dots ,{w_{n}})^{T}}$ be the weight vector of ${\tilde{s}_{i}}$ satisfying ${w_{i}}\in [0,1]$ $(i=1,2,\dots ,n)$ and ${\textstyle\sum _{i=1}^{n}}{w_{i}}=1$. The aggregated result of the NLNs ${\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}}$ obtained by the WNLNMSM operator in (31) is also an NLN, shown as follows:
(32)
\[\begin{array}{l}\displaystyle {\mathit{WNLNMSM}^{(k)}}({\tilde{s}_{1}},{\tilde{s}_{2}},\dots ,{\tilde{s}_{n}})\\ {} \displaystyle \hspace{1em}={s_{{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{w_{{i_{j}}}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}+\Big({\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}({w_{{i_{j}}}}{u_{{i_{j}}}}+{w_{{i_{j}}}}{v_{{i_{j}}}})}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}-{\Big(\frac{\textstyle\sum \limits_{1\leqslant {i_{1}}<\cdots <{i_{k}}\leqslant n}{\textstyle\prod \limits_{j=1}^{k}}{w_{{i_{j}}}}{u_{{i_{j}}}}}{{C_{n}^{k}}}\Big)^{\frac{1}{k}}}\Big)I}}.\end{array}\]
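A matching sketch of eq. (32) under the same assumed (u, v) representation; the NLNs and the weight vector in the usage lines are hypothetical.

from itertools import combinations
from math import comb, prod

def wnlnmsm(nlns, weights, k):
    # Weighted NLN MSM, eq. (32): the weights enter through the products
    # of w_i * u_i and of w_i * (u_i + v_i) over each k-subset.
    c = comb(len(nlns), k)
    wuv = [(w * u, w * v) for (u, v), w in zip(nlns, weights)]
    mu = sum(prod(a for a, _ in sub) for sub in combinations(wuv, k)) / c
    muv = sum(prod(a + b for a, b in sub) for sub in combinations(wuv, k)) / c
    U = mu ** (1 / k)
    return U, muv ** (1 / k) - U

# Hypothetical usage: three NLNs s_{u + v*I} with weights summing to 1.
U, V = wnlnmsm([(5, 0.3), (4, 0.2), (3, 0.1)], [0.35, 0.25, 0.4], k=2)
print(f"aggregate: s_({U:.3f} + {V:.3f} I)")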
