Informatica

The PROMETHEE II Method Based on Probabilistic Linguistic Information and Its Application to Decision Making
Volume 29, Issue 2 (2018), pp. 303–320
Peide Liu   Ying Li  

https://doi.org/10.15388/Informatica.2018.169
Pub. online: 1 January 2018. Type: Research Article. Open Access.

Received
1 October 2017
Accepted
1 March 2018
Published
1 January 2018

Abstract

The probabilistic linguistic term set (PLTS) can reflect the different importance degrees, or weights, of all possible linguistic terms (LTs) given by experts for a specific object. The PROMETHEE II method is an important ranking method which can comprise preferences as well as indifferences, and it has the unique characteristic of providing different types of preference functions. Based on the advantages of the PLTS and the PROMETHEE II method, in this paper we extend the PROMETHEE II method to process probabilistic linguistic information (PLI) and propose the PL-PROMETHEE II method with an improved possibility degree formula which avoids the weaknesses of the original formula. Then, for multi-attribute decision making (MADM) problems with completely unknown weight information, the maximum deviation method is used to obtain the objective weight vector of the attributes, and the net flows of the alternatives from the PROMETHEE II method are used to rank the alternatives. Finally, a numerical example is given to illustrate the feasibility of the proposed method.

1 Introduction

Due to the uncertainty and complexity of the decision environment, as well as the ambiguity of human thinking, it is often impossible to accurately describe the attribute values of MADM problems by crisp numbers (Liu, 2017; Li et al., 2014; Liu and Su, 2012; Liu and Chen, 2017; Liu and Li, 2017; Liu et al., 2016b, 2017a, 2017b; Ye, 2017); they can, however, be depicted more conveniently by linguistic terms (LTs). For example, when the risk of an investment object is evaluated, decision makers (DMs) are more likely to use “high”, “medium”, “low” and other similar LTs to express their assessment results (Dong et al., 2015; Wu et al., 2015), i.e. LTs are more consistent with people’s habits of thinking. In order to use LTs scientifically, Zadeh (1975a, 1975b, 1975c) first proposed the linguistic variable (LV), which provided the foundation for linguistic MADM (LMADM). Further, LVs have been extended to new types for different kinds of fuzzy information, such as intuitionistic linguistic sets and intuitionistic linguistic numbers (Liu and Wang, 2017; Wang and Li, 2009), intuitionistic uncertain LVs (IULVs) (Liu and Jin, 2012), interval-valued IULVs (Liu, 2013), 2-dimension uncertain LVs (Liu et al., 2016a; Liu and Teng, 2016), neutrosophic uncertain LVs (Liu and Shi, 2017; Liu and Tang, 2016), and so on.
There are now many linguistic models for processing linguistic information, such as LVs, granular LTs (Cabrerizo et al., 2014) and unbalanced LTs (Cabrerizo et al., 2017). Obviously, these models can express the preferences or judgments of DMs by only one LT. In practice, however, the DMs may hesitate among several possible LTs. To deal with such situations, Rodriguez et al. (2012) proposed hesitant fuzzy LTSs (HFLTSs). An HFLTS consists of some possible LTs provided by the DMs, and all of these terms have equal importance degrees or weights. However, HFLTSs have prompted some new thinking about whether all the LTs really have the same importance degree, and if not, how to describe them. Based on this question, Pang et al. (2016) extended HFLTSs to a more general concept, named probabilistic LTSs (PLTSs). PLTSs allow the DMs to provide more than one LT with a probability, which can express the importance degrees or weights of all the possible evaluation values. In PLTSs, LTs can be expressed in multi-granular (Cabrerizo et al., 2014) or unbalanced linguistic form (Cabrerizo et al., 2017). This improves the flexibility of expressing linguistic information, so, compared with other linguistic representations, PLTSs are more suitable for solving practical problems. At present, there are some studies on PLTSs: for example, Gou and Xu (2016) proposed some novel operational laws for PLTSs, Bai et al. (2017) proposed a possibility degree formula for PLTSs, Pang et al. (2016) extended the TOPSIS method to PLTSs, and Zhang and Xing (2017) extended the VIKOR method to PLTSs.
There are many traditional MADM methods, such as the TOPSIS method (Pang et al., 2016), the VIKOR method (Tan et al., 2016; Zhang et al., 2017), the TODIM method (Gomes and Lima, 1991; Liu et al., 2017a; Wang and Liu, 2017), the ELECTRE method (Greco et al., 2011; You et al., 2016) and the PROMETHEE method (Brans and Vincke, 1985), as well as other well-known decision models like the consensus model (Chiclana et al., 2013; Morente-Molinera et al., 2017) and the Grey Additive Ratio Assessment Method (Turskis and Zavadskas, 2010). Each has its advantages. The TOPSIS method (Pang et al., 2016) determines the best solution as the one with the shortest distance to the ideal solution and the farthest distance to the negative-ideal solution; the VIKOR method (Tan et al., 2016; Zhao et al., 2017) can give compromise alternatives based on conflicting criteria; the TODIM method (Gomes and Lima, 1991; Liu et al., 2017b; Wang and Liu, 2017) can consider the DMs’ bounded rationality; and the consensus model (Chiclana et al., 2013; Morente-Molinera et al., 2017) is a dynamic and iterative group discussion process. These methods have been extended to many linguistic environments (Li et al., 2017), such as multi-granular LVs (Cabrerizo et al., 2014; Morente-Molinera et al., 2017), unbalanced LVs (Cabrerizo et al., 2017) and discrete LVs (Massanet et al., 2014). The PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) method was first proposed by Brans and Vincke (1985) in the 1980s; its advantage is that it is simpler than other ranking methods and it gives a total ranking which can comprise preferences as well as indifferences. In general, it includes the PROMETHEE I and PROMETHEE II methods. The PROMETHEE I method gives a partial ranking of the alternatives, allowing for possible incomparability, while the PROMETHEE II method provides a total ranking based on the net flow. PROMETHEE II is therefore preferable to PROMETHEE I because it leaves no incomparability even when the comparison is difficult.
The PROMETHEE II method has now been applied in many areas: for example, Senvar et al. (2014) applied it to multiple criteria supplier selection, and Milica and Milena (2016) applied it to hotel energy performance comparison. In addition, studies on the PROMETHEE II method with different preference functions have also gained great attention (Chen et al., 2011; Li et al., 2012), and Tan et al. (2016) extended the PROMETHEE II method to the hesitant fuzzy linguistic (HFL) environment. However, the existing PROMETHEE II methods cannot process PLTSs.
The possibility degree formula for PLTSs was first proposed by Bai et al. (2017); its aim is to rank PLTSs in a more suitable and accurate way. It resolves some weaknesses that exist in other ranking methods and, to some extent, reduces the loss of information. In practical experiments, however, we find that this possibility degree formula also has some weaknesses. The first is that when a PLTS has only one LT, the formula cannot produce the correct result. The second is that when two PLTSs have the same upper and lower limits (with the same probabilities), the formula ignores the remaining data. So, in this paper, motivated by Tan et al. (2016), we first propose an improved possibility degree formula for PLTSs, and then extend the PROMETHEE II method to the PL environment, using the improved possibility degree formula in the preference function.
As discussed above, PLTSs allow DMs to express their preferences over several LTs with different probabilities, and the PROMETHEE II method is a simple and flexible total ranking method whose flexibility is manifested in its use of different preference functions. Here, we propose the improved possibility degree formula and apply it to the preference function. In addition, in order to obtain objective attribute weights, we use the maximum deviation method. It is therefore meaningful and necessary to extend the PROMETHEE II method to the PLI environment. Motivated by this idea, the goals and contributions of this paper are: (1) to extend the PROMETHEE II method to PLI and propose the PL-PROMETHEE II method; (2) to propose the improved possibility degree formula; (3) to develop a weight determination method based on PL distance measures; (4) to show the feasibility and advantages of the proposed methods.
To achieve the above goals, the remainder of this paper is organized as follows. Section 2 gives some basic concepts of PLI and the PROMETHEE II method, and proposes the improved possibility degree formula. In Section 3, we extend the PROMETHEE II method to the PLI environment (PL-PROMETHEE II). In Section 4, we give an example to illustrate the effectiveness of the proposed method. In Section 5, we give the conclusions and directions for future studies.

2 Preliminaries

In this part, we introduce some basic concepts to help readers understand this study.

2.1 PLTS

Definition 1 (See Pang et al., 2016).
Let ${S_{1}}=\{{s_{\alpha }}\mid \alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$ be an LTS; a PLTS is defined as:
(1)
\[ \mathit{LS}(p)=\bigg\{{\mathit{LS}^{(k)}}({p^{(k)}})\hspace{0.1667em}\big|\hspace{0.1667em}{\mathit{LS}^{(k)}}\in {S_{1}},{p^{(k)}}\geqslant 0,\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p),{\sum \limits_{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}\leqslant 1\bigg\},\]
where ${\mathit{LS}^{(k)}}({p^{(k)}})$ represents the LT ${\mathit{LS}^{(k)}}$ with the probability ${p^{(k)}}$, and $\mathrm{\# }\mathit{LS}(p)$ is the number of all different LTs in $\mathit{LS}(p)$.

2.2 Normalization of PLTS

Definition 2 (See Pang et al., 2016).
Given a PLTS $\mathit{LS}(p)$ with ${\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}<1$, the associated PLTS $\dot{L}S(p)=\{{\mathit{LS}^{(k)}}({\dot{p}^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$ is called the normalized PLTS, where ${\dot{p}^{(k)}}={p^{(k)}}/{\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}$ for all $k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)$. Obviously, in the PLTS $\dot{L}S(p)$, we have ${\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{\dot{p}^{(k)}}=1$.
Based on Definition 2, the probabilities of all LTs are normalized.
Definition 3 (See Pang et al., 2016).
For any two PLTSs ${\mathit{LS}_{1}}(p)=\{{\mathit{LS}_{1}^{(k)}}({p_{1}^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }{\mathit{LS}_{1}}(p)\}$ and ${\mathit{LS}_{2}}(p)=\{{\mathit{LS}_{2}^{(k)}}({p_{2}^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }{\mathit{LS}_{2}}(p)\}$, suppose $\mathrm{\# }{\mathit{LS}_{1}}(p)$ and $\mathrm{\# }{\mathit{LS}_{2}}(p)$ are the numbers of LTs in ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$ respectively. If $\mathrm{\# }{\mathit{LS}_{1}}(p)>\mathrm{\# }{\mathit{LS}_{2}}(p)$, then $\mathrm{\# }{\mathit{LS}_{1}}(p)-\mathrm{\# }{\mathit{LS}_{2}}(p)$ LTs will be added to ${\mathit{LS}_{2}}(p)$ so that the numbers of their LTs are equal. The added LTs are the smallest ones in ${\mathit{LS}_{2}}(p)$, and their probabilities are zero. About the detailed information, please refer to reference (Pang et al., 2016).
By Definitions 2 and 3, the normalized PLTS (NPLTS) are obtained, which are denoted as ${\mathit{LS}^{N}}(p)=\{{\mathit{LS}^{N(k)}}({p^{N(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$, where ${p^{N(k)}}={p^{(k)}}/{\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}$ for all $k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)$.
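As a concrete illustration, Definitions 2 and 3 can be sketched in a few lines of Python, representing a PLTS as a list of (subscript, probability) pairs; this representation and the function names are our own convention, not code from the paper:

```python
def normalize_probs(plts):
    """Definition 2: rescale the probabilities so that they sum to 1."""
    total = sum(p for _, p in plts)
    return [(r, p / total) for r, p in plts]

def pad_to(plts, n):
    """Definition 3: prepend copies of the smallest linguistic term
    with probability 0 until the PLTS has n elements."""
    smallest = min(r for r, _ in plts)
    return [(smallest, 0.0)] * (n - len(plts)) + list(plts)
```

For example, padding $\{{s_{0}}(0.4),{s_{1}}(0.6)\}$ to four elements yields $\{{s_{0}}(0),{s_{0}}(0),{s_{0}}(0.4),{s_{1}}(0.6)\}$, matching the normalization used in the numerical example of Section 4.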

2.3 Comparison Between PLTSs

To compare the PLTSs, Pang et al. (2016) presented the concept of the score of PLTSs:
Definition 4 (See Pang et al., 2016).
Suppose $\mathit{LS}(p)=\{{\mathit{LS}^{(k)}}({p^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$ is a PLTS, and ${r^{(k)}}$ is the subscript of the LT ${\mathit{LS}^{(k)}}$. Then the score function $E(\mathit{LS}(p))$ of $\mathit{LS}(p)$ is given by
(2)
\[ E\big(\mathit{LS}(p)\big)={s_{\bar{\alpha }}},\hspace{1em}\text{where}\hspace{2.5pt}\bar{\alpha }={\sum \limits_{k=1}^{\mathrm{\# }\mathit{LS}(p)}}\big({r^{(k)}}{p^{(k)}}\big)\Big/{\sum \limits_{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}.\]
Obviously, the score function represents the average value of all LTs of a PLTS. Generally, for a given PLTS $\mathit{LS}(p)$, $E(\mathit{LS}(p))$ is an extended LT.
Based on the score function of the PLTS, we define the following relationship between two PLTSs:
Definition 5 (See Pang et al., 2016).
For any two given PLTSs ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$, if $E({\mathit{LS}_{1}}(p))>E({\mathit{LS}_{2}}(p))$, then the PLTS ${\mathit{LS}_{1}}(p)$ is greater than ${\mathit{LS}_{2}}(p)$.
However, when $E({\mathit{LS}_{1}}(p))=E({\mathit{LS}_{2}}(p))$, we cannot compare the PLTSs ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$. Further, in order to process this situation, we can give the following definition.
Definition 6 (See Pang et al., 2016).
For a given PLTS $\mathit{LS}(p)=\{{\mathit{LS}^{(k)}}({p^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$, suppose ${r^{(k)}}$ is the subscript of the LT ${\mathit{LS}^{(k)}}$, and $E(\mathit{LS}(p))={s_{\bar{\alpha }}}$, where $\bar{\alpha }={\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}({r^{(k)}}{p^{(k)}})/{\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}$. Then the deviation degree of $\mathit{LS}(p)$ is:
(3)
\[ \sigma {\big(\mathit{LS}(p)\big)=\bigg({\sum \limits_{k=1}^{\mathrm{\# }\mathit{LS}(p)}}\big({p^{(k)}}{\big({r^{(k)}}-\bar{\alpha }\big)\big)^{2}}\bigg)^{\frac{1}{2}}}\Big/{\sum \limits_{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}.\]
For any two PLTSs ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$ with $E({\mathit{LS}_{1}}(p))=E({\mathit{LS}_{2}}(p))$, if $\sigma ({\mathit{LS}_{1}}(p))>\sigma ({\mathit{LS}_{2}}(p))$, then ${\mathit{LS}_{1}}(p)<{\mathit{LS}_{2}}(p)$; if $\sigma ({\mathit{LS}_{1}}(p))=\sigma ({\mathit{LS}_{2}}(p))$, then ${\mathit{LS}_{1}}(p)$ is indifferent to ${\mathit{LS}_{2}}(p)$, denoted by ${\mathit{LS}_{1}}(p)\approx {\mathit{LS}_{2}}(p)$.
Definition 7 (See Pang et al., 2016).
For any two given PLTSs ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$, suppose $\tilde{L}{S_{1}}(p)$ and $\tilde{L}{S_{2}}(p)$ are the corresponding normalized PLTSs respectively. Then
  • (1) If $E(\tilde{L}{S_{1}}(p))>E(\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p)$.
  • (2) If $E(\tilde{L}{S_{1}}(p))<E(\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)<{\mathit{LS}_{2}}(p)$.
  • (3) If $E(\tilde{L}{S_{1}}(p))=E(\tilde{L}{S_{2}}(p))$, then
    • (i) If $\sigma (\tilde{L}{S_{1}}(p))>\sigma (\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)<{\mathit{LS}_{2}}(p)$.
    • (ii) If $\sigma (\tilde{L}{S_{1}}(p))=\sigma (\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)\approx {\mathit{LS}_{2}}(p)$.
    • (iii) If $\sigma (\tilde{L}{S_{1}}(p))<\sigma (\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p)$.
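The comparison laws of Definitions 4–7 can be sketched as follows (our own Python helper, with PLTSs given as (subscript, probability) pairs):

```python
import math

def score(plts):
    """Eq. (2): probability-weighted average subscript."""
    total_p = sum(p for _, p in plts)
    return sum(r * p for r, p in plts) / total_p

def deviation(plts):
    """Eq. (3): deviation degree around the score."""
    mean = score(plts)
    total_p = sum(p for _, p in plts)
    return math.sqrt(sum((p * (r - mean)) ** 2 for r, p in plts)) / total_p

def compare(ls1, ls2):
    """Definition 7: 1 if ls1 > ls2, -1 if ls1 < ls2, 0 if indifferent."""
    e1, e2 = score(ls1), score(ls2)
    if e1 != e2:
        return 1 if e1 > e2 else -1
    s1, s2 = deviation(ls1), deviation(ls2)
    if s1 == s2:
        return 0
    return 1 if s1 < s2 else -1  # the smaller deviation ranks higher
```

For instance, $\{{s_{2}}(0.3),{s_{3}}(0.7)\}$ has score subscript $2\times 0.3+3\times 0.7=2.7$ and is therefore ranked above $\{{s_{1}}(1)\}$.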

2.4 Ranking PLTSs by Possibility Degree

The possibility degree formula for PLTSs was first proposed by Bai et al. (2017); its aim is to rank PLTSs in a more suitable and accurate way. It can resolve some weaknesses of other ranking methods and, to some extent, reduce the loss of information.
Definition 8 (See Bai et al., 2017).
Let $\mathit{LS}(p)=\{{\mathit{LS}^{(k)}}({p^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$ be a PLTS, and let ${r^{(k)}}$ be the subscript of the LT ${\mathit{LS}^{(k)}}$. Let ${\mathit{LS}^{-}}=\min ({r^{(k)}})$ and ${\mathit{LS}^{+}}=\max ({r^{(k)}})$ be the lower bound and the upper bound of $\mathit{LS}(p)$, respectively. Then $a{(\mathit{LS})^{-}}$ is the lower area and $a{(\mathit{LS})^{+}}$ is the upper area, where $a{(\mathit{LS})^{-}}$ is the product of ${\mathit{LS}^{-}}$ and the probability of the smallest LT, and $a{(\mathit{LS})^{+}}$ is the product of ${\mathit{LS}^{+}}$ and the probability of the largest LT.
Definition 9 (See Bai et al., 2017).
Let $S=\{{s_{\alpha }}\mid \alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$ be an LTS, ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$ be two PLTSs. The possibility degree of ${\mathit{LS}_{1}}(p)$ being not less than ${\mathit{LS}_{2}}(p)$ is defined as
(4)
\[\begin{array}{l}\displaystyle p({\mathit{LS}_{1}}(p)\geqslant {\mathit{LS}_{2}}(p))\\ {} \displaystyle \hspace{1em}=0.5\times \bigg(1+\frac{(a{({\mathit{LS}_{1}})^{-}}-a{({\mathit{LS}_{2}})^{-}})+(a{({\mathit{LS}_{1}})^{+}}-a{({\mathit{LS}_{2}})^{+}})}{|a{({\mathit{LS}_{1}})^{-}}-a{({\mathit{LS}_{2}})^{-}}|+|a{({\mathit{LS}_{1}})^{+}}-a{({\mathit{LS}_{2}})^{+}}|+a({\mathit{LS}_{1}}\cap {\mathit{LS}_{2}})}\bigg)\end{array}\]
where $a({\mathit{LS}_{1}}\cap {\mathit{LS}_{2}})$ represents the area of the intersection between ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$.
Definition 10 (See Bai et al., 2017).
If $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))>p({\mathit{LS}_{2}}(p)>{\mathit{LS}_{1}}(p))$, then ${\mathit{LS}_{1}}(p)$ is superior to ${\mathit{LS}_{2}}(p)$ with the degree of $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))$, denoted by ${\mathit{LS}_{1}}(p){\succ ^{p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))}}{\mathit{LS}_{2}}(p)$; if $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=1$, then ${\mathit{LS}_{1}}(p)$ is absolutely superior to ${\mathit{LS}_{2}}(p)$; if $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=0.5$, then ${\mathit{LS}_{1}}(p)$ is indifferent to ${\mathit{LS}_{2}}(p)$, denoted by ${\mathit{LS}_{1}}(p)={\mathit{LS}_{2}}(p)$.
But in the practical experiments, we find this possibility degree formula also has some weaknesses.
The first one is that when a PLTS has only one LT, this formula cannot give the correct result. For example, consider the two PLTSs ${\mathit{LS}_{1}}(p)=\{{s_{2}}(0.3),{s_{3}}(0.7)\}$ and ${\mathit{LS}_{2}}(p)=\{{s_{1}}(1)\}$. Clearly ${\mathit{LS}_{1}}(p)$ is absolutely superior to ${\mathit{LS}_{2}}(p)$, so we should get $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=1$. However, according to Eq. (4), we have
\[ p\big({\mathit{LS}_{1}}(p)\geqslant {\mathit{LS}_{2}}(p)\big)=0.5\times \bigg(1+\frac{(0.6-1)+(2.1-1)}{|0.6-1|+|2.1-1|+0}\bigg)=\frac{11}{15}.\]
Obviously, this result is counterintuitive.
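This check can be scripted. Below is a minimal sketch of Eq. (4) in Python (our own code; the intersection area is passed in explicitly and is 0 here because the two PLTSs share no terms):

```python
def possibility_bai(ls1, ls2, inter_area):
    """Possibility degree of Bai et al. (2017), Eq. (4).
    A PLTS is a list of (subscript, probability) pairs."""
    def bound_areas(ls):
        lo = min(ls, key=lambda t: t[0])   # smallest term and its probability
        hi = max(ls, key=lambda t: t[0])   # largest term and its probability
        return lo[0] * lo[1], hi[0] * hi[1]
    a1_lo, a1_hi = bound_areas(ls1)
    a2_lo, a2_hi = bound_areas(ls2)
    num = (a1_lo - a2_lo) + (a1_hi - a2_hi)
    den = abs(a1_lo - a2_lo) + abs(a1_hi - a2_hi) + inter_area
    return 0.5 * (1 + num / den)

# LS1 = {s2(0.3), s3(0.7)} is absolutely superior to LS2 = {s1(1)},
# yet Eq. (4) yields 11/15 instead of 1
p = possibility_bai([(2, 0.3), (3, 0.7)], [(1, 1.0)], inter_area=0.0)
```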
The second one is that when two PLTSs have the same upper limit and lower limit (with the same probabilities), this formula loses the data information. For example, for the two PLTSs ${\mathit{LS}_{3}}(p)=\{{s_{-1}}(0.3),{s_{2}}(0.3),{s_{3}}(0.4)\}$ and ${\mathit{LS}_{4}}(p)=\{{s_{-1}}(0.3),{s_{0}}(0.3),{s_{3}}(0.4)\}$, according to Eq. (4) we get the possibility degree $p({\mathit{LS}_{3}}(p)\geqslant {\mathit{LS}_{4}}(p))=0.5$; but intuitively ${\mathit{LS}_{3}}(p)$ is superior to ${\mathit{LS}_{4}}(p)$, because the middle elements of the two PLTSs are different and the ranking results should therefore also be different.
Based on the above two aspects, we developed the improved possibility degree formula to overcome these defects.
We first distinguish two conditions, according to whether the two PLTSs have common PLEs.
Condition 1.
All the probabilistic linguistic elements (PLEs) of one PLTS have subscripts smaller (bigger) than those of the other.
Under this condition, we can judge the possibility degree directly: if all the subscripts of the PLEs in ${\mathit{LS}_{1}}(p)$ are bigger than those in ${\mathit{LS}_{2}}(p)$, then $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=1$; if all the subscripts of the PLEs in ${\mathit{LS}_{1}}(p)$ are smaller than those in ${\mathit{LS}_{2}}(p)$, then $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=0$.
Condition 2.
When the two PLTSs have common PLEs, we can use the following formula to calculate the possibility degree:
(5)
\[\begin{array}{l}\displaystyle p\big({\mathit{LS}_{1}}(p)\geqslant {\mathit{LS}_{2}}(p)\big)\\ {} \displaystyle \hspace{1em}=0.5\times \Bigg(1+\frac{{\textstyle\textstyle\sum _{k=1}^{\mathrm{\# }{\mathit{LS}_{1}}}}(a{({\mathit{LS}_{1}})^{(k)}}-a{({\mathit{LS}_{2}})^{(k)}})}{{\textstyle\textstyle\sum _{k=1}^{\mathrm{\# }{\mathit{LS}_{1}}}}|(a{({\mathit{LS}_{1}})^{(k)}}-a{({\mathit{LS}_{2}})^{(k)}})|+a({\mathit{LS}_{1}}\cap {\mathit{LS}_{2}})}\Bigg).\end{array}\]
To ensure that the two PLTSs have the same number of PLEs, they should first be normalized. In the above formula, $\mathrm{\# }{\mathit{LS}_{1}}$ and $\mathrm{\# }{\mathit{LS}_{2}}$ are the numbers of all LTs in ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$, and $\mathrm{\# }{\mathit{LS}_{1}}$ and $\mathrm{\# }{\mathit{LS}_{2}}$ are equal.
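A sketch of the improved formula on normalized PLTSs of equal length follows (our own code). Since this sketch does not reconstruct the computation of $a({\mathit{LS}_{1}}\cap {\mathit{LS}_{2}})$, the intersection area is kept as an explicit argument, and the value 0.9 used in the second call below is purely illustrative:

```python
def possibility_improved(ls1, ls2, inter_area=0.0):
    """Improved possibility degree, Eq. (5). Both PLTSs must already be
    normalized to the same length; a PLTS is a list of
    (subscript, probability) pairs."""
    # Condition 1: one PLTS lies entirely above the other
    if min(r for r, _ in ls1) > max(r for r, _ in ls2):
        return 1.0
    if max(r for r, _ in ls1) < min(r for r, _ in ls2):
        return 0.0
    # Condition 2: element-wise areas r^(k) * p^(k), Eq. (5)
    a1 = [r * p for r, p in ls1]
    a2 = [r * p for r, p in ls2]
    num = sum(x - y for x, y in zip(a1, a2))
    den = sum(abs(x - y) for x, y in zip(a1, a2)) + inter_area
    return 0.5 * (1 + num / den)

# First counterexample: Condition 1 now gives p = 1 as expected
p1 = possibility_improved([(2, 0.3), (3, 0.7)], [(1, 0.0), (1, 1.0)])
# Second counterexample: LS3 is now reported superior to LS4
p2 = possibility_improved([(-1, 0.3), (2, 0.3), (3, 0.4)],
                          [(-1, 0.3), (0, 0.3), (3, 0.4)], inter_area=0.9)
```

Both weaknesses discussed above are resolved: the first pair yields 1, and the second pair yields a value above 0.5 for any nonnegative intersection area.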

2.5 PROMETHEE II Method

The PROMETHEE method is a multicriteria analysis method proposed by Brans and Vincke (1985), and it includes PROMETHEE I and PROMETHEE II.
In the PROMETHEE I method, the solution set is sorted by the positive flow and the negative flow, which yields only a partial ranking. The PROMETHEE II method obtains a complete ranking through the net flow.
Let $M=\{1,2,\dots ,m\}$ and $N=\{1,2,\dots ,n\}$, and suppose the decision matrix $X={[{x_{ij}}]_{m\times n}}$, where ${x_{ij}}$ is the j-th attribute value with respect to the i-th alternative, and then normalize $X={[{x_{ij}}]_{m\times n}}$ into $\tilde{X}={[{\tilde{x}_{ij}}]_{m\times n}}$, ${x_{ij}}$ and ${\tilde{x}_{ij}}$ are all crisp numbers, $i\in M$, $j\in N$.
The procedure of the PROMETHEE II method is as follows:
  • Step 1. Determine the weight ${w_{j}}$ of the attribute ${C_{j}}$.
  • Step 2. Utilize a preference function ${p_{j}}({x_{ij}})$ for each attribute ${C_{j}}$. (Select the preference function according to the actual problem.)
  • Step 3. Calculate the multicriterion preference index of the alternative ${x_{i}}$ over the alternative ${x_{k}}$ $(k=1,\dots ,m)$ by using the following expression:
    (6)
    \[ \prod ({x_{i}},{x_{k}})={\sum \limits_{j=1}^{n}}{\omega _{j}}{P_{j}}({\tilde{x}_{ij}},{\tilde{x}_{kj}}).\]
  • Step 4. Calculate the positive flow and negative flow of each alternative:
    (7)
    \[ {\varphi ^{+}}({x_{i}})={\sum \limits_{k=1}^{m}}\prod ({x_{i}},{x_{k}})={\sum \limits_{k=1}^{m}}{\sum \limits_{j=1}^{n}}{\omega _{j}}{P_{j}}({\tilde{x}_{ij}},{\tilde{x}_{kj}}),\]
    (8)
    \[ {\varphi ^{-}}({x_{i}})={\sum \limits_{k=1}^{m}}\prod ({x_{k}},{x_{i}})={\sum \limits_{k=1}^{m}}{\sum \limits_{j=1}^{n}}{\omega _{j}}{P_{j}}({\tilde{x}_{kj}},{\tilde{x}_{ij}}).\]
  • Step 5. Calculate the net flow of the alternative:
    (9)
    \[ \varphi ({x_{i}})={\varphi ^{+}}({x_{i}})-{\varphi ^{-}}({x_{i}}).\]
  • Step 6. According to the value of $\varphi ({x_{i}})$, rank all the alternatives.
    \[\begin{array}{l}\displaystyle \text{If}\hspace{2.5pt}\varphi ({x_{i}})>\varphi ({x_{k}}),\hspace{1em}\text{then}\hspace{2.5pt}{x_{i}}\succ {x_{k}},\\ {} \displaystyle \text{If}\hspace{2.5pt}\varphi ({x_{i}})=\varphi ({x_{k}}),\hspace{1em}\text{then}\hspace{2.5pt}{x_{i}}\approx {x_{k}}.\end{array}\]
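For crisp data, Steps 1–6 condense into a short sketch (our own Python code; the step-type preference function used here is just one of the possible choices mentioned in Step 2):

```python
def promethee_ii(X, w, pref=lambda a, b: 1.0 if a > b else 0.0):
    """Rank alternatives (rows of X) by PROMETHEE II net flows.
    X: m x n matrix of normalized crisp attribute values,
    w: attribute weights summing to 1, pref: preference function P_j."""
    m, n = len(X), len(X[0])
    # Eq. (6): preference index of alternative i over alternative k
    pi = [[sum(w[j] * pref(X[i][j], X[k][j]) for j in range(n))
           for k in range(m)] for i in range(m)]
    # Eqs. (7)-(9): net flow = positive flow - negative flow
    return [sum(pi[i]) - sum(pi[k][i] for k in range(m)) for i in range(m)]
```

For a toy matrix where the first alternative dominates on both attributes, e.g. `promethee_ii([[0.9, 0.8], [0.4, 0.6]], [0.5, 0.5])`, the net flows are `[1.0, -1.0]`, so the first alternative is ranked first.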

3 PL-PROMETHEE II Method

3.1 Determining Weights Based on the Maximum Deviation Method

In a MADM problem, the weights reflect the importance of each attribute. There are many methods for determining attribute weights, such as the expert opinion survey method and the AHP method, but these methods involve subjective factors. In order to avoid the influence of subjective factors, we use the maximum deviation method to calculate the objective weights of the attributes.
In the maximum deviation method we use a distance to represent the deviation. Lin and Xu (2017) provided a series of probabilistic linguistic (PL) distance measures; here we use the normalized Hamming distance measure, and based on it we form a systematic weight calculation method.
The steps are shown as follows:
  • (1) Normalize the PL decision matrix $R={[\mathit{LS}{(p)_{ij}^{}}]_{m\times n}}$;
  • (2) According to the PL distance measure, calculate the pairwise distances between the $\mathit{LS}{(p)_{ij}}$:
(10)
\[ d\big({\mathit{LS}_{1}^{(k1)}}\big({p_{1}^{k1}}\big),{\mathit{LS}_{2}^{(k2)}}\big({p_{2}^{k2}}\big)\big)=\bigg|{p_{1}^{k1}}\times \frac{I({\mathit{LS}_{1}^{(k1)}})}{\tau }-{p_{2}^{k2}}\times \frac{I({\mathit{LS}_{2}^{(k2)}})}{\tau }\bigg|,\]
(11)
\[ {d_{nhd}}\big({\mathit{LS}_{1}}(p),{\mathit{LS}_{2}}(p)\big)=\frac{1}{\mathrm{\# }{\mathit{LS}_{1}}(p)}{\sum \limits_{k=1}^{\mathrm{\# }{\mathit{LS}_{1}}(p)}}d\big({\mathit{LS}_{1}^{(k1)}}\big({p_{1}^{k1}}\big),{\mathit{LS}_{2}^{(k2)}}\big({p_{2}^{k2}}\big)\big),\]
where ${\mathit{LS}_{1}^{(k1)}}({p_{1}^{k1}})\in {\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}^{(k2)}}({p_{2}^{k2}})\in {\mathit{LS}_{2}}(p)$ are two PLEs, and $I({\mathit{LS}_{1}^{(k1)}})$ and $I({\mathit{LS}_{2}^{(k2)}})$ are the subscripts of the linguistic terms ${\mathit{LS}_{1}^{(k1)}}$ and ${\mathit{LS}_{2}^{(k2)}}$, respectively.
Finally, the weight of each attribute can be obtained as follows:
(12)
\[ {w_{j}}=\frac{{\textstyle\textstyle\sum _{i=1}^{m}}{\textstyle\textstyle\sum _{l=1}^{m}}d(\mathit{LS}{(p)_{ij}},\mathit{LS}{(p)_{lj}})}{{\textstyle\textstyle\sum _{j=1}^{n}}{\textstyle\textstyle\sum _{i=1}^{m}}{\textstyle\textstyle\sum _{l=1}^{m}}d(\mathit{LS}{(p)_{ij}},\mathit{LS}{(p)_{lj}})}.\]
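Eqs. (10)–(12) translate directly into code. The sketch below (our own) reproduces the weight vector $w={(0.298,0.377,0.325)^{T}}$ obtained in the numerical example of Section 4 from the normalized data of Table 2, with $\tau =3$:

```python
def d_nhd(ls1, ls2, tau):
    """Eqs. (10)-(11): normalized Hamming distance between two
    normalized PLTSs of equal length ((subscript, probability) pairs)."""
    return sum(abs(p1 * r1 / tau - p2 * r2 / tau)
               for (r1, p1), (r2, p2) in zip(ls1, ls2)) / len(ls1)

def max_deviation_weights(R, tau):
    """Eq. (12): weight of attribute j is its total pairwise deviation
    divided by the grand total over all attributes."""
    m, n = len(R), len(R[0])
    col = [sum(d_nhd(R[i][j], R[l][j], tau)
               for i in range(m) for l in range(m)) for j in range(n)]
    total = sum(col)
    return [c / total for c in col]

# Normalized decision matrix of Table 2 (four hospitals, three attributes)
R = [
    [[(0,0),(0,0),(0,.4),(1,.6)], [(2,0),(2,0),(2,0),(2,1)], [(-1,0),(-1,0),(-1,.2),(0,.8)]],
    [[(2,0),(2,0),(2,.3),(3,.7)], [(0,0),(0,0),(0,0),(0,1)], [(1,0),(1,.2),(2,.4),(3,.4)]],
    [[(1,0),(1,0),(1,0),(1,1)], [(1,0),(1,0),(1,.5),(2,.5)], [(2,0),(2,0),(2,.6),(3,.4)]],
    [[(2,0),(2,0),(2,.5),(3,.5)], [(-2,.4),(-1,.1),(0,.2),(1,.3)], [(1,0),(1,0),(1,0),(1,1)]],
]
w = max_deviation_weights(R, tau=3)  # approximately (0.298, 0.377, 0.325)
```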

3.2 A Decision Making Method Based on the PL-PROMETHEE II Method

For a MADM problem with PLI, let $A=\{{A_{1}},{A_{2}},\dots ,{A_{m}}\}$ be a finite set of alternatives, $C=\{{C_{1}},{C_{2}},\dots ,{C_{n}}\}$ be the set of attributes and $\omega ={({\omega _{1}},{\omega _{2}},\dots ,{\omega _{n}})^{T}}$ be the weight vector of attributes ${C_{j}}$ $(j=1,2,\dots ,n)$, with ${\omega _{j}}\in [0,1]$, $j=1,2,\dots ,n$ and ${\textstyle\sum _{j=1}^{n}}{\omega _{j}}=1$. Suppose that $R={[\mathit{LS}{(p)_{ij}}]_{m\times n}}$ is the decision matrix, where $\mathit{LS}{(p)_{ij}}=\{{\mathit{LS}_{ij}^{(t)}}({p_{ij}^{(t)}})\mid t=1,2,\dots ,\mathrm{\# }\mathit{LS}{(p)_{ij}}\}$ is a PLTS, which is an evaluation value of alternative ${A_{i}}$ about attribute ${C_{j}}$. Then the goal is to rank the alternatives.
Step 1. Normalize the attribute values.
In real decision making, the attribute values have two types, i.e. cost type and benefit type. In order to eliminate the difference in types, we need to convert them to the same type.
We can convert the cost type to the benefit type, and the transformed decision matrix is expressed by $R={[\mathit{LS}{(p)_{ij}}]_{m\times n}}$, where
(13)
\[ \mathit{LS}{(p)_{ij}}=\left\{\begin{array}{l@{\hskip4.0pt}l}\{{\mathit{LS}_{ij}^{(t)}}({p_{ij}^{(t)}})\mid t=1,2,\dots ,\mathrm{\# }\mathit{LS}{(p)_{ij}}\}\hspace{1em}& \text{for benefit attribute}\hspace{2.5pt}{C_{j}},\\ {} \{-{\mathit{LS}_{ij}^{(t)}}({p_{ij}^{(t)}})\mid t=1,2,\dots ,\mathrm{\# }\mathit{LS}{(p)_{ij}}\}\hspace{1em}& \text{for cost attribute}\hspace{2.5pt}{C_{j}}.\end{array}\right.\]
Then the PL decision matrix is normalized according to Definitions 2 and 3.
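Eq. (13) amounts to negating the subscripts of cost attributes; a one-line sketch under our (subscript, probability) pair convention:

```python
def to_benefit(plts, is_cost):
    """Eq. (13): negate the linguistic subscripts of a cost attribute;
    benefit attributes pass through unchanged."""
    return [(-r, p) for r, p in plts] if is_cost else list(plts)
```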
Step 2. Determine the weight ${w_{j}}$ of the attribute ${C_{j}}$ with the maximum deviation method by formulas (10)–(12).
Step 3. Calculate the multicriterion preference index of the alternative ${A_{i}}$ over the alternative ${A_{k}}$ $(k=1,\dots ,m)$ by the following expression:
(14)
\[ \prod ({A_{i}},{A_{k}})={\sum \limits_{j=1}^{n}}{\omega _{j}}{P_{j}}\big(\mathit{LS}{(p)_{ij}},\mathit{LS}{(p)_{kj}}\big).\]
Step 4. Calculate the positive flow and negative flow of each alternative:
(15)
\[ {\varphi ^{+}}({A_{i}})=\frac{1}{m}{\sum \limits_{k=1}^{m}}\prod ({A_{i}},{A_{k}})=\frac{1}{m}{\sum \limits_{k=1}^{m}}{\sum \limits_{j=1}^{n}}{\omega _{j}}{P_{j}}\big(\mathit{LS}{(p)_{ij}},\mathit{LS}{(p)_{kj}}\big),\]
(16)
\[ {\varphi ^{-}}({A_{i}})=\frac{1}{m}{\sum \limits_{k=1}^{m}}\prod ({A_{k}},{A_{i}})=\frac{1}{m}{\sum \limits_{k=1}^{m}}{\sum \limits_{j=1}^{n}}{\omega _{j}}{P_{j}}\big(\mathit{LS}{(p)_{kj}},\mathit{LS}{(p)_{ij}}\big).\]
Step 5. Calculate the net flow of each alternative:
(17)
\[ \varphi ({A_{i}})={\varphi ^{+}}({A_{i}})-{\varphi ^{-}}({A_{i}}).\]
Step 6. Rank the alternatives ${A_{i}}$ $(i=1,2,\dots ,m)$ according to the value of $\varphi ({A_{i}})$.
Step 7. End.

4 Numerical Example

Example 1.
Because of limited medical resources and the increasingly serious environmental pollution in China, it is necessary to evaluate large domestic hospitals so as to search for the optimal one with appropriate resource allocation and reasonable resource input and output. Three attributes are adopted: the hospital environmental status (${C_{1}}$); personalized diagnosis and treatment optimization (${C_{2}}$); and social resource allocation optimization (${C_{3}}$). The experts evaluate the hospitals using the LTS $S=\{{s_{-3}},{s_{-2}},{s_{-1}},{s_{0}},{s_{1}},{s_{2}},{s_{3}}\}$. The weight vector of these attributes is unknown and is determined by the maximum deviation method. Four hospitals are to be evaluated: the West China Hospital of Sichuan University (${A_{1}}$), the Huashan Hospital of Fudan University (${A_{2}}$), the Union Medical College Hospital (${A_{3}}$) and the Chinese PLA General Hospital $({A_{4}})$; the decision matrix is shown in Table 1. The goal is to select the best one.
According to Definitions 2 and 3, we can standardize the data in Table 1 into Table 2. For example, $\{{s_{0}}(0.4),{s_{1}}(0.6)\}$ has only two PLEs and needs to be extended to four, so two elements ${s_{0}}(0)$ are added. The normalized data are shown in Table 2.
Table 1
The decision matrix with PLTs for Example 1.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$
${A_{1}}$ $\{{s_{0}}(0.4),{s_{1}}(0.6)\}$ $\{{s_{2}}(1)\}$ $\{{s_{-1}}(0.2),{s_{0}}(0.8)\}$
${A_{2}}$ $\{{s_{2}}(0.3),{s_{3}}(0.7)\}$ $\{{s_{0}}(1)\}$ $\{{s_{1}}(0.2),{s_{2}}(0.4),{s_{3}}(0.4)\}$
${A_{3}}$ $\{{s_{1}}(1)\}$ $\{{s_{1}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{2}}(0.6),{s_{3}}(0.4)\}$
${A_{4}}$ $\{{s_{2}}(0.5),{s_{3}}(0.5)\}$ $\{{s_{-2}}(0.4),{s_{-1}}(0.1),{s_{0}}(0.2),{s_{1}}(0.3)\}$ $\{{s_{1}}(1)\}$
Table 2
The normalized decision matrix with PLTs for Example 1.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$
${A_{1}}$ $\{{s_{0}}(0),{s_{0}}(0),{s_{0}}(0.4),{s_{1}}(0.6)\}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(0),{s_{2}}(1)\}$ $\{{s_{-1}}(0),{s_{-1}}(0),{s_{-1}}(0.2),{s_{0}}(0.8)\}$
${A_{2}}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(0.3),{s_{3}}(0.7)\}$ $\{{s_{0}}(0),{s_{0}}(0),{s_{0}}(0),{s_{0}}(1)\}$ $\{{s_{1}}(0),{s_{1}}(0.2),{s_{2}}(0.4),{s_{3}}(0.4)\}$
${A_{3}}$ $\{{s_{1}}(0),{s_{1}}(0),{s_{1}}(0),{s_{1}}(1)\}$ $\{{s_{1}}(0),{s_{1}}(0),{s_{1}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(0.6),{s_{3}}(0.4)\}$
${A_{4}}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(0.5),{s_{3}}(0.5)\}$ $\{{s_{-2}}(0.4),{s_{-1}}(0.1),{s_{0}}(0.2),{s_{1}}(0.3)\}$ $\{{s_{1}}(0),{s_{1}}(0),{s_{1}}(0),{s_{1}}(1)\}$

4.1 The Decision Steps

Step 1: Because all attributes are of benefit type, we only need to normalize the attribute values using Definitions 2 and 3.
Step 2: Calculate the attribute weights by formulas (10)–(12); we can get
\[\begin{array}{l}\displaystyle d\big(\mathit{LS}{(p)_{i1}},\mathit{LS}{(p)_{k1}}\big)=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0\hspace{1em}& 0.175\hspace{1em}& 0.033\hspace{1em}& 0.158\\ {} 0.175\hspace{1em}& 0\hspace{1em}& 0.142\hspace{1em}& 0.083\\ {} 0.033\hspace{1em}& 0.142\hspace{1em}& 0\hspace{1em}& 0.125\\ {} 0.158\hspace{1em}& 0.083\hspace{1em}& 0.125\hspace{1em}& 0\end{array}\right],\\ {} \displaystyle d\big(\mathit{LS}{(p)_{i2}},\mathit{LS}{(p)_{k2}}\big)=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0\hspace{1em}& 0.167\hspace{1em}& 0.125\hspace{1em}& 0.217\\ {} 0.167\hspace{1em}& 0\hspace{1em}& 0.125\hspace{1em}& 0.100\\ {} 0.125\hspace{1em}& 0.125\hspace{1em}& 0\hspace{1em}& 0.175\\ {} 0.217\hspace{1em}& 0.100\hspace{1em}& 0.175\hspace{1em}& 0\end{array}\right],\\ {} \displaystyle d\big(\mathit{LS}{(p)_{i3}},\mathit{LS}{(p)_{k3}}\big)=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0\hspace{1em}& 0.200\hspace{1em}& 0.217\hspace{1em}& 0.100\\ {} 0.200\hspace{1em}& 0\hspace{1em}& 0.050\hspace{1em}& 0.100\\ {} 0.217\hspace{1em}& 0.050\hspace{1em}& 0\hspace{1em}& 0.117\\ {} 0.100\hspace{1em}& 0.100\hspace{1em}& 0.117\hspace{1em}& 0\end{array}\right].\end{array}\]
Then
\[ w={(0.298,0.377,0.325)^{T}}.\]
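This weight computation can be sketched as follows. The sketch assumes that formula (12) makes each attribute's weight proportional to the total pairwise deviation under that attribute, normalized over all attributes — an assumption consistent with the matrices and the vector $w$ above, not a restatement of formula (12) itself.

```python
# Hedged sketch of the maximum deviation weighting in Step 2: each
# attribute's weight is taken proportional to the sum of all pairwise
# distances under that attribute (an assumed reading of formula (12),
# which reproduces w = (0.298, 0.377, 0.325) up to rounding of the
# printed distances).

d1 = [[0.000, 0.175, 0.033, 0.158],
      [0.175, 0.000, 0.142, 0.083],
      [0.033, 0.142, 0.000, 0.125],
      [0.158, 0.083, 0.125, 0.000]]
d2 = [[0.000, 0.167, 0.125, 0.217],
      [0.167, 0.000, 0.125, 0.100],
      [0.125, 0.125, 0.000, 0.175],
      [0.217, 0.100, 0.175, 0.000]]
d3 = [[0.000, 0.200, 0.217, 0.100],
      [0.200, 0.000, 0.050, 0.100],
      [0.217, 0.050, 0.000, 0.117],
      [0.100, 0.100, 0.117, 0.000]]

totals = [sum(sum(row) for row in d) for d in (d1, d2, d3)]
w = [t / sum(totals) for t in totals]
print([round(x, 3) for x in w])  # close to (0.298, 0.377, 0.325)
```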
Step 3: Utilize the improved possibility degree formula (5) for each attribute ${C_{j}}$; we have
\[\begin{array}{l}\displaystyle p\big(\mathit{LS}{(p)_{i1}},\mathit{LS}{(p)_{k1}}\big)=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.500\hspace{1em}& 0\hspace{1em}& 0.115\hspace{1em}& 0\\ {} 1\hspace{1em}& 0.500\hspace{1em}& 1\hspace{1em}& 0.556\\ {} 0.885\hspace{1em}& 0\hspace{1em}& 0.500\hspace{1em}& 0\\ {} 1\hspace{1em}& 0.444\hspace{1em}& 1\hspace{1em}& 0.500\end{array}\right],\\ {} \displaystyle p\big(\mathit{LS}{(p)_{i2}},\mathit{LS}{(p)_{k2}}\big)=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.500\hspace{1em}& 1\hspace{1em}& 0.917\hspace{1em}& 1\\ {} 0\hspace{1em}& 0.500\hspace{1em}& 0\hspace{1em}& 0.944\\ {} 0.083\hspace{1em}& 1\hspace{1em}& 0.500\hspace{1em}& 0.944\\ {} 0\hspace{1em}& 0.056\hspace{1em}& 0.056\hspace{1em}& 0.500\end{array}\right],\\ {} \displaystyle p\big(\mathit{LS}{(p)_{i3}},\mathit{LS}{(p)_{k3}}\big)=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.500\hspace{1em}& 0\hspace{1em}& 0\hspace{1em}& 0\\ {} 1\hspace{1em}& 0.500\hspace{1em}& 0.429\hspace{1em}& 0.929\\ {} 1\hspace{1em}& 0.571\hspace{1em}& 0.500\hspace{1em}& 1\\ {} 1\hspace{1em}& 0.071\hspace{1em}& 0\hspace{1em}& 0.500\end{array}\right].\end{array}\]
Step 4: Calculate the multi-criterion preference index of the alternative ${A_{i}}$ over the alternative ${A_{k}}$ $(k=1,\dots ,m)$; we obtain
\[ \prod ({A_{i}},{A_{k}})=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.500\hspace{1em}& 0.100\hspace{1em}& 0.115\hspace{1em}& 0.100\\ {} 0.900\hspace{1em}& 0.500\hspace{1em}& 0.500\hspace{1em}& 0.856\\ {} 0.885\hspace{1em}& 0.500\hspace{1em}& 0.500\hspace{1em}& 0.794\\ {} 0.900\hspace{1em}& 0.144\hspace{1em}& 0.206\hspace{1em}& 0.500\end{array}\right].\]
Step 5: Calculate the positive flow and the negative flow of each alternative ${A_{i}}$ $(i=1,2,3,4)$ based on formulas (15)–(16); we have
\[\begin{array}{l}\displaystyle {\varphi ^{+}}({A_{i}})=(0.815,2.756,2.68,1.75),\\ {} \displaystyle {\varphi ^{-}}({A_{i}})=(3.185,1.244,1.32,2.25).\end{array}\]
Step 6: Calculate the net flow of each ${A_{i}}$ $(i=1,2,3,4)$ based on formula (17); we have
\[ \varphi ({A_{i}})=(-2.37,1.511,1.359,-0.5).\]
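The computations of Steps 4–6 can be sketched as below. The matrices $p_{j}$ are those printed in Step 3; note that the preference index of Step 4 (and hence the flows) is reproduced by the given weights ${(0.2,0.1,0.7)^{T}}$, matching the corresponding row of Table 3.

```python
# Sketch of Steps 4-6: combine the per-attribute possibility degree
# matrices p_j (printed in Step 3) into the multi-criterion preference
# index, then take row sums as positive flows, column sums as negative
# flows, and their difference as net flows (formulas (15)-(17), whose
# exact statements appear earlier in the paper).

p1 = [[0.500, 0.000, 0.115, 0.000],
      [1.000, 0.500, 1.000, 0.556],
      [0.885, 0.000, 0.500, 0.000],
      [1.000, 0.444, 1.000, 0.500]]
p2 = [[0.500, 1.000, 0.917, 1.000],
      [0.000, 0.500, 0.000, 0.944],
      [0.083, 1.000, 0.500, 0.944],
      [0.000, 0.056, 0.056, 0.500]]
p3 = [[0.500, 0.000, 0.000, 0.000],
      [1.000, 0.500, 0.429, 0.929],
      [1.000, 0.571, 0.500, 1.000],
      [1.000, 0.071, 0.000, 0.500]]
w = (0.2, 0.1, 0.7)   # the given weights, which reproduce Step 4
m = 4

pref = [[sum(wj * p[i][k] for wj, p in zip(w, (p1, p2, p3)))
         for k in range(m)] for i in range(m)]
phi_pos = [sum(pref[i][k] for k in range(m)) for i in range(m)]  # row sums
phi_neg = [sum(pref[i][k] for i in range(m)) for k in range(m)]  # column sums
net = [pp - pn for pp, pn in zip(phi_pos, phi_neg)]
print([round(x, 2) for x in net])  # close to (-2.37, 1.511, 1.359, -0.5)
```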
Step 7: Rank the alternatives.
According to the value of $\varphi ({A_{i}})$, the ranking is ${A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$.

4.2 Discussion

In this part, we discuss the effectiveness and advantages of our proposed method.
First, we verify the effectiveness of the method. The weights of our proposed method, $w={(0.298,0.377,0.325)^{T}}$, are calculated by the maximum deviation method, whereas the method proposed by Gou and Xu (2016) used the given weights $\omega ={(0.2,0.1,0.7)^{T}}$. To eliminate this difference and make the comparison clearer, we re-calculate this example with our proposed method under the given weights $\omega ={(0.2,0.1,0.7)^{T}}$, and we also re-calculate the method of Gou and Xu (2016) under the weights $w={(0.298,0.377,0.325)^{T}}$ obtained by our proposed method.
The ranking results of the different methods under the different weights are shown in Table 3.
From Table 3, we can find that: (1) under the given weights $\omega ={(0.2,0.1,0.7)^{T}}$, the PL-PROMETHEE II method and the PL-TOPSIS method produce the same ranking; (2) under the weights $w={(0.298,0.377,0.325)^{T}}$, the two methods also produce the same ranking; (3) under the different weights, each method keeps the same best and second-best alternatives. So the method proposed in this paper is effective and feasible.
Table 3
Comparing the methods.
Method by Methods Final result Ranking
Gou and Xu (2016) $\begin{array}[t]{l}\text{PL-TOPSIS}\\ {} (w={(0.2,0.1,0.7)^{T}})\end{array}$ $\begin{array}[t]{l}\mathit{CI}({A_{1}})=-3.752,\hspace{2.5pt}\mathit{CI}({A_{2}})=0,\\ {} \mathit{CI}({A_{3}})=-0.566,\hspace{2.5pt}\mathit{CI}({A_{4}})=-2.618\end{array}$ ${A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$
Gou and Xu (2016) $\begin{array}[t]{l}\text{PL-TOPSIS}\\ {} (w={(0.298,0.377,0.325)^{T}})\end{array}$ $\begin{array}[t]{l}\mathit{CI}({A_{1}})=-0.818,\mathit{CI}({A_{2}})=-0.117,\\ {} \mathit{CI}({A_{3}})=-0.125,\mathit{CI}({A_{4}})=-1.094\end{array}$ ${A_{2}}\succ {A_{3}}\succ {A_{1}}\succ {A_{4}}$
Proposed method $\begin{array}[t]{l}\text{PL-PROMETHEEII}\\ {} (w={(0.2,0.1,0.7)^{T}})\end{array}$ $\begin{array}[t]{l}\varphi ({A_{1}})=-2.37,\varphi ({A_{2}})=1.511\\ {} \varphi ({A_{3}})=1.359,\hspace{2.5pt}\varphi ({A_{4}})=-0.5\end{array}$ ${A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$
Proposed method $\begin{array}[t]{l}\text{PL-PROMETHEEII}\\ {} (w={(0.298,0.377,0.325)^{T}})\end{array}$ $\begin{array}[t]{l}\varphi ({A_{1}})=-0.732,\hspace{2.5pt}\varphi ({A_{2}})=0.767\\ {} \varphi ({A_{3}})=0.727,\varphi ({A_{4}})=-0.763\end{array}$ ${A_{2}}\succ {A_{3}}\succ {A_{1}}\succ {A_{4}}$
Secondly, we show the advantages of the proposed method from two aspects. The first is that our method combines the PROMETHEE II method with the improved possibility degree formula. The improved possibility degree formula takes all the data into account, so it avoids the loss of information and provides a more accurate and convenient way to calculate the possibility degree. The second is that we combine the PLTSs with the PROMETHEE II method. Because the PLTSs allow the DMs to provide more than one LT together with its probability, they can express the DMs' opinions more flexibly and accurately, and are therefore more suitable for solving practical problems.
1. The advantage from the improved possibility degree formula.
Here we use the same method with two different possibility degree formulas for comparison; we also take the weights into consideration and therefore compare under two weight vectors. The comparison results are shown in Table 4.
Table 4
Comparing with the methods for different possibility degrees.
Method by Methods Net flow $\varphi ({A_{i}})$ Ranking
Bai et al. (2017) $\begin{array}[t]{l}\text{PL-PROMETHEEII}\\ {} (w={(0.2,0.1,0.7)^{T}})\end{array}$ $\begin{array}[t]{l}\varphi ({A_{1}})=-2.371,\varphi ({A_{2}})=0.167\\ {} \varphi ({A_{3}})=1.753,\hspace{2.5pt}\varphi ({A_{4}})=0.451\end{array}$ ${A_{3}}\succ {A_{4}}\succ {A_{2}}\succ {A_{1}}$
Bai et al. (2017) $\begin{array}[t]{l}\text{PL-PROMETHEEII}\\ {} (w={(0.298,0.377,0.325)^{T}})\end{array}$ $\begin{array}[t]{l}\varphi ({A_{1}})=-0.732,\varphi ({A_{2}})=0.037\\ {} \varphi ({A_{3}})=1.013,\hspace{2.5pt}\varphi ({A_{4}})=-0.319\end{array}$ ${A_{3}}\succ {A_{2}}\succ {A_{4}}\succ {A_{1}}$
Proposed method $\begin{array}[t]{l}\text{PL-PROMETHEEII}\\ {} (w={(0.2,0.1,0.7)^{T}})\end{array}$ $\begin{array}[t]{l}\varphi ({A_{1}})=-2.37,\varphi ({A_{2}})=1.511\\ {} \varphi ({A_{3}})=1.359,\hspace{2.5pt}\varphi ({A_{4}})=-0.5\end{array}$ ${A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$
Proposed method $\begin{array}[t]{l}\text{PL-PROMETHEEII}\\ {} (w={(0.298,0.377,0.325)^{T}})\end{array}$ $\begin{array}[t]{l}\varphi ({A_{1}})=-0.732,\varphi ({A_{2}})=0.767\\ {} \varphi ({A_{3}})=0.727,\hspace{2.5pt}\varphi ({A_{4}})=-0.763\end{array}$ ${A_{2}}\succ {A_{\mathrm{3}}}\succ {A_{1}}\succ {A_{4}}$
From Table 4, we can find that: (1) under the given weights $\omega ={(0.2,0.1,0.7)^{T}}$, the method proposed in this paper and the method proposed by Bai et al. (2017) give different ranking results; (2) under the weights $w={(0.298,0.377,0.325)^{T}}$, the two methods also give different ranking results; (3) each method gives different results under the different weights. In Example 1, some PLEs contain more than one LT, a situation for which the possibility degree formula of Bai et al. (2017) is not defined and which it does not take into account. The results from the method of Bai et al. (2017) deviate from reality because their formula considers only the maximum and minimum values, ignores the intermediate PLEs, and thus loses information and distorts the final result. The possibility degree formula proposed in this paper overcomes these shortcomings, so the proposed method provides a good solution to this problem.
To make this clear, we explain it in detail below.
Compared with the possibility degree formula proposed by Bai et al. (2017), the improved possibility degree formula proposed in this paper has clear advantages. The formula of Bai et al. (2017) has two weaknesses: first, when a PLTS contains only one LT, it cannot give a correct result; second, when two PLTSs have the same upper bound and lower bound (with the same probabilities), it loses the data information. The improved possibility degree formula proposed in this paper overcomes these weaknesses, further reduces the loss of information, and improves the accuracy of the final results.
2. The advantage from the PLTs.
The PLTSs are extended from the hesitant fuzzy linguistic term sets (HFLTSs). Based on the property of HFLTSs that all the terms have equal importance degree or weight, an HFLTS can be rewritten in the form of a PLTS in which all possible linguistic terms have the same probability.
Here, we change the PLTSs in Example 1 into the HFLTSs shown in Table 5 and Table 6.
Table 5
The decision matrix with HFL-PLTDs for Example 1.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$
${A_{1}}$ $\{{s_{0}}(0.5),{s_{1}}(0.5)\}$ $\{{s_{2}}(1)\}$ $\{{s_{-1}}(0.5),{s_{0}}(0.5)\}$
${A_{2}}$ $\{{s_{2}}(0.5),{s_{3}}(0.5)\}$ $\{{s_{0}}(1)\}$ $\{{s_{1}}(0.33),{s_{2}}(0.33),{s_{3}}(0.33)\}$
${A_{3}}$ $\{{s_{1}}(1)\}$ $\{{s_{1}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{2}}(0.5),{s_{3}}(0.5)\}$
${A_{4}}$ $\{{s_{2}}(0.5),{s_{3}}(0.5)\}$ $\{{s_{-2}}(0.25),{s_{-1}}(0.25),{s_{0}}(0.25),{s_{1}}(0.25)\}$ $\{{s_{1}}(1)\}$
Table 6
The normalized decision matrix with HFL-PLTDs for Example 1.
${C_{1}}$ ${C_{2}}$ ${C_{3}}$
${A_{1}}$ $\{{s_{0}}(0),{s_{0}}(0),{s_{0}}(0.5),{s_{1}}(0.5)\}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(0),{s_{2}}(1)\}$ $\{{s_{-1}}(0),{s_{-1}}(0),{s_{-1}}(0.5),{s_{0}}(0.5)\}$
${A_{2}}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(0.5),{s_{3}}(0.5)\}$ $\{{s_{0}}(0),{s_{0}}(0),{s_{0}}(0),{s_{0}}(1)\}$ $\{{s_{1}}(0),{s_{1}}(0.33),{s_{2}}(0.33),{s_{3}}(0.33)\}$
${A_{3}}$ $\{{s_{1}}(0),{s_{1}}(0),{s_{1}}(0),{s_{1}}(1)\}$ $\{{s_{1}}(0),{s_{1}}(0),{s_{1}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(0.5),{s_{3}}(0.5)\}$
${A_{4}}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(0.5),{s_{3}}(0.5)\}$ $\{{s_{-2}}(0.25),{s_{-1}}(0.25),{s_{0}}(0.25),{s_{1}}(0.25)\}$ $\{{s_{1}}(0),{s_{1}}(0),{s_{1}}(0),{s_{1}}(1)\}$
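The conversion used for Tables 5 and 6 — every term of a hesitant fuzzy linguistic term set receives the same probability $1/n$ — can be sketched as follows. The function name and the `(subscript, probability)` pair representation are illustrative; probabilities are rounded to two decimals as in Table 5.

```python
# Hedged sketch: rewrite a hesitant fuzzy linguistic term set (given by
# its term subscripts) as a probabilistic linguistic term set in which
# every term carries the equal probability 1/n, rounded to two decimals
# as in Table 5.

def hfl_to_plts(subscripts):
    n = len(subscripts)
    return [(s, round(1 / n, 2)) for s in subscripts]

# Row A2 under C3 of Table 5: {s1, s2, s3} -> {s1(0.33), s2(0.33), s3(0.33)}
print(hfl_to_plts([1, 2, 3]))
```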
Then we use the same method and the same weights to obtain the ranking results shown in Table 7.
Table 7
Ranking results by different methods with HFL-PLTDs for Example 1.
Method by Methods Expected values $E({L_{i}})$ Ranking
Extended method $\begin{array}[t]{l}\text{HFLR-PROMETHEEII}\\ {} (w={(0.2,0.1,0.7)^{T}})\end{array}$ $\begin{array}[t]{l}\varphi ({A_{1}})=-2.377,\hspace{2.5pt}\varphi ({A_{2}})=0.795\\ {} \varphi ({A_{3}})=1.458,\hspace{2.5pt}\varphi ({A_{4}})=0.124\end{array}$ ${A_{3}}\succ {A_{2}}\succ {A_{4}}\succ {A_{1}}$
Proposed method $\begin{array}[t]{l}\text{PL-PROMETHEEII}\\ {} (w={(0.2,0.1,0.7)^{T}})\end{array}$ $\begin{array}[t]{l}\varphi ({A_{1}})=-2.370,\hspace{2.5pt}\varphi ({A_{2}})=1.511\\ {} \varphi ({A_{3}})=1.359,\hspace{2.5pt}\varphi ({A_{4}})=-0.500\end{array}$ ${A_{2}}\succ {A_{3}}\succ {A_{4}}\succ {A_{1}}$
From Table 7, we can find that the two methods produce different ranking results. They use the same weights and the same PROMETHEE II procedure; they differ only in the linguistic term sets used. Because all the terms of a hesitant fuzzy linguistic term set have equal probability, the effect of the probabilities is ignored. In practical problems, the probabilities have a great effect. Therefore, the PLTSs are better suited to dealing with practical problems and are more effective than the hesitant fuzzy linguistic term sets.

5 Conclusion

The PLTSs can reflect the different importance degrees or weights of all possible LTs, so they can express linguistic fuzzy information more fully, and some MADM methods have already been extended to process PLTSs. In addition, the PROMETHEE II method is a simple and flexible complete ranking method, and its flexibility is manifested in its use of different preference functions. In this paper, we proposed an improved possibility degree formula and applied it to the preference function; on this basis, we proposed the PL-PROMETHEE II method. Finally, an example shows that this method is more flexible and general than the existing methods for solving MADM problems with PLI.
In future research, it is necessary to apply the proposed method to some real decision making problems, or to extend it to new types of fuzzy information and to consensus models, heterogeneous models, or models under incomplete preferences (Capuano et al., 2017; Liu et al., 2017a; Zhang et al., 2017).

Acknowledgements

This paper is supported by the National Natural Science Foundation of China (Nos. 71771140 and 71471172), the Special Funds of Taishan Scholars Project of Shandong Province (No. ts201511045), Shandong Provincial Social Science Planning Project (Nos. 17BGLJ04, 16CGLJ31 and 16CKJJ27), the Natural Science Foundation of Shandong Province (No. ZR2017MG007), and Key Research and Development Program of Shandong Province (No. 2016GNC110016).

References

 
Bai, C.Z., Zhang, R., Qian, L.X., Wu, Y.N. (2017). Comparisons of probabilistic linguistic term sets for multi-criteria decision making. Knowledge-Based Systems, 119(C), 284–291.
 
Brans, J.P., Vincke, P.A. (1985). A preference ranking organization method: the PROMETHEE method for MCDM. Management Science, 31(6), 647–656.
 
Cabrerizo, F.J., Ureña, M.R., Pedrycz, W., Herrera-Viedma, E. (2014). Building consensus in group decision making with an allocation of information granularity. Fuzzy Sets and Systems, 255, 115–127.
 
Cabrerizo, F.J., Al-Hmouz, R., Morfeq, A., Balamash, A.S., Martínez, M.A., Herrera-Viedma, E. (2017). Soft consensus measures in group decision making using unbalanced fuzzy linguistic information. Soft Computing, 21(11), 3037–3050.
 
Capuano, N., Chiclana, F., Fujita, H., Herrera-Viedma, E., Loia, V. (2017). Fuzzy group decision making with incomplete information guided by social influence. IEEE Transactions on Fuzzy Systems. https://doi.org/10.1109/TFUZZ.2017.2744605. In press.
 
Chen, C.T., Hung, W.Z., Cheng, H.L. (2011). Applying linguistic PROMETHEE method in investment portfolio decision-making. International Journal of Electronic Business Management, 9(2), 139–148.
 
Chiclana, F., Tapia Garcia, J.M., Del Moral, M.J., Herrera-Viedma, E. (2013). A statistical comparative study of different similarity measures of consensus in group decision making. Information Sciences, 221(2), 110–123.
 
Del Moral, M.J., Chiclana, F., Tapia, J.M., Herrera-Viedma, E. (2017). A comparative study on consensus measures in group decision making. International Journal of Intelligent Systems. https://doi.org/10.1002/int.21954. In press.
 
Dong, Y.C., Li, C.C., Herrera, F. (2015). An optimization-based approach to adjusting unbalanced linguistic preference relations to obtain a required consistency level. Information Sciences, 292(5), 27–38.
 
Gomes, L.F.A.M., Lima, M.M.P.P. (1991). TODIM: basics and application to multicriteria ranking of projects with environmental impacts. Foundations of Computing and Decision Sciences, 16(1), 113–127.
 
Gou, X., Xu, Z. (2016). Novel basic operational laws for linguistic terms, hesitant fuzzy linguistic term sets and probabilistic linguistic term sets. Information Sciences, 372, 407–427.
 
Greco, S., Kadziński, M., Mousseau, V., Słowiński, R. (2011). ELECTRE: robust ordinal regression for outranking methods. European Journal of Operational Research, 214(1), 118–135.
 
Li, A.J., Quan, X.C., Wang, Y., Tao, K., Guo, L.Y. (2012). Selection of contaminated site soil remediation technology based on PROMETHEE II. Chinese Journal of Environmental Engineering, 6(10), 3767–3773.
 
Li, Y.H., Liu, P., Chen, Y. (2014). Some single valued neutrosophic number heronian mean operators and their application in multiple attribute group decision making. Informatica, 27(1), 85–110.
 
Li, C.C., Dong, Y., Herrera, F., Herrera-Viedma, E., Martínez, L. (2017). Personalized individual semantics in computing with words for supporting linguistic group decision making, an application on consensus reaching. Information Fusion, 33(1), 29–40.
 
Lin, M.W., Xu, Z.S. (2017). Probabilistic linguistic distance measures and their applications in multi-criteria group decision making. Soft Computing Applications for Group Decision-Making and Consensus Modeling, 357, 411–440.
 
Liu, P. (2013). Some Hamacher aggregation operators based on interval-valued intuitionistic fuzzy numbers and their application to group decision making. Applied Mathematical Modelling, 37(4), 2430–2444.
 
Liu, P. (2017). Multiple attribute group decision making method based on interval-valued intuitionistic fuzzy power Heronian aggregation operators. Computers & Industrial Engineering, 108, 199–212.
 
Liu, P., Chen, S.M. (2017). Group decision making based on Heronian aggregation operators of intuitionistic fuzzy numbers. IEEE Transactions on Cybernetics, 47(9), 2514–2530.
 
Liu, P., Jin, F. (2012). Methods for aggregating intuitionistic uncertain linguistic variables and their application to group decision making. Information Sciences, 205(1), 58–71.
 
Liu, P., Li, H. (2017). Interval-valued intuitionistic fuzzy power Bonferroni aggregation operators and their application to group decision making. Cognitive Computation, 9(4), 494–512.
 
Liu, P., Shi, L. (2017). Some Neutrosophic uncertain linguistic number Heronian mean operators and their application to multi-attribute group decision making. Neural Computing and Applications, 28(5), 1079–1093.
 
Liu, P., Su, Y. (2012). Multiple attribute decision making method based on the trapezoid fuzzy linguistic hybrid harmonic averaging operator. Informatica, 36(1), 83–90.
 
Liu, P., Tang, G. (2016). Multi-criteria group decision-making based on interval neutrosophic uncertain linguistic variables and Choquet integral. Cognitive Computation, 8(6), 1036–1056.
 
Liu, P., Teng, F. (2016). An extended TODIM method for multiple attribute group decision-making based on 2-dimension uncertain linguistic variable. Complexity, 21(5), 20–30.
 
Liu, P., Wang, P. (2017). Some improved linguistic intuitionistic fuzzy aggregation operators and their applications to multiple-attribute decision making. International Journal of Information Technology & Decision Making, 16(3), 817–850.
 
Liu, P., He, L., Yu, X.C. (2016a). Generalized hybrid aggregation operators based on the 2-dimension uncertain linguistic information for multiple attribute group decision making. Group Decision and Negotiation, 25(1), 103–126.
 
Liu, P., Zhang, L., Liu, X., Wang, P. (2016b). Multi-valued Neutrosophic number Bonferroni mean operators and their application in multiple attribute group decision making. International Journal of Information Technology & Decision Making, 15(5), 1181–1210.
 
Liu, P., Chen, S.M., Liu, J. (2017a). Some intuitionistic fuzzy interaction partitioned Bonferroni mean operators and their application to multi-attribute group decision making. Information Sciences, 411, 98–121.
 
Liu, W., Dong, Y., Chiclana, F., Cabrerizo, F.J., Herrera-Viedma, E. (2017b). Group decision-making based on heterogeneous preference relations with self-confidence. Fuzzy Optimization and Decision Making, 16(4), 429–447.
 
Massanet, S., Riera, J.V., Torrens, J., Herrera-Viedma, E. (2014). A new linguistic computational model based on discrete fuzzy numbers for computing with words. Information Sciences, 258, 277–290.
 
Milica, L., Milena, J. (2016). The comparison of the energy performance of hotel buildings using PROMETHEE decision-making method. Thermal Science, 20(1), 197–208.
 
Morente-Molinera, J.A., Mezei, J., Carlsson, C., Herrera-Viedma, E. (2017). Improving supervised learning classification methods using multi-granular linguistic modelling and fuzzy entropy. IEEE Transactions on Fuzzy Systems, 25(5), 1078–1089.
 
Pang, Q., Xu, Z.S., Wang, H. (2016). Probabilistic linguistic term sets in multi-attribute group decision making. Information Sciences, 369, 128–143.
 
Rodriguez, R.M., Martinez, L., Herrera, F. (2012). Hesitant fuzzy linguistic term sets for decision making. IEEE Transactions on Fuzzy Systems, 20(1), 109–119.
 
Senvar, O., Tuzkaya, G., Kahraman, C. (2014). Multi criteria supplier selection using fuzzy PROMETHEE method. Supply Chain Management Under Fuzziness, 313, 21–34.
 
Tan, Q.Y., Feng, X.Q., Zhang, H.R. (2016). Hesitant fuzzy language based on possibility degree PROMETHEE. Statistics Methods and Decision Making, 9, 82–85.
 
Turskis, Z., Zavadskas, E.K. (2010). A novel method for multiple criteria analysis: grey additive ratio assessment (ARAS-G) method. Informatica, 21(4), 597–610.
 
Wang, J.Q., Li, J.J. (2009). The multi-criteria group decision making method based on multi-granularity intuitionistic two semantics. Science & Technology Information, 33, 8–9.
 
Wang, S., Liu, J. (2017). Extension of the TODIM method to intuitionistic linguistic multiple attribute decision making. Symmetry, 9(6), 95. https://doi.org/10.3390/sym9060095.
 
Wu, J., Chiclana, F., Herrera-Viedma, E. (2015). Trust based consensus model for social network in an incomplete linguistic information context. Applied Soft Computing, 35, 827–839.
 
Ye, J. (2017). Multiple attribute decision-making method using correlation coefficients of normal neutrosophic sets. Symmetry, 9(6), 80. https://doi.org/10.3390/sym9060080.
 
You, X., Chen, T., Yang, Q. (2016). Approach to multi-criteria group decision-making problems based on the best-worst-method and ELECTRE method. Symmetry, 8(9), 95. https://doi.org/10.3390/sym8090095.
 
Zadeh, L.A. (1975a). The concept of a linguistic variable and its application to approximate reasoning – I. Information Science, 8(3), 199–249.
 
Zadeh, L.A. (1975b). The concept of a linguistic variable and its application to approximate reasoning – II. Information Science, 8, 301–353.
 
Zadeh, L.A. (1975c). The concept of a linguistic variable and its application to approximate reasoning – III. Information Science, 9, 43–80.
 
Zhang, X.L., Xing, X.M. (2017). Probabilistic linguistic VIKOR method to evaluate green supply chain initiatives. Sustainability, 9(7), 1231.
 
Zhang, H.J., Dong, Y., Herrera-Viedma, E. (2017). Consensus building for the heterogeneous large-scale GDM with the individual concerns and satisfactions. IEEE Transaction on Fuzzy Systems. https://doi.org/10.1109/TFUZZ.2017.2697403.
 
Zhao, J., You, X.Y., Liu, H.C., Wu, S.M. (2017). An extended VIKOR method using intuitionistic fuzzy sets and combination weights for supplier selection. Symmetry, 9(9), 169. https://doi.org/10.3390/sym9090169.

Biographies

Liu Peide
Peide.liu@gmail.com

P. Liu received the BS and MS degrees in signal and information processing from Southeast University, Nanjing, China, in 1988 and 1991, respectively, and the PhD degree in information management from Beijing Jiaotong University, Beijing, China, in 2010. He is currently a professor with the School of Management Science and Engineering, Shandong University of Finance and Economics, Shandong, China. He is an associate editor of the Journal of Intelligent and Fuzzy Systems, a member of the editorial board of the journal Technological and Economic Development of Economy, and a member of the editorial boards of 12 other journals. He has authored or coauthored more than 200 publications. His research interests include aggregation operators, fuzzy logic, fuzzy decision making, and their applications.

Li Ying

Y. Li received the BS degree in logistics management from Shandong University of Finance and Economics, Jinan, China, in 2015. She is currently pursuing a master's degree in management science and engineering at Shandong University of Finance and Economics. Her research interests include aggregation operators, fuzzy logic, fuzzy decision making, and their applications.


Copyright
© 2018 Vilnius University. Open access article under the CC BY license.

Keywords
probabilistic linguistic term, PROMETHEE II, MADM
