1 Introduction
Due to the uncertainty and complexity of the decision environment, as well as the ambiguity of human thinking, it is often impossible to accurately describe the attribute values of MADM problems by crisp numbers (Liu, 2017; Li et al., 2014; Liu and Su, 2012; Liu and Chen, 2017; Liu and Li, 2017; Liu et al., 2016b, 2017a, 2017b; Ye, 2017); however, they can be depicted more conveniently by linguistic terms (LTs). For example, when the risk of an investment object is evaluated, decision makers (DMs) are more likely to use “high”, “medium”, “low” and other similar LTs to express their assessment results (Dong et al., 2015; Wu et al., 2015), i.e. LTs are more consistent with people’s habits of thinking. In order to use LTs scientifically, Zadeh (1975a, 1975b, 1975c) first proposed the linguistic variable (LV), which provided the foundation for linguistic MADM (LMADM). Further, LVs have been extended to some new types for different kinds of fuzzy information, such as intuitionistic linguistic sets and intuitionistic linguistic numbers (Liu and Wang, 2017; Wang and Li, 2009), intuitionistic uncertain LVs (IULVs) (Liu and Jin, 2012), interval-valued IULVs (Liu, 2013), 2-dimension uncertain LVs (Liu et al., 2016a; Liu and Teng, 2016), neutrosophic uncertain LVs (Liu and Shi, 2017; Liu and Tang, 2016), and so on.
Now, there are many linguistic models to process linguistic information, such as LVs, multi-granular LTs (Cabrerizo et al., 2014) and unbalanced LTs (Cabrerizo et al., 2017). Obviously, these models can express the preferences or assessment judgments of DMs by only one LT. However, in practice, the DMs may hesitate among several possible LTs. In order to deal with such situations, Rodriguez et al. (2012) proposed the hesitant fuzzy LTSs (HFLTSs). An HFLTS consists of some possible LTs provided by the DMs, and all of these terms have equal importance degrees or weights. However, the HFLTSs raise a new question: do all the LTs really have the same importance degree, and if not, how should this be described? Based on this question, Pang et al. (2016) extended the HFLTSs to a more general concept, named probabilistic LTSs (PLTSs). The PLTSs allow the DMs to provide more than one LT with probabilities, which express the importance degrees or weights of all the possible evaluation values. In PLTSs, the LTs can be expressed in multi-granular (Cabrerizo et al., 2014) or unbalanced (Cabrerizo et al., 2017) linguistic form, which improves the flexibility of the expression of linguistic information. So, compared with other forms of linguistic information, PLTSs are more suitable for solving practical problems. At present, there are some studies on PLTSs; for example, Gou and Xu (2016) proposed some novel operational laws for PLTSs, Bai et al. (2017) proposed a possibility degree formula for PLTSs, Pang et al. (2016) extended the TOPSIS method to PLTSs, and Zhang and Xing (2017) extended the VIKOR method to PLTSs.
There are many traditional MADM methods, such as the TOPSIS method (Pang et al., 2016), the VIKOR method (Tan et al., 2016; Zhang et al., 2017), the TODIM method (Gomes and Lima, 1991; Liu et al., 2017a; Wang and Liu, 2017), the ELECTRE method (Greco et al., 2011; You et al., 2016) and the PROMETHEE method (Brans and Vincke, 1985), as well as other well-known decision models such as the consensus model (Chiclana et al., 2013; Morente-Molinera et al., 2017) and the Grey Additive Ratio Assessment method (Turskis and Zavadskas, 2010). Each has its advantages. The TOPSIS method (Pang et al., 2016) determines the best solution as the one with the shortest distance to the ideal solution and the farthest distance from the negative-ideal solution; the VIKOR method (Tan et al., 2016; Zhao et al., 2017) can give some compromise alternatives based on conflicting criteria; the TODIM method (Gomes and Lima, 1991; Liu et al., 2017b; Wang and Liu, 2017) can consider the DMs’ bounded rationality; and the consensus model (Chiclana et al., 2013; Morente-Molinera et al., 2017) is a dynamic and iterative group discussion process. These methods have been extended to many linguistic environments (Li et al., 2017), such as multi-granular LVs (Cabrerizo et al., 2014; Morente-Molinera et al., 2017), unbalanced LVs (Cabrerizo et al., 2017) and discrete LVs (Massanet et al., 2014). The PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) method was first proposed by Brans and Vincke (1985) in the 1980s; its advantage is that it is simpler than other ranking methods and it can give a total ranking which comprises preferences as well as indifferences. In general, it includes the PROMETHEE I method and the PROMETHEE II method. The PROMETHEE I method gives a partial ranking of the alternatives, including possible incomparability, while the PROMETHEE II method provides a total ranking based on the net flow. Obviously, the PROMETHEE II method is preferable to PROMETHEE I because it involves no incomparability, even when the comparison is difficult. The PROMETHEE II method has been applied to many areas; for example, Senvar et al. (2014) applied it to multiple criteria supplier selection, and Milica and Milena (2016) applied it to hotel energy performance comparison. In addition, studies on the PROMETHEE II method with different preference functions have also gained great attention (Chen et al., 2011; Li et al., 2012), and Tan et al. (2016) extended the PROMETHEE II method to the hesitant fuzzy linguistic (HFL) environment. However, the existing PROMETHEE II method cannot process PLTSs.
The possibility degree formula for PLTSs was first proposed by Bai et al. (2017); its aim is to rank PLTSs in a more suitable and accurate way. It overcomes some weaknesses that exist in other ranking methods and, to some extent, it can reduce the loss of information. However, in practical experiments we find that this possibility degree formula also has some weaknesses. The first one is that when a PLTS has only one LT, this formula cannot give the correct result. The second one is that when two PLTSs have the same upper limit and lower limit (with the same corresponding probabilities), this formula ignores part of the data information. So, in this paper, motivated by Tan et al. (2016), we first propose an improved possibility degree formula for PLTSs, which is applied in the preference function of the PROMETHEE II method; then the PROMETHEE II method is extended to the probabilistic linguistic (PL) environment, with the preference function built on the possibility degree formula for PLTSs.
As discussed above, PLTSs allow DMs to express their preferences over several LTs with different probabilities, and the PROMETHEE II method is a simple and flexible total ranking method. Its flexibility is manifested in its use of different preference functions; here, we propose the improved possibility degree formula and apply it to the preference function. In addition, in order to obtain the objective weights of the attributes, we use the maximum deviation method to calculate the weights. Therefore, it is meaningful and necessary to extend the PROMETHEE II method to the probabilistic linguistic information (PLI) environment. Motivated by this idea, the goals and contributions of this paper are (1) to extend the PROMETHEE II method to PLI and propose the PL-PROMETHEE II method; (2) to propose the improved possibility degree formula; (3) to develop a weight determination method based on the PL distance measures; (4) to show the feasibility and advantages of the proposed methods.
In order to achieve the above goals, the remainder of this paper is organized as follows. Section 2 gives some basic concepts of PLI and the PROMETHEE II method, and proposes the improved possibility degree formula. In Section 3, we extend the PROMETHEE II method to the PLI environment (PL-PROMETHEE II). In Section 4, we give an example to illustrate the effectiveness of the proposed method. In Section 5, we give the conclusions and directions for future studies.
2 Preliminaries
In this part, we introduce some basic concepts so that readers can easily understand this study.
2.1 PLTS
Definition 1 (See Pang et al., 2016).
Let ${S_{1}}=\{{s_{\alpha }}\mid \alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$ be an LTS. A PLTS is defined as
\[ \mathit{LS}(p)=\Big\{{\mathit{LS}^{(k)}}\big({p^{(k)}}\big)\hspace{2.5pt}\Big|\hspace{2.5pt}{\mathit{LS}^{(k)}}\in {S_{1}},\ {p^{(k)}}\geqslant 0,\ k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p),\ {\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}\leqslant 1\Big\},\]
where ${\mathit{LS}^{(k)}}({p^{(k)}})$ represents the LT ${\mathit{LS}^{(k)}}$ with the probability ${p^{(k)}}$, and $\mathrm{\# }\mathit{LS}(p)$ is the number of all different LTs in $\mathit{LS}(p)$.
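As a purely illustrative example (not taken from the cited papers), consider $\tau =3$, so that ${S_{1}}=\{{s_{-3}},\dots ,{s_{3}}\}$. A DM who hesitates between ${s_{1}}$ and ${s_{2}}$, leaning towards ${s_{1}}$, could provide the PLTS
\[ \mathit{LS}(p)=\big\{{s_{1}}(0.6),\hspace{2.5pt}{s_{2}}(0.3)\big\},\]
whose probabilities sum to $0.9<1$; such incomplete probability information is exactly what the normalization in Section 2.2 handles.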
2.2 Normalization of PLTS
Definition 2 (See Pang et al., 2016).
Given a PLTS $\mathit{LS}(p)$ with ${\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}<1$, the associated PLTS $\dot{L}S(p)=\{{\mathit{LS}^{(k)}}({\dot{p}^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$ is called the normalized PLTS, where ${\dot{p}^{(k)}}={p^{(k)}}/{\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}$ for all $k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)$. Obviously, in the PLTS $\dot{L}S(p)$, we have ${\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{\dot{p}^{(k)}}=1$.
Based on Definition 2, the probabilities of all LTs are normalized.
Definition 3 (See Pang et al., 2016).
For any two PLTSs ${\mathit{LS}_{1}}(p)=\{{\mathit{LS}_{1}^{(k)}}({p_{1}^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }{\mathit{LS}_{1}}(p)\}$ and ${\mathit{LS}_{2}}(p)=\{{\mathit{LS}_{2}^{(k)}}({p_{2}^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }{\mathit{LS}_{2}}(p)\}$, suppose $\mathrm{\# }{\mathit{LS}_{1}}(p)$ and $\mathrm{\# }{\mathit{LS}_{2}}(p)$ are the numbers of LTs in ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$, respectively. If $\mathrm{\# }{\mathit{LS}_{1}}(p)>\mathrm{\# }{\mathit{LS}_{2}}(p)$, then $\mathrm{\# }{\mathit{LS}_{1}}(p)-\mathrm{\# }{\mathit{LS}_{2}}(p)$ LTs will be added to ${\mathit{LS}_{2}}(p)$ so that the numbers of their LTs are equal. The added LTs are the smallest ones in ${\mathit{LS}_{2}}(p)$, and their probabilities are zero. For detailed information, please refer to Pang et al. (2016).
By Definitions 2 and 3, the normalized PLTS (NPLTS) is obtained, which is denoted as ${\mathit{LS}^{N}}(p)=\{{\mathit{LS}^{N(k)}}({p^{N(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$, where ${p^{N(k)}}={p^{(k)}}/{\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}$ for all $k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)$.
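To make the normalization in Definitions 2 and 3 concrete, the following is a minimal sketch in Python, assuming a PLTS is stored as a list of (subscript, probability) pairs; the function and variable names are illustrative and not taken from the cited papers.

```python
# A minimal sketch of Definitions 2 and 3 (assumed representation: a PLTS is a
# list of (subscript, probability) pairs; names are illustrative).

def normalize_probabilities(plts):
    """Definition 2: rescale the probabilities so that they sum to 1."""
    total = sum(p for _, p in plts)
    return [(r, p / total) for r, p in plts]

def equalize_lengths(plts1, plts2):
    """Definition 3: pad the shorter PLTS with copies of its smallest LT,
    each with probability zero, so both PLTSs have the same length."""
    a, b = list(plts1), list(plts2)
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    if len(short) < len(long_):
        smallest = min(r for r, _ in short)
        short.extend([(smallest, 0.0)] * (len(long_) - len(short)))
    return a, b

# Example: LS1(p) = {s2(0.3), s3(0.6)} and LS2(p) = {s1(1)}.
ls1 = normalize_probabilities([(2, 0.3), (3, 0.6)])  # probabilities become 1/3, 2/3
ls1, ls2 = equalize_lengths(ls1, [(1, 1.0)])         # LS2 is padded with s1(0)
```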
2.3 Comparison Between PLTSs
To compare the PLTSs, Pang et al. (2016) presented the concept of the score of PLTSs:
Definition 4 (See Pang et al., 2016).
Suppose $\mathit{LS}(p)=\{{\mathit{LS}^{(k)}}({p^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$ is a PLTS, and ${r^{(k)}}$ is the subscript of the LT ${\mathit{LS}^{(k)}}$. Then the score function $E(\mathit{LS}(p))$ of $\mathit{LS}(p)$ is given by
\[ E\big(\mathit{LS}(p)\big)={s_{\bar{\alpha }}},\qquad \bar{\alpha }={\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}\big({r^{(k)}}{p^{(k)}}\big)\Big/{\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}.\]
Obviously, the score function represents the average value of all LTs of a PLTS. Generally, for a given PLTS $\mathit{LS}(p)$, $E(\mathit{LS}(p))$ is an extended LT.
Based on the score function of the PLTS, we define the following relationship between two PLTSs:
Definition 5 (See Pang et al., 2016).
For any two given PLTSs ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$, if $E({\mathit{LS}_{1}}(p))>E({\mathit{LS}_{2}}(p))$, then the PLTS ${\mathit{LS}_{1}}(p)$ is greater than ${\mathit{LS}_{2}}(p)$.
However, when $E({\mathit{LS}_{1}}(p))=E({\mathit{LS}_{2}}(p))$, we cannot compare the PLTSs ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$ by the score function alone. In order to handle this situation, we give the following definition.
Definition 6 (See Pang et al., 2016).
For a given PLTS $\mathit{LS}(p)=\{{\mathit{LS}^{(k)}}({p^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$, suppose ${r^{(k)}}$ is the subscript of the LT ${\mathit{LS}^{(k)}}$, and $E(\mathit{LS}(p))={s_{\bar{\alpha }}}$, where $\bar{\alpha }={\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}({r^{(k)}}{p^{(k)}})/{\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}$. Then the deviation degree of $\mathit{LS}(p)$ is
\[ \bar{\sigma }\big(\mathit{LS}(p)\big)={\bigg({\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{\big({p^{(k)}}({r^{(k)}}-\bar{\alpha })\big)^{2}}\bigg)^{1/2}}\Big/{\textstyle\sum _{k=1}^{\mathrm{\# }\mathit{LS}(p)}}{p^{(k)}}.\]
For any two PLTSs ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$ with $E({\mathit{LS}_{1}}(p))=E({\mathit{LS}_{2}}(p))$, if $\bar{\sigma }({\mathit{LS}_{1}}(p))>\bar{\sigma }({\mathit{LS}_{2}}(p))$, then ${\mathit{LS}_{1}}(p)<{\mathit{LS}_{2}}(p)$; if $\bar{\sigma }({\mathit{LS}_{1}}(p))=\bar{\sigma }({\mathit{LS}_{2}}(p))$, then ${\mathit{LS}_{1}}(p)$ is indifferent to ${\mathit{LS}_{2}}(p)$, denoted by ${\mathit{LS}_{1}}(p)\approx {\mathit{LS}_{2}}(p)$.
Definition 7 (See Pang et al., 2016).
For any two given PLTSs ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$, suppose $\tilde{L}{S_{1}}(p)$ and $\tilde{L}{S_{2}}(p)$ are the corresponding normalized PLTSs, respectively. Then
- (1) If $E(\tilde{L}{S_{1}}(p))>E(\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p)$.
- (2) If $E(\tilde{L}{S_{1}}(p))<E(\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)<{\mathit{LS}_{2}}(p)$.
- (3) If $E(\tilde{L}{S_{1}}(p))=E(\tilde{L}{S_{2}}(p))$, then
  - (i) If $\bar{\sigma }(\tilde{L}{S_{1}}(p))>\bar{\sigma }(\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)<{\mathit{LS}_{2}}(p)$.
  - (ii) If $\bar{\sigma }(\tilde{L}{S_{1}}(p))=\bar{\sigma }(\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)\approx {\mathit{LS}_{2}}(p)$.
  - (iii) If $\bar{\sigma }(\tilde{L}{S_{1}}(p))<\bar{\sigma }(\tilde{L}{S_{2}}(p))$, then ${\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p)$.
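To make the comparison rules in Definitions 4–7 concrete, the following is a minimal sketch, assuming a normalized PLTS is stored as a list of (subscript, probability) pairs; the deviation function follows the form sketched in Definition 6 (the exact expression is in Pang et al., 2016), and the names are illustrative.

```python
# A minimal sketch of the comparison rules in Definitions 4-7 (assumed
# representation: a normalized PLTS is a list of (subscript, probability) pairs).

def score(plts):
    """Definition 4: alpha-bar, the probability-weighted average subscript of E(LS(p))."""
    total_p = sum(p for _, p in plts)
    return sum(r * p for r, p in plts) / total_p

def deviation(plts):
    """Definition 6: deviation degree of the PLTS around alpha-bar (a sketch)."""
    a_bar = score(plts)
    total_p = sum(p for _, p in plts)
    return sum((p * (r - a_bar)) ** 2 for r, p in plts) ** 0.5 / total_p

def compare(plts1, plts2):
    """Definition 7: 1 if LS1(p) > LS2(p), -1 if LS1(p) < LS2(p), 0 if indifferent."""
    e1, e2 = score(plts1), score(plts2)
    if e1 != e2:
        return 1 if e1 > e2 else -1
    d1, d2 = deviation(plts1), deviation(plts2)
    if d1 == d2:
        return 0
    return 1 if d1 < d2 else -1  # the PLTS with the smaller deviation degree ranks higher
```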
2.4 Ranking PLTSs by Possibility Degree
The possibility degree formula for PLTSs was first proposed by Bai et al. (2017), and its aim is to rank PLTSs in a more suitable and accurate way. It can overcome some weaknesses of the other ranking methods and, to some extent, it can reduce the loss of information.
Definition 8 (See Bai et al., 2017).
Let $\mathit{LS}(p)=\{{\mathit{LS}^{(k)}}({p^{(k)}})\mid k=1,2,\dots ,\mathrm{\# }\mathit{LS}(p)\}$ be a PLTS, and let ${r^{(k)}}$ be the subscript of the LT ${\mathit{LS}^{(k)}}$. Let ${\mathit{LS}^{-}}=\min ({r^{(k)}})$ and ${\mathit{LS}^{+}}=\max ({r^{(k)}})$ be the lower bound and the upper bound of $\mathit{LS}(p)$, respectively. Then $a{(LS)^{-}}$ is the lower area and $a{(LS)^{+}}$ is the upper area, where $a{(LS)^{-}}=\min ({r^{(k)}})\times {p^{(k)}}$ and $a{(LS)^{+}}=\max ({r^{(k)}})\times {p^{(k)}}$.
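As an illustration (assuming, as the notation suggests, that the probability in each product is the one attached to the corresponding bound term), take $\mathit{LS}(p)=\{{s_{2}}(0.3),{s_{3}}(0.7)\}$. Then
\[ {\mathit{LS}^{-}}=2,\qquad {\mathit{LS}^{+}}=3,\qquad a{(LS)^{-}}=2\times 0.3=0.6,\qquad a{(LS)^{+}}=3\times 0.7=2.1.\]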
Definition 9 (See Bai et al., 2017).
Let $S=\{{s_{\alpha }}\mid \alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$ be an LTS, and let ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$ be two PLTSs. The possibility degree of ${\mathit{LS}_{1}}(p)$ being not less than ${\mathit{LS}_{2}}(p)$ is defined as

where $a({\mathit{LS}_{1}}\cap {\mathit{LS}_{2}})$ represents the area of the intersection between ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$.
Definition 10 (See Bai et al., 2017).
If $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))>p({\mathit{LS}_{2}}(p)>{\mathit{LS}_{1}}(p))$, then ${\mathit{LS}_{1}}(p)$ is superior to ${\mathit{LS}_{2}}(p)$ with the degree of $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))$, denoted by ${\mathit{LS}_{1}}(p){\succ ^{p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))}}{\mathit{LS}_{2}}(p)$; if $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=1$, then ${\mathit{LS}_{1}}(p)$ is absolutely superior to ${\mathit{LS}_{2}}(p)$; if $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=0.5$, then ${\mathit{LS}_{1}}(p)$ is indifferent to ${\mathit{LS}_{2}}(p)$, denoted by ${\mathit{LS}_{1}}(p)={\mathit{LS}_{2}}(p)$.
However, in practical experiments, we find that this possibility degree formula also has some weaknesses.
The first one is that when a PLTS has only one LT, this formula cannot give the correct result. For example, for the two PLTSs ${\mathit{LS}_{1}}(p)=\{{s_{2}}(0.3),{s_{3}}(0.7)\}$ and ${\mathit{LS}_{2}}(p)=\{{s_{1}}(1)\}$, we can see that ${\mathit{LS}_{1}}(p)$ is absolutely superior to ${\mathit{LS}_{2}}(p)$, so we should obtain $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=1$. However, according to Eq. (4), we have
Obviously, this result is counterintuitive.
The second one is that when two PLTSs have the same upper limit and lower limit (with the same corresponding probabilities), this formula loses the data information of the middle elements. For example, for the two PLTSs ${\mathit{LS}_{3}}(p)=\{{s_{-1}}(0.3),{s_{2}}(0.3),{s_{3}}(0.4)\}$ and ${\mathit{LS}_{4}}(p)=\{{s_{-1}}(0.3),{s_{0}}(0.3),{s_{3}}(0.4)\}$, according to Eq. (5), we get the possibility degree $p({\mathit{LS}_{3}}(p)>{\mathit{LS}_{4}}(p))=0.5$; but intuitively, ${\mathit{LS}_{3}}(p)$ is superior to ${\mathit{LS}_{4}}(p)$, because the middle elements of the two PLTSs are different and the ranking results should therefore also be different.
Based on the above two aspects, we develop an improved possibility degree formula to overcome these defects.
First, we distinguish two conditions according to whether the two PLTSs have common LTs.
Condition 1.
All the probabilistic linguistic elements (PLEs) of one PLTS have subscripts that are smaller (bigger) than those of the other one, i.e. the two PLTSs have no common LTs.
Under this condition, we can directly judge the possibility degree of the PLTSs, and there are two cases: if all the PLEs in ${\mathit{LS}_{1}}(p)$ have bigger subscripts than those in ${\mathit{LS}_{2}}(p)$, then $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=1$; if all the PLEs in ${\mathit{LS}_{1}}(p)$ have smaller subscripts than those in ${\mathit{LS}_{2}}(p)$, then $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=0$. For example, for ${\mathit{LS}_{1}}(p)=\{{s_{2}}(0.3),{s_{3}}(0.7)\}$ and ${\mathit{LS}_{2}}(p)=\{{s_{1}}(1)\}$ considered above, we directly obtain $p({\mathit{LS}_{1}}(p)>{\mathit{LS}_{2}}(p))=1$, which overcomes the first weakness.
Condition 2.
When the two PLTSs have common PLEs, we can use the following formula to calculate the possibility degree:

In order to make sure that all the PLTSs have the same number of PLEs, we should normalize them first. In the above formula, $\mathrm{\# }{\mathit{LS}_{1}}$ and $\mathrm{\# }{\mathit{LS}_{2}}$ are the numbers of all LTs in ${\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}}(p)$, respectively, and they are equal.
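The two-condition structure can be summarized by the following minimal sketch, under the same representation as before. Condition 1 is implemented exactly as stated above; Condition 2 is left as a pluggable callable, since the improved formula itself is given by the expression above and is not reproduced here.

```python
# A minimal sketch of the two-condition dispatch for the possibility degree.

def possibility_degree(plts1, plts2, condition2):
    """p(LS1(p) >= LS2(p)) for two normalized, equal-length PLTSs, each a list
    of (subscript, probability) pairs; condition2 implements the improved formula."""
    subs1 = [r for r, _ in plts1]
    subs2 = [r for r, _ in plts2]
    if min(subs1) > max(subs2):     # Condition 1: LS1 lies entirely above LS2
        return 1.0
    if max(subs1) < min(subs2):     # Condition 1: LS1 lies entirely below LS2
        return 0.0
    return condition2(plts1, plts2)  # Condition 2: the PLTSs overlap
```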
2.5 PROMETHEE II Method
The PROMETHEE method is a multicriteria analysis method that was proposed by Brans and Vincke (1985) in 1985, and it includes PROMETHEE I and PROMETHEE II.
In the PROMETHEE I method, the solution set is sorted by the positive flow and the negative flow, and only a partial ranking of the solution set is obtained. The PROMETHEE II method obtains a complete ranking by the net flow.
Let $M=\{1,2,\dots ,m\}$ and $N=\{1,2,\dots ,n\}$, and suppose the decision matrix $X={[{x_{ij}}]_{m\times n}}$, where ${x_{ij}}$ is the j-th attribute value with respect to the i-th alternative, and then normalize $X={[{x_{ij}}]_{m\times n}}$ into $\tilde{X}={[{\tilde{x}_{ij}}]_{m\times n}}$, ${x_{ij}}$ and ${\tilde{x}_{ij}}$ are all crisp numbers, $i\in M$, $j\in N$.
The procedures of the PROMETHEE II method are shown as follows:
- Step 1. Determine the weight ${w_{j}}$ of the attribute ${C_{j}}$.
- Step 2. Utilize a preference function ${p_{j}}({x_{ij}})$ for each attribute ${C_{j}}$ (select the preference function according to the actual problem).
- Step 3. Calculate the multicriterion preference index of the alternative ${x_{i}}$ over the alternative ${x_{k}}$ $(k=1,\dots ,m)$ by using the following expression:
- Step 4. Calculate the positive flow and the negative flow of each alternative:
- Step 5. Calculate the net flow of each alternative:
- Step 6. According to the value of $\varphi ({x_{i}})$, rank all the alternatives.
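The following is a minimal sketch of Steps 3–6 for a crisp, normalized decision matrix, assuming the usual-criterion preference function ($p_{j}(d)=1$ if $d>0$, otherwise $0$) and the common convention of averaging the flows over the other $m-1$ alternatives; other preference functions or normalizations can be substituted.

```python
import numpy as np

def promethee_ii(X, w, pref=lambda d: 1.0 if d > 0 else 0.0):
    """X: m x n matrix of normalized benefit-type attribute values;
    w: attribute weights summing to 1. Returns the net flow of each alternative."""
    m, n = X.shape
    # Step 3: multicriterion preference index pi(x_i, x_k).
    pi = np.zeros((m, m))
    for i in range(m):
        for k in range(m):
            if i != k:
                pi[i, k] = sum(w[j] * pref(X[i, j] - X[k, j]) for j in range(n))
    # Step 4: positive and negative flows.
    phi_plus = pi.sum(axis=1) / (m - 1)
    phi_minus = pi.sum(axis=0) / (m - 1)
    # Step 5: net flow (Step 6 ranks the alternatives by descending net flow).
    return phi_plus - phi_minus

X = np.array([[0.7, 0.4], [0.5, 0.9], [0.6, 0.6]])
w = np.array([0.6, 0.4])
ranking = np.argsort(-promethee_ii(X, w))  # indices of alternatives, best first
```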
3 PL-PROMETHEE II Method
3.1 Determining Weights Based on the Maximum Deviation Method
In the MADM problem, the weight reflects the importance of each attribute. There are many methods which can determine attribute weights, such as the expert opinion survey method, the AHP method, and so on, but these methods involve subjective factors in determining the attribute weights. In order to avoid the influence of subjective factors, we use the maximum deviation method to calculate the objective weights of the attributes.
We use the distance to represent the deviation in the maximum deviation method. Lin and Xu (2017) provided a series of probabilistic linguistic (PL) distance measures; here, we use the normalized Hamming distance measure. Based on this measure, we form a systematic weight calculation method.
The steps are shown as follows:
- (1) Normalize the PL decision matrix $R={[\mathit{LS}{(p)_{ij}}]_{m\times n}}$;
- (2) According to the PL distance measure, calculate the distances between the evaluation values $\mathit{LS}{(p)_{ij}}$:
where ${\mathit{LS}_{1}^{(k1)}}({p_{1}^{k1}})\in {\mathit{LS}_{1}}(p)$ and ${\mathit{LS}_{2}^{(k2)}}({p_{2}^{k2}})\in {\mathit{LS}_{2}}(p)$ are two PLTEs, and $I({\mathit{LS}_{1}^{(k1)}})$ and $I({\mathit{LS}_{2}^{(k2)}})$ are the subscripts of the linguistic terms ${\mathit{LS}_{1}^{(k1)}}$ and ${\mathit{LS}_{2}^{(k2)}}$.
Finally, the weight of each attribute can be obtained as follows:
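A minimal sketch of this weighting scheme is given below. It rests on two assumptions: (i) the deviation between two normalized PLTS evaluations is measured by an assumed Hamming-type distance built from the probability-weighted subscripts, and (ii) each attribute weight is proportional to the total pairwise deviation of the alternatives on that attribute; the paper's exact expressions are its formulas (10)–(12).

```python
def pl_hamming(plts1, plts2):
    """Assumed normalized Hamming-type distance between two equal-length
    normalized PLTSs (lists of (subscript, probability) pairs)."""
    return sum(abs(r1 * p1 - r2 * p2)
               for (r1, p1), (r2, p2) in zip(plts1, plts2)) / len(plts1)

def max_deviation_weights(R):
    """R[i][j]: normalized PLTS evaluation of alternative i on attribute j.
    Returns weights proportional to the total pairwise deviation per attribute."""
    m, n = len(R), len(R[0])
    dev = [sum(pl_hamming(R[i][j], R[l][j]) for i in range(m) for l in range(m))
           for j in range(n)]
    total = sum(dev)
    return [d / total for d in dev]
```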
3.2 A Decision Making Method Based on the PL-PROMETHEE II Method
For a MADM problem with PLI, let $A=\{{A_{1}},{A_{2}},\dots ,{A_{m}}\}$ be a finite set of alternatives, $C=\{{C_{1}},{C_{2}},\dots ,{C_{n}}\}$ be the set of attributes and $\omega ={({\omega _{1}},{\omega _{2}},\dots ,{\omega _{n}})^{T}}$ be the weight vector of attributes ${C_{j}}$ $(j=1,2,\dots ,n)$, with ${\omega _{j}}\in [0,1]$, $j=1,2,\dots ,n$ and ${\textstyle\sum _{j=1}^{n}}{\omega _{j}}=1$. Suppose that $R={[\mathit{LS}{(p)_{ij}}]_{m\times n}}$ is the decision matrix, where $\mathit{LS}{(p)_{ij}}=\{{\mathit{LS}_{ij}^{(t)}}({p_{ij}^{(t)}})\mid t=1,2,\dots ,\mathrm{\# }\mathit{LS}{(p)_{ij}}\}$ is a PLTS, which is an evaluation value of alternative ${A_{i}}$ about attribute ${C_{j}}$. Then the goal is to rank the alternatives.
Step 1. Normalize the attribute values.
In real decision making, the attribute values have two types, i.e. cost type and benefit type. In order to eliminate the difference in types, we need to convert them to the same type.
We can convert the cost type to the benefit type, and the transformed decision matrix is expressed by $R={[\mathit{LS}{(p)_{ij}}]_{m\times n}}$, where
Then, according to Definitions 2 and 3, the PL decision matrix is normalized.
Step 2. Determine the weight ${w_{j}}$ of the attribute ${C_{j}}$ with the maximum deviation method by formulas (10)–(12).
Step 3. Calculate the multicriterion preference index of the alternative ${A_{i}}$ over the alternative ${A_{k}}$ $(k=1,\dots ,m)$ by the following expression:
Step 4. Calculate the positive flow and negative flow of each alternative:
Step 5. Calculate the net flow of each alternative:
Step 6. Rank the alternatives ${A_{i}}$ $(i=1,2,\dots ,m)$ according to the values of $\varphi ({A_{i}})$.
Step 7. End.
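As an end-to-end illustration of Steps 1–6, the following minimal sketch ties the pieces together for an already-normalized, benefit-type PL decision matrix. The attribute weights (e.g. from the maximum deviation method of Section 3.1) and the preference function (e.g. built from the improved possibility degree of Section 2.4) are passed in as parameters, since their exact expressions are given by the paper's own formulas; the averaging of the flows over the other $m-1$ alternatives is an assumed convention.

```python
def pl_promethee_ii(R, w, preference):
    """R[i][j]: normalized PLTS evaluation of alternative A_i on attribute C_j;
    w: attribute weights summing to 1; preference(ls1, ls2): a preference value
    in [0, 1]. Returns the net flow of each alternative."""
    m, n = len(R), len(R[0])
    net = []
    for i in range(m):
        # Steps 3-4: aggregate the preference index over the attributes, then
        # average over the other m - 1 alternatives to obtain the flows.
        plus = sum(sum(w[j] * preference(R[i][j], R[k][j]) for j in range(n))
                   for k in range(m) if k != i) / (m - 1)
        minus = sum(sum(w[j] * preference(R[k][j], R[i][j]) for j in range(n))
                    for k in range(m) if k != i) / (m - 1)
        net.append(plus - minus)  # Step 5: net flow
    return net  # Step 6: rank the alternatives by descending net flow
```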