1 Introduction
With the development of the mobile internet and “Internet +”, the sharing economy, including traffic-sharing, health care-sharing and food-sharing, has boomed in recent years. Car-sharing, a new mode of traffic-sharing, is becoming increasingly popular along with the development of tourism. People usually complete the car rental and provide feedback on their consumer experiences on sharing platforms. Therefore, how to select a suitable car sharing platform is important for tourists. Since car sharing platforms are often evaluated on safety, convenience, the brand of car and so on, the selection of a car sharing platform can be described as a kind of multi-attribute decision making problem. Ordinarily, several tourists travel together. Hence, more than one tourist (decision maker, DM for short) decides which car sharing platform is selected. Thereby, car sharing platform selection can be considered a kind of multi-attribute group decision making (MAGDM) problem (Xu et al., 2016; Wan et al., 2015; Dong et al., 2018; Kou et al., 2020; Zhang et al., 2019).
Owing to the complexity of problems and the vagueness of human thinking, it is difficult for DMs to describe decision information as crisp numbers. DMs are more apt to use linguistic variables (Zadeh, 1975) to evaluate alternatives. However, the linguistic variable allows DMs to express decision information with only one linguistic term (LT). Sometimes, DMs cannot adequately describe their evaluations using one exact LT and hesitate among several LTs. To overcome this drawback of the linguistic variable, Rodríguez et al. (2012) introduced the hesitant fuzzy linguistic term set (HFLTS), which permits DMs to express their preferences on alternatives with several possible LTs. Nevertheless, all possible LTs in an HFLTS have the same importance. In fact, DMs may prefer one LT to others. Thus, some useful information may be lost (Pang et al., 2016). To make up for this limitation, Pang et al. (2016) generalized the HFLTS and presented the probabilistic linguistic term set (PLTS), in which each possible LT is assigned a probability (weight). Hence, the PLTS contains more useful information than the HFLTS. In recent years, the PLTS has gained increasing attention and some research achievements have been obtained. These achievements can be roughly divided into four categories.
(1) Operations and aggregation operators of PLTSs. Operations and aggregation operators of PLTSs are important for aggregating decision information in the PLTS environment. Pang et al. (2016) first defined some operational laws of PLTSs, and then put forward the PLWA and PLWG operators of PLTSs, respectively. Subsequently, Zhang et al. (2017) pointed out that the result of the operational laws in Pang et al. (2016) is a linguistic term rather than a PLTS. To remedy this defect, Zhang et al. (2017) defined new operations of PLTSs. Nevertheless, the results of these new operations may exceed the bounds of LTSs. To overcome this limitation, Gou and Xu (2016) introduced novel operations of PLTSs via a linguistic scale function. Later, using this function, Mao et al. (2019) introduced new operational laws and developed the GPLHWA and GPLHOWA operators based on Archimedean t-norms and s-norms. Zhang (2018) investigated large group decision making and proposed a PL-WAA operator. Recently, diverse types of operators, such as power operators (Liu et al., 2019) and dependent weighted average operators (Liu et al., 2019), have been proposed one after another. Furthermore, Mi et al. (2020) conducted a survey of existing achievements on operations and aggregation operators of PLTSs.
(2) Distance measures of PLTSs. Pang et al. (2016) and Zhang et al. (2016) defined Euclidean and Hamming distance measures of PLTSs, respectively, based on the probabilities and subscripts of possible LTs. Afterwards, Wu and Liao (2018) argued that the results derived by these two distance measures (Pang et al., 2016; Zhang et al., 2016) are occasionally against human intuition. To overcome this defect, they proposed an improved distance measure. However, the calculation of this improved distance measure is very complex. Lately, Mao et al. (2019) proposed a simple Euclidean distance measure via a linguistic scale function (Gou and Xu, 2016). Unfortunately, some counter-intuitive results still appear with this Euclidean distance; please see Table 2 in Section 3.2.
(3) Entropy and cross entropy of PLTSs. Due to the uncertainty of the information carried by PLTSs, how to measure such uncertainty is important. The entropy of PLTSs, as an efficient tool to measure the uncertainty of information, is an interesting topic but has not gained wide attention. Only Lin et al. (2019) developed an information entropy of PLTSs based on the probabilities of possible LTs in a PLTS. Liu et al. (2018) proposed three kinds of entropy of PLTSs by extending the entropy of HFSs to the PLTS context, including the fuzzy entropy, hesitancy entropy and total entropy. On the other hand, the cross entropy is effective for measuring the differences between PLTSs. Up to now, only Liu and Teng (2019) have presented two cross entropy measures of PLTSs based on sine and tangent trigonometric functions.
(4) Decision methods for solving MAGDM problems with PLTSs. It is important to choose proper decision methods for selecting the best alternatives. At present, plenty of methods have been proposed to solve MAGDM problems with PLTSs. For example, Pang et al. (2016) presented a distance-based extended TOPSIS method with PLTSs. Zhang X.F. et al. (2019) argued that this extended TOPSIS only considers the distance proximity of alternatives to the ideal and negative ideal solutions, while ignoring the proximity in direction. In response to this problem, Zhang X.F. et al. (2019) developed a projection method to solve MAGDM with PLTSs. Later, by improving the classical MULTIMOORA method from different angles, Wu et al. (2018) and Liu and Li (2019) developed a PL-MULTIMOORA method and an extended MULTIMOORA method, respectively. Unfortunately, these two methods did not take negative ideal solutions into account. Considering both positive and negative ideal solutions, Li et al. (2020) put forward a PP-MULTIMOORA method. In addition, diverse types of outranking methods (Lin et al., 2019; Liao et al., 2019; Xu et al., 2019; Peng et al., 2020; Liu and Li, 2018), such as ELECTRE and PROMETHEE, have also been presented. To facilitate the study of decision methods for PLTSs, Liao et al. (2020) provided a survey covering most decision methods and their applications.
Although many achievements have been made, existing studies suffer from some limitations.
(1) Though the GPLHWA operator (Mao et al., 2019) improved the PLWA operator (Pang et al., 2016) and the PL-WAA operator (Zhang et al., 2017) to some extent, it has the flaw that its aggregated result is not a PLTS in a strict sense: the sum of the probabilities of all possible LTs in the aggregated result is greater than 1. Therefore, proposing new aggregation operators with desirable properties is helpful for aggregating the evaluation values of alternatives.
(2) With existing distance measures of PLTSs (Pang et al., 2016; Mao et al., 2019; Zhang et al., 2016), some counter-intuitive results appear. Although the results derived by the distance measure of Wu and Liao (2018) almost always agree with human intuition, the computation of this distance measure is too complex to use easily. Therefore, it is necessary to develop a new, simple distance measure of PLTSs whose computation results coincide with human intuition.
(3) Research on the entropy and cross entropy of PLTSs is scarce. To date, only Lin et al. (2019) and Liu et al. (2018) have addressed the entropy of PLTSs. However, the distinguishing power of the entropy proposed by Lin et al. (2019) is not high enough. Although the fuzzy entropy and hesitancy entropy proposed by Liu et al. (2018) can neatly measure the uncertainty of PLTSs, some properties of the fuzzy entropy are counter-intuitive and the computation of the hesitancy entropy is not simple. In addition, the cross entropy proposed by Liu and Teng (2019) fails for symmetric linguistic term sets in PLTSs. Thereby, it is valuable to study the entropy and cross entropy of PLTSs in depth.
(4) Existing decision methods are fruitful for solving decision problems with PLTSs. Nevertheless, some methods (Lin et al., 2019; Xu et al., 2019; Li et al., 2020) can only solve MADM problems, but fail for MAGDM problems. Although other methods (Pang et al., 2016; Wu et al., 2018; Zhang X.F. et al., 2019; Liu et al., 2019) are capable of solving MAGDM problems with PLTSs, DMs’ weights or attribute weights are either not considered or assigned in advance, which may result in subjectively arbitrary weights. Hence, it is interesting to seek a novel method which not only solves MAGDM problems, but also determines DMs’ weights and attribute weights objectively.
To make up for the above limitations, this paper proposes a novel method for solving MAGDM problems with PLTSs. First, two aggregation operators, the PLWAM (probabilistic linguistic weighted arithmetic mean) operator and the PLWGM (probabilistic linguistic weighted geometric mean) operator, are proposed and some of their desirable properties are studied. To measure the hesitancy degree of a PLTS, a hesitancy index of the PLTS is introduced. Then a general distance measure of PLTSs is defined to measure the deviation between two PLTSs. Considering the fact that the uncertainty of a PLTS includes both fuzziness and hesitancy, a fuzzy entropy and a hesitancy entropy of a PLTS are defined, from which a total entropy is derived to measure such uncertainties. Meanwhile, a cross entropy of PLTSs is defined to measure the distinction between PLTSs. Afterwards, by minimizing the total entropy and the cross entropy of a DM, an objective programming model is built to determine DMs’ weights. Subsequently, individual decision matrices are aggregated into a collective one by the PLWAM operator. To derive attribute weights objectively, a bi-objective program is built by minimizing the total entropy of the attribute values with respect to each attribute as well as maximizing the cross entropy between the attribute values of alternatives. By defining a new preference function in the form of PLTSs, an improved PL-PROMETHEE method is developed to rank alternatives. Thereby, a novel method is proposed for solving MAGDM problems with PLTSs. A case of car sharing platform selection is presented to show the effectiveness and advantages of the proposed method at length. The primary features of the proposed method are outlined as follows:
(1) Two new probabilistic linguistic average aggregation operators of PLTSs (i.e. the PLWAM and PLWGM operators) are proposed. A prominent characteristic of these operators is that the aggregated result is not only a PLTS whose possible LTs’ probabilities sum to 1, but is also consistent with human intuition.
(2) A new generalized distance measure of PLTSs is defined. It is worth mentioning that the hesitancy degree of a PLTS is considered in this distance measure. Thus, the new distance has a stronger distinguishing power. Moreover, a ranking approach is presented to rank PLTSs.
(3) A fuzzy entropy, a hesitancy entropy and a cross entropy of PLTSs are introduced. The fuzzy entropy has desirable properties and the computation of the hesitancy entropy is simple. Meanwhile, the cross entropy can distinguish the deviations between PLTSs with symmetric linguistic term sets.
(4) Based on entropy and cross entropy of PLTSs, distinct objective programs are established to determine DMs’ weights and attribute weights objectively. Finally, an improved PL-PROMETHEE method is developed to rank alternatives.
The remainder of this paper is organized as follows: In Section 2, some basic concepts of PLTSs are reviewed, and the PLWAM and PLWGM operators are proposed. Section 3 introduces a hesitancy index of a PLTS and then develops a generalized distance measure of PLTSs; based on this distance measure, a ranking approach is presented to sort PLTSs. Section 4 defines several types of entropy of PLTSs, including the fuzzy entropy, hesitancy entropy, total entropy and cross entropy. In Section 5, an improved PL-PROMETHEE method is developed to solve MAGDM problems with PLTSs. Section 6 provides a case study of car sharing platform selection to illustrate the application of the proposed method; furthermore, comparison analyses are conducted to show the advantages of the proposed method. Some conclusions are drawn in Section 7.
5 A Novel Method for MAGDM with PLTSs
In this section, we first describe the problems of MAGDM with PLTSs, and then propose a novel method for solving such problems.
5.1 Problem Description
Let $A=\{{A_{1}},{A_{2}},\dots ,{A_{m}}\}$ be the set of m feasible alternatives, $U=\{{u_{1}},{u_{2}},\dots ,{u_{n}}\}$ be the set of attributes whose weights are $\boldsymbol{W}={({w_{1}},{w_{2}},\dots ,{w_{n}})^{\mathrm{T}}}$ with ${\textstyle\sum _{j=1}^{n}}{w_{j}}=1$, and $E=\{{e_{1}},{e_{2}},\dots ,{e_{t}}\}$ be the set of DMs whose weights are $\boldsymbol{\lambda }={({\lambda _{1}},{\lambda _{2}},\dots ,{\lambda _{t}})^{\mathrm{T}}}$ satisfying ${\textstyle\sum _{k=1}^{t}}{\lambda _{k}}=1$. DM ${e_{k}}$ ($k=1,2,\dots ,t$) provides his/her evaluation of alternative ${A_{i}}$ with respect to attribute ${u_{j}}$ in the form of the PLTS ${L_{ij}^{k}}(p)$ ($i=1,2,\dots ,m$; $j=1,2,\dots ,n$; $k=1,2,\dots ,t$) based on the linguistic term set $S=\{{s_{\alpha }}|\alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$. All evaluations constitute the decision matrices ${\boldsymbol{U}_{k}}={({L_{ij}^{k}}(p))_{m\times n}}$ $(k=1,2,\dots ,t)$. Denote the normalized ordered decision matrices by ${\bar{\boldsymbol{U}}_{k}}={({\bar{L}_{ij}^{k}}(p))_{m\times n}}$ $(k=1,2,\dots ,t)$.
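Since the entries of the matrices ${\boldsymbol{U}_{k}}$ are PLTSs over $S$, it may help to fix a concrete representation before the weighting steps. The sketch below is purely illustrative (the class name, the tuple encoding and the normalization convention are assumptions, not the paper's notation): a term $s_{\alpha}$ with probability $p$ is stored as a pair, probabilities that do not sum to 1 are rescaled, and terms are ordered, mirroring the role of the normalized ordered matrices ${\bar{\boldsymbol{U}}_{k}}$.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PLTS:
    """Pairs (alpha, p): linguistic term s_alpha with probability p, alpha in [-tau, tau]."""
    terms: List[Tuple[int, float]]
    tau: int

    def normalized_ordered(self) -> "PLTS":
        """Rescale probabilities to sum to 1 and order terms by subscript.
        This is one common normalization convention; the paper's Definitions 4-6
        may order terms differently."""
        total = sum(p for _, p in self.terms)
        scaled = sorted(((a, p / total) for a, p in self.terms), key=lambda t: t[0])
        return PLTS(list(scaled), self.tau)

# {s2(0.3), s4(0.3), s3(0.2)} with incomplete probability information (sum = 0.8)
L = PLTS([(2, 0.3), (4, 0.3), (3, 0.2)], tau=4).normalized_ordered()
```

After normalization, the probabilities sum to 1 and the terms are sorted, so distance, entropy and cross entropy computations can iterate over aligned term lists.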
5.2 Determine DMs’ Weights Based on the Total Entropy and the Symmetric Cross Entropy
In this subsection, an approach is developed to determine DMs’ weights objectively using the proposed total entropy and symmetric cross entropy of PLTSs. As we know, the less uncertainty (i.e. the smaller the total entropy) of an individual matrix provided by a DM, the better the quality of the decision information reflected by this matrix, and thus the larger the weight that should be assigned to this DM. In virtue of this criterion, a programming model for deriving DMs’ weights is built by minimizing the total entropy of decision matrices, i.e.
where ${E_{T}}({\bar{\boldsymbol{U}}_{k}})$ is the total entropy of ${\bar{\boldsymbol{U}}_{k}}$ $(k=1,2,\dots ,t)$.
Solving Eq. (27) with the Lagrange multiplier approach, one obtains
Normalizing ${\lambda _{1}^{\ast k}}$, one has
On the other hand, a closer degree of a DM to the other DMs means that the information supplied by this DM is much closer to that of the group. In this case, this DM should be assigned a larger weight. From this viewpoint, another optimization model for determining DMs’ weights is constructed based on the symmetric cross entropy as
Eq. (30) can also be solved with the Lagrange multiplier approach, and the optimal solutions are derived as
Normalizing ${\lambda _{2}^{\ast k}}$, one gets
Combining Eq. (29) and Eq. (32), the ultimate DMs’ weights ${\lambda _{k}}$ are determined as
where $\beta \in [0,1]$ is a compromise coefficient.
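The two weighting rules can be sketched numerically. Since Eqs. (27)–(33) are not reproduced here, the closed forms below are assumptions rather than the paper's exact expressions: rule 1 rewards a low total entropy (weight proportional to $1-E_{T}$), rule 2 rewards closeness to the group (weight inversely proportional to a DM's summed symmetric cross entropy to the others), and the two normalized weight vectors are combined with the compromise coefficient $\beta$ as in Eq. (33).

```python
def dm_weights(total_entropy, cross_entropy, beta=0.5):
    """total_entropy[k]: E_T of DM k's normalized matrix.
    cross_entropy[k][d]: symmetric cross entropy D between DMs k and d.
    The closed forms below are assumed sketches of Eqs. (29) and (32)."""
    t = len(total_entropy)
    # Rule 1: smaller total entropy -> larger weight (assumed form 1 - E_T)
    l1 = [1.0 - e for e in total_entropy]
    s1 = sum(l1)
    l1 = [x / s1 for x in l1]
    # Rule 2: smaller divergence from the group -> larger weight (assumed inverse form)
    div = [sum(cross_entropy[k][d] for d in range(t) if d != k) for k in range(t)]
    l2 = [1.0 / d for d in div]
    s2 = sum(l2)
    l2 = [x / s2 for x in l2]
    # Eq. (33): convex combination with compromise coefficient beta in [0, 1]
    return [beta * a + (1.0 - beta) * b for a, b in zip(l1, l2)]

E = [0.2, 0.5, 0.8]                                # toy total entropies
D = [[0, 0.1, 0.2], [0.1, 0, 0.3], [0.2, 0.3, 0]]  # toy symmetric cross entropies
lam = dm_weights(E, D, beta=0.5)
```

Because both normalized vectors sum to 1, their convex combination is again a valid weight vector for any $\beta \in [0,1]$.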
5.3 Construct Bi-Objective Programs for Deriving Attribute Weights
In the group decision making process, attribute weights play an important role because different attribute weights may result in diverse rankings of alternatives. In this subsection, considering that the information on attribute weights may be completely unknown or only partially known, two bi-objective programs are constructed, respectively, for deriving attribute weights.
(i) Aggregating individual decision matrices into a collective one
By employing the DMs’ weights determined in Section 5.2, individual decision matrices are integrated into a collective one with the proposed PLWAM operator. Denote the collective decision matrix by $\boldsymbol{U}={({L_{ij}}(p))_{m\times n}}$, where
Then, the normalized ordered collective decision matrix $\bar{\boldsymbol{U}}={({\bar{L}_{ij}}(p))_{m\times n}}$ is obtained from $\boldsymbol{U}={({L_{ij}}(p))_{m\times n}}$ by Definition 4, where ${\bar{L}_{ij}}(p)=\{{\bar{L}_{ij}^{(k)}}({\bar{p}_{ij}^{(k)}})|{\bar{L}_{ij}^{(k)}}\in S,\hspace{2.5pt}{\bar{p}_{ij}^{(k)}}\geqslant 0,\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }{\bar{L}_{ij}}(p)\}$.
(ii) Constructing bi-objective programs for deriving attribute weights
It is known that an attribute plays a more important role if the performance values of alternatives on it have distinct differences. Thus, such an attribute should be given a bigger weight. Conversely, if the evaluation values of alternatives with respect to a certain attribute differ little, this attribute should be given a smaller weight. In addition, the credibility of the decision information on an attribute should also be taken into account when determining attribute weights: the more credible the evaluation values on an attribute, the bigger the weight that should be assigned to this attribute. As mentioned before, the symmetric cross entropy and the total entropy can reflect the differences between alternatives and the credibility of evaluation values, respectively. Keeping this idea in mind, we can determine attribute weights by maximizing the symmetric cross entropy as well as minimizing the total entropy. Thus, a bi-objective program is established when the information on attribute weights is completely unknown, i.e.
As $0\leqslant {E_{T}}({\bar{L}_{ij}}(p))\leqslant 1$, Eq. (35) can be converted into the following single-objective program:
Solving Eq. (36) with the Lagrange multiplier approach, the weights of attributes are derived as
The normalized weights ${w_{j}}$ are obtained by normalizing ${w_{j}^{\ast }}$, i.e.
Similarly, for situations where the information on attribute weights is incomplete, another programming model is obtained by modifying Eq. (36), i.e.
where Ω represents the incomplete attribute weight information. Solving Eq. (39) with the Lingo software, the corresponding optimal solution vector ${\boldsymbol{w}^{\ast }}={({w_{1}^{\ast }},{w_{2}^{\ast }},\dots ,{w_{n}^{\ast }})^{\mathrm{T}}}$ is generated and then normalized as the attribute weight vector $\boldsymbol{w}={({w_{1}},{w_{2}},\dots ,{w_{n}})^{\mathrm{T}}}$.
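For the completely unknown case, the scalarized program admits a simple numerical sketch. Since Eqs. (35)–(38) are not reproduced here, the score below is an assumed scalarization, not the paper's analytic solution: each attribute is rewarded for high inter-alternative divergence (summed symmetric cross entropy in its column) and for low uncertainty ($1-E_{T}$), and the scores are normalized as in Eq. (38). The incomplete-information case of Eq. (39) would instead require a constrained solver (the paper uses Lingo).

```python
def attribute_weights(col_entropy, col_divergence):
    """col_entropy[j]: average total entropy E_T of the collective values in column j.
    col_divergence[j]: summed symmetric cross entropy D between alternatives on
    attribute j. score_j = divergence_j + (1 - entropy_j) is an assumed
    scalarization of the bi-objective program; Eq. (37) may take another form."""
    score = [d + (1.0 - e) for d, e in zip(col_divergence, col_entropy)]
    s = sum(score)
    return [x / s for x in score]  # Eq. (38)-style normalization

# toy columns: attribute 1 is less uncertain and more discriminating than attribute 2
w = attribute_weights(col_entropy=[0.3, 0.7], col_divergence=[0.9, 0.4])
```

As intended by the criterion above, the attribute with more credible (lower-entropy) and more discriminating (higher-cross-entropy) values receives the larger weight.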
5.4 Ranking Alternatives by an Improved PL-PROMETHEE Method
The PROMETHEE method is a popular outranking method in decision making. It has been extended to different fuzzy environments, such as intuitionistic fuzzy sets (Krishankumar et al., 2017) and linguistic variables (Halouani et al., 2009). In the PLTS environment, although Xu et al. (2019) proposed a PL-PROMETHEE method, its distinguishing power is not high, which is verified by Example 3 in the sequel. To improve the distinguishing power, this subsection constructs new probabilistic linguistic preference functions, an integrated preference index and positive/negative outranking flows to adapt the main framework of PROMETHEE to the PLTS context. Thus, an improved PL-PROMETHEE method is proposed to solve MAGDM problems with PLTSs.
(i) The probabilistic linguistic preference function
Suppose that ${A_{i}}$ and ${A_{r}}$ are two alternatives in the alternative set, and that ${\bar{L}_{ij}}(p)$ and ${\bar{L}_{rj}}(p)$ are the normalized ordered attribute values of ${A_{i}}$ and ${A_{r}}$, respectively, with respect to attribute ${u_{j}}$. The preference function ${P_{X}^{j}}({A_{i}},{A_{r}})$ of ${\bar{L}_{ij}}(p)$ with respect to ${\bar{L}_{rj}}(p)$ is defined as
where
In Eq. (41), the parameters $p(\geqslant 0)$ and $q(\geqslant 0)$ are the indifference threshold and the strict preference threshold, respectively. The functions $g(x)$ and ${g^{-1}}(x)$ are defined in Definition 2.
Obviously, the preference function ${P_{X}^{j}}({A_{i}},{A_{r}})$ constructed by Eqs. (40) and (41) is a PLTS.
(ii) The integrated preference index
According to the probabilistic linguistic preference function ${P_{X}^{j}}({A_{i}},{A_{r}})$, an integrated preference index is defined as
where $T({P_{X}^{j}}({A_{i}},{A_{r}}))$ is the closeness degree of ${P_{X}^{j}}({A_{i}},{A_{r}})$, computed by Eq. (16).
(iii) The positive and negative outranking flows
Employing the integrated preference indices, the positive and negative outranking flows, ${\phi ^{+}}({A_{i}})$ and ${\phi ^{-}}({A_{i}})$, are defined as follows.
(iv) Ranking alternatives.
In virtue of Eqs. (43) and (44), the net flow of alternative ${A_{i}}$ is calculated as
Finally, alternatives are ranked in descending order of their net flows.
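Given the matrix of integrated preference indices, the flow and ranking computation can be sketched directly. The $1/(m-1)$ scaling is the standard PROMETHEE II convention and is assumed here for Eqs. (43)–(45); the index values in the toy matrix are illustrative, not taken from the paper.

```python
def outranking_flows(pi):
    """pi[i][r]: integrated preference index pi_X(A_i, A_r); diagonal is ignored.
    Returns net flows and the ranking of alternatives by descending net flow."""
    m = len(pi)
    # Positive flow: how strongly A_i outranks the others (assumed Eq. (43) form)
    phi_plus = [sum(pi[i][r] for r in range(m) if r != i) / (m - 1) for i in range(m)]
    # Negative flow: how strongly A_i is outranked (assumed Eq. (44) form)
    phi_minus = [sum(pi[r][i] for r in range(m) if r != i) / (m - 1) for i in range(m)]
    net = [a - b for a, b in zip(phi_plus, phi_minus)]   # Eq. (45)
    ranking = sorted(range(m), key=lambda i: net[i], reverse=True)
    return net, ranking

# toy 3x3 preference index matrix
pi = [[0, 0.455, 0.306],
      [0.10, 0, 0.20],
      [0.15, 0.30, 0]]
net, rk = outranking_flows(pi)
```

A useful sanity check on any implementation is that the net flows always sum to zero, since every $\pi$ entry enters one positive and one negative flow with opposite signs.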
Remark 7.
In the PL-PROMETHEE method (Xu et al., 2019), the preference function and the total preference index are defined respectively as
and
where $P({A_{i,j}}>{A_{r,j}})$ is the probability that ${A_{i}}$ is preferred to ${A_{r}}$ with respect to ${u_{j}}$.
Obviously, the value of the preference function ${P_{j}}({A_{i}},{A_{r}})$ in Eq. (46) only takes 0 or 1, which may result in the loss of decision information. However, the preference function ${P_{X}^{j}}({A_{i}},{A_{r}})$ in Eq. (40) is described by a PLTS, which is more flexible than a crisp number for representing decision information. Therefore, compared with PL-PROMETHEE, the distinguishing power of the improved PL-PROMETHEE is stronger and its sensitivity to the ratings of alternatives on attributes is increased. To illustrate this advantage, Example 3 is given below.
Example 3.
For convenience, we consider only one attribute of a real problem. Suppose that the ratings of three alternatives on this attribute are ${L_{1}}(p)=\{{s_{2}}(0.35),{s_{3}}(0.25),{s_{4}}(0.40)\}$, ${L_{2}}(p)=\{{s_{-3}}(0.70),{s_{-2}}(0.30)\}$ and ${L_{3}}(p)=\{{s_{1}}(0.40),{s_{2}}(0.25),{s_{4}}(0.35)\}$, respectively. Intuitively, the relation ${L_{1}}(p)\succ {L_{3}}(p)\succ {L_{2}}(p)$ holds, where the symbol “≻” means “preferred to”. Hence, it is deduced that the degree to which ${L_{1}}(p)$ is preferred to ${L_{2}}(p)$ should be greater than the degree to which ${L_{1}}(p)$ is preferred to ${L_{3}}(p)$. In fact, taking $q=0$ and $p=0.5$ in the proposed preference function (i.e. Eq. (41)), the preference values are calculated as ${P_{X}^{1}}({A_{1}},{A_{2}})=\{{s_{-1.9}}(0.175),{s_{-1.7}}(0.075),{s_{-1.2}}(0.245),{s_{-1}}(0.105),{s_{1}}(0.280),{s_{1.2}}(0.120)\}$ and ${P_{X}^{1}}({A_{1}},{A_{3}})=\{{s_{-4}}(0.31),{s_{-3.8}}(0.140),{s_{-3.5}}(0.063),{s_{-3.2}}(0.14),{s_{-2.8}}(0.088),{s_{-1.6}}(0.160),{s_{-0.6}}(0.100)\}$. The preference indices are calculated as ${\pi _{X}}({A_{1}},{A_{2}})=0.455>{\pi _{X}}({A_{1}},{A_{3}})=0.306$. This result is in line with human intuition. However, using the preference function and preference index of Xu et al. (2019) (i.e. Eqs. (46) and (47)), one obtains ${P_{1}}({A_{1}},{A_{2}})={P_{1}}({A_{1}},{A_{3}})=1$ and ${\pi _{12}}={\pi _{13}}=1$, which is not consistent with the above analysis. In other words, the method of Xu et al. (2019) has no ability to distinguish alternatives ${A_{2}}$ and ${A_{3}}$. Therefore, the improved PL-PROMETHEE method has a stronger distinguishing power than the method of Xu et al. (2019). Furthermore, the sensitivity of the improved PL-PROMETHEE method is also stronger. For example, when ${L_{2}}(p)$ is increased to ${L^{\prime }_{2}}(p)=\{{s_{2}}(0.80),{s_{3}}(0.20)\}$ while ${L_{1}}(p)$ and ${L_{3}}(p)$ remain unchanged, the corresponding preference index ${\pi _{X}}({A_{1}},{A_{2}})$ decreases from 0.455 to 0.271, whereas ${\pi _{12}}$ does not change.
5.5 A Novel Method for MAGDM with Probabilistic Linguistic Information
A novel method is now presented for MAGDM problems with PLTSs. The main procedure of this method is outlined below.
Step 1. Each DM establishes his/her individual probabilistic linguistic decision matrix ${\boldsymbol{U}_{k}}={({L_{ij}^{k}}(p))_{m\times n}}$ ($k=1,2,\dots ,t$). Then, the matrices ${\boldsymbol{U}_{k}}$ are transformed into the corresponding normalized ordered matrices ${\bar{\boldsymbol{U}}_{k}}={({\bar{L}_{ij}^{k}}(p))_{m\times n}}$ $(k=1,2,\dots ,t)$ by Definitions 4–6.
Step 2. Calculate the total entropy ${E_{T}}({\bar{\boldsymbol{U}}_{k}})$ ($k=1,2,\dots ,t$) and the symmetric cross entropy $D({\bar{\boldsymbol{U}}_{k}},{\bar{\boldsymbol{U}}_{\delta }})$ ($k,\delta =1,2,\dots ,t$; $k\ne \delta $) by Eqs. (18), (19), (21), (24) and (25).
Step 3. Determine the weight vector of DMs $\boldsymbol{\lambda }={({\lambda _{1}},{\lambda _{2}},\dots ,{\lambda _{t}})^{\mathrm{T}}}$ by Eqs. (29), (32) and (33).
Step 4. Aggregate all matrices ${\bar{\boldsymbol{U}}_{k}}={({\bar{L}_{ij}^{k}}(p))_{m\times n}}$ ($k=1,2,\dots ,t$) into a collective one $\boldsymbol{U}={({L_{ij}}(p))_{m\times n}}$ by the PLWAM operator (Eq. (4)), and then transform the matrix $\boldsymbol{U}$ into the normalized ordered matrix $\bar{\boldsymbol{U}}={({\bar{L}_{ij}}(p))_{m\times n}}$.
Step 5. Calculate the total entropy ${E_{T}}({\bar{L}_{ij}}(p))$ ($i=1,2,\dots ,m$; $j=1,2,\dots ,n$) and the symmetric cross entropy $D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))$ ($i,r=1,2,\dots ,m$; $r\ne i$; $j=1,2,\dots ,n$) by Eqs. (18), (19), (21), (24) and (25).
Step 6. Determine the weight vector of attributes $\boldsymbol{w}={({w_{1}},{w_{2}},\dots ,{w_{n}})^{\mathrm{T}}}$ by Eq. (38) or Eq. (39).
Step 7. Compute the preference function values ${P_{X}^{j}}({A_{i}},{A_{r}})$ ($i,r=1,2,\dots ,m$; $r\ne i$; $j=1,2,\dots ,n$) by Eqs. (40) and (41).
Step 8. Obtain the integrated preference index matrix $\boldsymbol{\pi }={({\pi _{X}}({A_{i}},{A_{r}}))_{m\times m}}$ by Eq. (42).
Step 9. Calculate the positive and negative outranking flows ${\phi ^{+}}({A_{i}})$ and ${\phi ^{-}}({A_{i}})$ ($i=1,2,\dots ,m$) by Eqs. (43) and (44).
Step 10. Determine the net flows $\phi ({A_{i}})$ ($i=1,2,\dots ,m$) by Eq. (45). Alternatives are ranked in descending order of $\phi ({A_{i}})$ ($i=1,2,\dots ,m$).