
An Entropy-Based Method for Probabilistic Linguistic Group Decision Making and its Application to Selecting Car Sharing Platforms
Volume 31, Issue 3 (2020), pp. 621–658
Gai-li Xu   Shu-Ping Wan   Jiu-Ying Dong  

https://doi.org/10.15388/20-INFOR423
Pub. online: 15 July 2020      Type: Research Article      Open Access

Received: 1 February 2020
Accepted: 1 June 2020
Published: 15 July 2020

Abstract

As tourism and the mobile internet develop, car sharing is becoming more and more popular. How to select an appropriate car sharing platform is an important issue for tourists. The car sharing platform selection can be regarded as a kind of multi-attribute group decision making (MAGDM) problem. The probabilistic linguistic term set (PLTS) is a powerful tool for expressing tourists’ evaluations in car sharing platform selection. This paper develops a probabilistic linguistic group decision making method for selecting a suitable car sharing platform. First, two aggregation operators of PLTSs are proposed. Subsequently, a fuzzy entropy and a hesitancy entropy of a PLTS are developed to measure the fuzziness and hesitancy of a PLTS, respectively. Combining the fuzzy entropy and hesitancy entropy, a total entropy of a PLTS is generated. Furthermore, a cross entropy between PLTSs is proposed as well. Using the total entropy and cross entropy, decision makers’ (DMs’) weights and attribute weights are determined, respectively. By defining preference functions with PLTSs, an improved PL-PROMETHEE approach is developed to rank alternatives. Thereby, a novel method is proposed for solving MAGDM with PLTSs. A car sharing platform selection is examined at length to show the application and superiority of the proposed method.

1 Introduction

With the development of the mobile internet and “Internet+”, the sharing economy, including transport sharing, health-care sharing and food sharing, has boomed in recent years. Car sharing, a new mode of transport sharing, is becoming more and more popular along with the development of tourism. People usually complete car rentals and provide feedback on their consumer experiences through sharing platforms. Therefore, how to select a suitable car sharing platform is important for tourists. Since car sharing platforms are often evaluated in terms of safety, convenience, car brand and so on, the selection of a car sharing platform can be described as a kind of multi-attribute decision making problem. Ordinarily, several tourists travel together, so more than one tourist (decision maker, DM for short) decides which car sharing platform to select. Thereby, the car sharing platform selection can be considered as a kind of multi-attribute group decision making (MAGDM) problem (Xu et al., 2016; Wan et al., 2015; Dong et al., 2018; Kou et al., 2020; Zhang et al., 2019).
Owing to the complexity of problems and the vagueness of human thinking, it is difficult for DMs to describe decision information as crisp numbers. DMs are more apt to use linguistic variables (Zadeh, 1975) to evaluate alternatives. However, a linguistic variable allows DMs to express decision information with only one linguistic term (LT). Sometimes, DMs cannot adequately describe their evaluations using one exact LT and hesitate among several LTs. To overcome this drawback of the linguistic variable, Rodríguez et al. (2012) introduced the hesitant fuzzy linguistic term set (HFLTS), which permits DMs to express their preferences on alternatives with several possible LTs. Nevertheless, all possible LTs in an HFLTS have the same importance. In fact, DMs may prefer one LT to others. Thus, some useful information may be lost (Pang et al., 2016). To make up for this limitation, Pang et al. (2016) generalized the HFLTS and presented the probabilistic linguistic term set (PLTS), in which each possible LT is assigned a probability (weight). Hence, the PLTS contains more useful information than the HFLTS. In recent years, the PLTS has gained increasing attention and some research achievements have been obtained. These achievements can be roughly divided into four categories.
(1) Operations and aggregation operators of PLTSs. Operations and aggregation operators of PLTSs are important for aggregating decision information in the PLTS setting. Pang et al. (2016) first defined some operational laws of PLTSs, and then put forward the PLWA and PLWG operators of PLTSs. Subsequently, Zhang et al. (2017) pointed out that the result of the operations in Pang et al. (2016) is a linguistic term rather than a PLTS. To remedy this defect, Zhang et al. (2017) defined new operations of PLTSs. Nevertheless, the results of these new operations may exceed the bounds of LTSs. To overcome this limitation, Gou and Xu (2016) introduced novel operations of PLTSs based on a linguistic scale function. Later, using this function, Mao et al. (2019) introduced new operational laws and developed the GPLHWA and GPLHOWA operators based on Archimedean t-norms and s-norms. Zhang (2018) investigated large group decision making and proposed a PL-WAA operator. Recently, diverse types of operators, such as power operators (Liu et al., 2019) and dependent weighted average operators (Liu et al., 2019), have been proposed. Furthermore, Mi et al. (2020) surveyed existing achievements on operations and aggregation operators of PLTSs.
(2) Distance measures of PLTSs. Pang et al. (2016) and Zhang et al. (2016) defined Euclidean and Hamming distance measures of PLTSs, respectively, based on the probabilities and subscripts of possible LTs. Afterwards, Wu and Liao (2018) argued that the results derived by these two distance measures (Pang et al., 2016; Zhang et al., 2016) are occasionally against human intuition. To overcome this defect, they proposed an improved distance measure. However, the calculation of this improved distance measure is very complex. Lately, Mao et al. (2019) proposed a simple Euclidean distance measure based on a linguistic scale function (Gou and Xu, 2016). Unfortunately, some counterintuitive results still arise with this Euclidean distance; see Table 2 in Section 3.2.
(3) Entropy and cross entropy of PLTSs. Due to the uncertainty of information with PLTSs, how to measure such uncertainty is important. Entropy of PLTSs, as an efficient tool for measuring the uncertainty of information, is an interesting topic but has not gained wide attention. Lin et al. (2019) developed an information entropy of PLTSs based on the probabilities of the possible LTs in a PLTS. Liu et al. (2018) proposed three kinds of entropy of PLTSs by extending the entropy of HFSs into the PLTS context: the fuzzy entropy, the hesitancy entropy and the total entropy. On the other hand, the cross entropy is effective for measuring the differences between PLTSs. Up to now, only Liu and Teng (2019) have presented cross entropy measures of PLTSs, namely two measures based on the sine and tangent trigonometric functions.
(4) Decision methods for solving MAGDM problems with PLTSs. It is important to choose proper decision methods for selecting the best alternatives. At present, numerous methods have been proposed to solve MAGDM problems with PLTSs. For example, Pang et al. (2016) presented a distance-based extended TOPSIS method with PLTSs. Zhang X.F. et al. (2019) argued that this extended TOPSIS only considered the distance proximity of alternatives with respect to the ideal and negative ideal solutions, while ignoring the proximity in direction. In response, Zhang X.F. et al. (2019) developed a projection method to solve MAGDM with PLTSs. Later, by improving the classical MULTIMOORA method from different angles, Wu et al. (2018) and Liu and Li (2019) developed a PL-MULTIMOORA method and an extended MULTIMOORA method, respectively. Unfortunately, these two methods did not take negative ideal solutions into account. Considering both positive and negative ideal solutions, Li et al. (2020) put forward a PP-MULTIMOORA method. In addition, diverse types of outranking methods (Lin et al., 2019; Liao et al., 2019; Xu et al., 2019; Peng et al., 2020; Liu and Li, 2018), such as ELECTRE and PROMETHEE, have also been presented. To facilitate the study of decision methods with PLTSs, Liao et al. (2020) provided a survey covering most decision methods and their applications.
Although much has been achieved, existing studies suffer from some limitations.
(1) Though the GPLHWA operator (Mao et al., 2019) improved the PLWA operator (Pang et al., 2016) and the PL-WAA operator (Zhang, 2018) to some extent, it has a flaw: the aggregated result it produces is not a PLTS in a strict sense, because the sum of the probabilities of all possible LTs in the aggregated result is more than 1. Therefore, proposing new aggregation operators with desirable properties is helpful for aggregating the evaluation values of alternatives.
(2) With existing distance measures of PLTSs (Pang et al., 2016; Mao et al., 2019; Zhang et al., 2016), some counter-intuitive results appear. Although the results derived by the distance measure of Wu and Liao (2018) largely agree with human intuition, its computation is too complex for easy use. Therefore, it is necessary to develop a new, simple distance measure of PLTSs whose results coincide with human intuition.
(3) Research on the entropy and cross entropy of PLTSs is scarce. To date, only Lin et al. (2019) and Liu et al. (2018) have addressed the entropy of PLTSs. However, the distinguishing power of the entropy proposed by Lin et al. (2019) is not high enough. Although the fuzzy entropy and hesitancy entropy proposed by Liu et al. (2018) can neatly measure the uncertainty of PLTSs, some properties of the fuzzy entropy are counter-intuitive and the computation of the hesitancy entropy is not simple. In addition, the cross entropy proposed by Liu and Teng (2019) fails for symmetric linguistic term sets in PLTSs. Thereby, it is worthwhile to study the entropy and cross entropy of PLTSs in depth.
(4) Existing decision methods are fruitful for solving decision problems with PLTSs. Nevertheless, some methods (Lin et al., 2019; Xu et al., 2019; Li et al., 2020) can only solve MADM problems, and fail for MAGDM problems. Although other methods (Pang et al., 2016; Wu et al., 2018; Zhang X.F. et al., 2019; Liu et al., 2019) are capable of solving MAGDM problems with PLTSs, DMs’ weights or attribute weights are either not considered or assigned in advance, which may result in subjective, arbitrary weights. Hence, it is interesting to seek a novel method which not only solves MAGDM problems, but also determines DMs’ weights and attribute weights objectively.
To make up for the above limitations, this paper proposes a novel method for solving MAGDM problems with PLTSs. First, two aggregation operators, the PLWAM (probabilistic linguistic weighted arithmetic mean) operator and the PLWGM (probabilistic linguistic weighted geometric mean) operator, are proposed and some of their desirable properties are studied. To measure the hesitancy degree of a PLTS, a hesitancy index of the PLTS is introduced. Then a general distance measure of PLTSs is defined to measure the deviation between two PLTSs. Considering the fact that the uncertainty of a PLTS comprises fuzziness and hesitancy, a fuzzy entropy and a hesitancy entropy of a PLTS are defined, and a total entropy of a PLTS is then derived to measure such uncertainties. Meanwhile, a cross entropy of PLTSs is defined to measure the distinction between PLTSs. Afterwards, by minimizing the total entropy and the cross entropy of a DM, an objective programming model is built to determine DMs’ weights. Subsequently, the individual decision matrices are aggregated into a collective one by the PLWAM operator. To derive attribute weights objectively, a bi-objective program is built by minimizing the total entropy of the attribute values with respect to each attribute while maximizing the cross entropy between the attribute values of alternatives. By defining a new preference function in the form of PLTSs, an improved PL-PROMETHEE method is developed to rank alternatives. Thereby, a novel method is proposed for solving MAGDM problems with PLTSs. A case of car sharing platform selection is examined at length to show the effectiveness and advantages of the proposed method. The primary features of the proposed method are outlined as follows:
(1) Two new probabilistic linguistic average aggregation operators of PLTSs (i.e. the PLWAM and PLWGM operators) are proposed. A prominent characteristic of them is that the aggregated result is not only a PLTS in which the probabilities of the possible LTs sum to 1, but is also consistent with human intuition.
(2) A new generalized distance measure of PLTSs is defined. It is worth mentioning that the hesitancy degree of a PLTS is considered in this distance measure. Thus, the new distance has a stronger distinguishing power. Moreover, a ranking approach is presented to rank PLTSs.
(3) A fuzzy entropy, a hesitancy entropy and a cross entropy of PLTSs are introduced. The fuzzy entropy has desirable properties and the computation of the hesitancy entropy is simple. Meanwhile, the cross entropy can distinguish the deviations between PLTSs with symmetric linguistic term sets.
(4) Based on entropy and cross entropy of PLTSs, distinct objective programs are established to determine DMs’ weights and attribute weights objectively. Finally, an improved PL-PROMETHEE method is developed to rank alternatives.
The remainder of this paper is organized as follows: In Section 2, some basic concepts of PLTSs are reviewed. Moreover, PLWAM and PLWGM operators are proposed. Section 3 introduces a hesitancy index of a PLTS and then develops a generalized distance measure of PLTSs. Based on this distance measure, a ranking approach is presented to sort PLTSs. Section 4 defines several types of entropy of PLTSs, including the fuzzy entropy, hesitancy entropy, total entropy and cross entropy of PLTSs. In Section 5, an improved PL-PROMETHEE method is developed to solve MAGDM problems with PLTSs. Section 6 provides a case study of car sharing platform selection to illustrate the application of the proposed method. Furthermore, comparison analyses are conducted to show advantages of the proposed method. Some conclusions are made in Section 7.

2 Preliminaries

In this section, some definitions and notions related to the PLTS are reviewed. Furthermore, two aggregation operators of PLTSs are proposed and some desirable properties of them are investigated.
Definition 1 (See Xu, 2005).
Let $S=\{{s_{\alpha }}|\alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$ be a finite and totally ordered discrete LTS, where ${s_{\alpha }}$ represents a possible value for a linguistic term, and τ is a positive integer. Especially, the mid-linguistic label ${s_{0}}$ represents an assessment of “indifference”, and the rest of them are placed symmetrically around it. ${s_{-\tau }}$ and ${s_{\tau }}$ are lower and upper bounds of linguistic labels.
To preserve all given linguistic information, Xu (2004) extended the discrete LTS S into a continuous LTS $\bar{S}=\{{s_{\alpha }}|\alpha \in [-\tau ,\tau ]\}$. If ${s_{\alpha }}\in S$, then ${s_{\alpha }}$ is called an original linguistic term; otherwise, ${s_{\alpha }}$ is called a virtual linguistic term.
Definition 2 (See Gou and Xu, 2016).
Let $S=\{{s_{\alpha }}|\alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$ be an LTS. The linguistic term ${s_{\alpha }}$ that expresses the equivalent information to the membership degree γ is obtained by a linguistic scale function g:
\[ g:[{s_{-\tau }},{s_{\tau }}]\to [0,1],\hspace{2em}g({s_{\alpha }})=\frac{\alpha +\tau }{2\tau }=\gamma .\]
Additionally, the membership degree γ that expresses the equivalent information to the linguistic term ${s_{\alpha }}$ is obtained by the following function
\[ {g^{-1}}:[0,1]\to [{s_{-\tau }},{s_{\tau }}],\hspace{2em}{g^{-1}}(\gamma )={s_{(2\gamma -1)\tau }}={s_{\alpha }}.\]
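For concreteness, the following minimal Python sketch implements g and its inverse for the seven-term LTS with $\tau =3$ used in later examples; the function names are ours, not the paper’s.
```python
# Linguistic scale function g and its inverse g^{-1} (Definition 2), with tau = 3.
TAU = 3

def g(alpha):
    """Map the subscript alpha of s_alpha to a membership degree in [0, 1]."""
    return (alpha + TAU) / (2 * TAU)

def g_inv(gamma):
    """Map a membership degree back to the subscript of a (possibly virtual) term."""
    return (2 * gamma - 1) * TAU

print(g(-TAU), g(0), g(TAU))          # 0.0 0.5 1.0
print(g(2))                           # 0.8333..., i.e. g(s_2) = 5/6
assert abs(g_inv(g(2)) - 2) < 1e-12   # round trip on an original term
```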
Definition 3 (See Pang et al., 2016).
Let $S=\{{s_{\alpha }}|\alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$ be a linguistic term set, a PLTS is defined as
\[ L(p)=\Bigg\{{L^{(k)}}\big({p^{(k)}}\big)\big|{L^{(k)}}\in S,{p^{(k)}}\geqslant 0,\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }L(p),\hspace{2.5pt}{\sum \limits_{k=1}^{\mathrm{\# }L(p)}}{p^{(k)}}\leqslant 1\Bigg\},\]
where ${L^{(k)}}({p^{(k)}})$ represents the linguistic term ${L^{(k)}}$ associated with the probability ${p^{(k)}}$, and $\mathrm{\# }L(p)$ is the number of all different linguistic terms in $L(p)$.
For a PLTS $L(p)$ with ${\textstyle\sum _{k=1}^{\mathrm{\# }L(p)}}{p^{(k)}}\leqslant 1$, Pang et al. (2016) gave a normalizing method.
Definition 4 (See Pang et al., 2016).
Given a PLTS $L(p)$ with ${\textstyle\sum _{k=1}^{\mathrm{\# }L(p)}}{p^{(k)}}\leqslant 1$, the normalized PLTS $\dot{L}(p)$ is defined as:
\[ \dot{L}(p)=\big\{{L^{(k)}}\big({\dot{p}^{(k)}}\big)\big|k=1,2,\dots ,\mathrm{\# }L(p)\big\},\]
where ${\dot{p}^{(k)}}={p^{(k)}}/{\textstyle\sum _{k=1}^{\mathrm{\# }L(p)}}{p^{(k)}}$ for all $k=1,2,\dots ,\mathrm{\# }L(p)$.
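A small sketch of Definition 4’s normalization; here and below a PLTS is encoded as a list of (subscript, probability) pairs, an encoding of our own choosing.
```python
# Normalize a PLTS whose probabilities sum to at most 1 (Definition 4).
def normalize(plts):
    total = sum(p for _, p in plts)
    return [(r, p / total) for r, p in plts]

L = [(2, 0.4), (3, 0.4)]      # {s_2(0.4), s_3(0.4)}; probabilities sum to 0.8
print(normalize(L))           # [(2, 0.5), (3, 0.5)]
```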
To ensure that the operational results among PLTSs can be straightforwardly determined, Mao et al. (2019) defined an ascending ordered PLTS below.
Definition 5.
Given a PLTS $L(p)=\{{L^{(k)}}({p^{(k)}})|{L^{(k)}}\in S,{p^{(k)}}\geqslant 0,\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }L(p)\}$, where ${r^{(k)}}$ is the subscript of linguistic term ${L^{(k)}}$, an ascending ordered PLTS can be derived by the following steps:
  • (1) If all elements in a PLTS are with different values of ${r^{(k)}}{p^{(k)}}$, then all elements are arranged according to the value of ${r^{(k)}}{p^{(k)}}$ ($k=1,2,\dots ,\mathrm{\# }L(p)$) in an ascending order;
  • (2) If two or more elements with equal values of ${r^{(k)}}{p^{(k)}}$, then
    • (a) When the subscripts ${r^{(k)}}$ ($k=1,2,\dots ,\mathrm{\# }L(p)$) are unequal, ${r^{(k)}}{p^{(k)}}$ ($k=1,2,\dots ,\mathrm{\# }L(p)$) are arranged according to values of ${r^{(k)}}$ ($k=1,2,\dots ,\mathrm{\# }L(p)$) in an ascending order;
    • (b) When the subscripts ${r^{(k)}}$ ($k=1,2,\dots ,\mathrm{\# }L(p)$) are equal, ${r^{(k)}}{p^{(k)}}$ ($k=1,2,\dots ,\mathrm{\# }L(p)$) are arranged according to values of ${p^{(k)}}$ ($k=1,2,\dots ,\mathrm{\# }L(p)$) in an ascending order.
If a PLTS $L(p)$ is normalized by Definition 4 and all elements of $L(p)$ are ordered by Definition 5, then $L(p)$ is converted into a normalized ordered PLTS $\bar{L}(p)=\{{\bar{L}^{(k)}}({\bar{p}^{(k)}})|k=1,2,\dots ,\mathrm{\# }\bar{L}(p)\}$.
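The three-level ordering of Definition 5 maps directly onto a lexicographic sort key, as in this sketch (same encoding as above):
```python
# Ascending order of Definition 5: sort by r*p, break ties by r, then by p.
def ascending_order(plts):
    return sorted(plts, key=lambda e: (e[0] * e[1], e[0], e[1]))

L = [(3, 0.2), (2, 0.3), (-1, 0.5)]
print(ascending_order(L))
# [(-1, 0.5), (2, 0.3), (3, 0.2)]: (2, 0.3) and (3, 0.2) tie at r*p = 0.6,
# so the term with the smaller subscript comes first.
```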
In real decision making problems, the numbers of linguistic terms in two different PLTSs often differ, which causes trouble in operations. To make two PLTSs have the same number of linguistic terms, Pang et al. (2016) provided the following method for adding linguistic terms to the PLTS with fewer of them.
Definition 6.
Let ${L_{1}}(p)$ and ${L_{2}}(p)$ be two PLTSs, where ${L_{1}}(p)=\{{L_{1}^{(k)}}({p_{1}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{L_{1}}(p)\}$ and ${L_{2}}(p)=\{{L_{2}^{(k)}}({p_{2}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{L_{2}}(p)\}$, $\mathrm{\# }{L_{1}}(p)$ and $\mathrm{\# }{L_{2}}(p)$ are the numbers of linguistic terms in ${L_{1}}(p)$ and ${L_{2}}(p)$, respectively. If $\mathrm{\# }{L_{1}}(p)>\mathrm{\# }{L_{2}}(p)$, then add $\mathrm{\# }{L_{1}}(p)-\mathrm{\# }{L_{2}}(p)$ linguistic terms to ${L_{2}}(p)$. The added linguistic terms are the smallest linguistic terms in ${L_{2}}(p)$ and the probabilities of added linguistic terms are zero.
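A sketch of this padding rule (the helper name is ours): the shorter PLTS is extended with copies of its smallest linguistic term, each carrying probability zero.
```python
# Pad the shorter PLTS with its smallest term at probability 0 (Definition 6).
def pad(plts, target_len):
    smallest = min(r for r, _ in plts)
    return plts + [(smallest, 0.0)] * (target_len - len(plts))

L1 = [(1, 0.1), (2, 0.6), (3, 0.3)]
L2 = [(2, 0.5), (3, 0.5)]
print(pad(L2, len(L1)))       # [(2, 0.5), (3, 0.5), (2, 0.0)]
```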
Definition 7 (See Mao et al., 2019).
Let ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ be two PLTSs, where ${\bar{L}_{1}}(p)=\{{\bar{L}_{1}^{(k)}}({\bar{p}_{1}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{\bar{L}_{1}}(p)\}$, ${\bar{L}_{2}}(p)=\{{\bar{L}_{2}^{(k)}}({\bar{p}_{2}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{\bar{L}_{2}}(p)\}$ and $\mathrm{\# }{\bar{L}_{1}}(p)=\mathrm{\# }{\bar{L}_{2}}(p)$. Then the Euclidean distance $d({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))$ between ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ is defined as
(1)
\[ d\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)=\sqrt{{\sum \limits_{k=1}^{\mathrm{\# }{\bar{L}_{1}}(p)}}{\big({\bar{p}_{1}^{(k)}}g\big({\bar{L}_{1}^{(k)}}\big)-{\bar{p}_{2}^{(k)}}g\big({\bar{L}_{2}^{(k)}}\big)\big)^{2}}\big/\mathrm{\# }{\bar{L}_{1}}(p)}.\]
Definition 8 (See Wu and Liao, 2018).
Given a PLTS $L(p)=\{{L^{(k)}}({p^{(k)}})|{L^{(k)}}\in S,\hspace{2.5pt}{p^{(k)}}\geqslant 0,\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }L(p)\}$, the expected function of $L(p)$ is defined as
(2)
\[ e\big(L(p)\big)={\sum \limits_{k=1}^{\mathrm{\# }L(p)}}g\big({L^{(k)}}\big){p^{(k)}}\Big/{\sum \limits_{k=1}^{\mathrm{\# }L(p)}}{p^{(k)}}.\]

2.1 Aggregation Operators of PLTSs

This subsection reviews the operational laws of PLTSs presented by Gou and Xu (2016), and then proposes PLWAM and PLWGM aggregation operators, which are used to aggregate the decision information in GDM discussed in later sections.
Definition 9 (See Gou and Xu, 2016).
Let $S=\{{s_{\alpha }}|\alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$ be an LTS, $L(p)$, ${L_{1}}(p)$ and ${L_{2}}(p)$ be three PLTSs, and λ be a positive real number. The following operations are stipulated:
  • (i) $\begin{array}[t]{r@{\hskip4.0pt}c@{\hskip4.0pt}l}{L_{1}}(p)\oplus {L_{2}}(p)& =& {g^{-1}}({\textstyle\bigcup _{\genfrac{}{}{0pt}{}{{k_{1}}=1,2,\dots ,\mathrm{\# }{L_{1}}(p)}{{k_{2}}=1,2,\dots ,\mathrm{\# }{L_{2}}(p)}}}\big\{\big(g\big({L_{1}^{({k_{1}})}}\big)+g\big({L_{2}^{({k_{2}})}}\big)\\ {} & & -g\big({L_{1}^{({k_{1}})}}\big)g\big({L_{2}^{({k_{2}})}}\big)\big)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\big)\big\}\big)\end{array}$;
  • (ii) ${L_{1}}(p)\otimes {L_{2}}(p)={g^{-1}}\big({\textstyle\bigcup _{\genfrac{}{}{0pt}{}{{k_{1}}=1,2,\dots ,\mathrm{\# }{L_{1}}(p)}{{k_{2}}=1,2,\dots ,\mathrm{\# }{L_{2}}(p)}}}\big\{\big(g\big({L_{1}^{({k_{1}})}}\big)g\big({L_{2}^{({k_{2}})}}\big)\big)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\big)\big\}\big)$;
  • (iii) $\lambda L(p)={g^{-1}}\big({\textstyle\bigcup _{k=1,2,\dots ,\mathrm{\# }L(p)}}\big\{1-{\big(1-g\big({L^{(k)}}\big)\big)^{\lambda }},\big({p^{(k)}}\big)\big\}\big)$;
  • (iv) ${\big(L(p)\big)^{\lambda }}={g^{-1}}\big({\cup _{k=1,2,\dots ,\mathrm{\# }L(p)}}\big\{{\big(g\big({L^{(k)}}\big)\big)^{\lambda }},\big({p^{(k)}}\big)\big\}\big)$;
  • (v) The complement of $L(p)$: ${(L(p))^{(c)}}={g^{-1}}({\textstyle\bigcup _{k=1,2,\dots ,\mathrm{\# }L(p)}}\{(1-g({L^{(k)}}))({p^{(k)}})\})$.
Remark 1.
As reviewed in Mi et al. (2020), existing operations of PLTSs can be divided into five types: symbolic-based computation (Pang et al., 2016), semantic-based computation (Gou and Xu, 2016; Wu and Liao, 2019), triangular-norm-based computation (Klement et al., 2000), evidence-reasoning-based computation (Li and Wei, 2019) and double-alpha-cut-based computation (Jiang and Liao, 2020). Among these, only the semantic-based computation (Gou and Xu, 2016; Wu and Liao, 2019) and the evidence-reasoning-based computation (Li and Wei, 2019) satisfy the closure of operations on PLTSs. The semantic-based computations (Gou and Xu, 2016; Wu and Liao, 2019) define common operations, such as addition, subtraction and numerical operations of PLTSs, whereas the computation of Li and Wei (2019) only defines the addition of PLTSs. To sum up, the semantic-based computation (Gou and Xu, 2016; Wu and Liao, 2019) is more reasonable than the other computations, so it is natural to choose this kind of computation for the operations of PLTSs. The differences between the operations in Gou and Xu (2016) and Wu and Liao (2019) lie in the choice of linguistic scale function and in the tools for handling two PLTSs with different numbers of LTs. Compared with the computation in Wu and Liao (2019) (see Definition 3 therein), the computation in Gou and Xu (2016) is easier to operate and more widely used. For example, many achievements (Bai et al., 2017; Liang et al., 2018; Feng et al., 2020) have been gained based on the computation in Gou and Xu (2016), while only one method (Wu et al., 2019) was developed with the computation in Wu and Liao (2019). Accordingly, we choose the computation in Gou and Xu (2016) for the operations of PLTSs in this paper.
In virtue of Definition 9, a PLWAM operator of PLTS is presented.
Definition 10.
Let ${L_{i}}(p)=\{{L_{i}^{(k)}}({p_{i}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{L_{i}}(p)\}$ $(i=1,2,\dots ,n)$ be n PLTSs. A $\mathrm{PLWAM}$ operator is a function ${f^{n}}\to f$ such that:
(3)
\[ \mathrm{PLWAM}\big({L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)\big)={\omega _{1}}{L_{1}}(p)\oplus {\omega _{2}}{L_{2}}(p)\oplus \cdots \oplus {\omega _{n}}{L_{n}}(p),\]
where $\boldsymbol{\omega }={({\omega _{1}},{\omega _{2}},\dots ,{\omega _{n}})^{\operatorname{T}}}$ is the weight vector of ${L_{i}}(p)$ ($i=1,2,\dots ,n$), satisfying $0\leqslant {\omega _{i}}\leqslant 1(i=1,2,\dots ,n)$ and ${\textstyle\sum _{i=1}^{n}}{\omega _{i}}=1$.
Theorem 1.
Given n PLTSs ${L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)$, the aggregated value by the PLWAM operator is also a PLTS as
(4)
\[\begin{aligned}{}& \mathrm{PLWAM}\big({L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)\big)\\ {} & \hspace{1em}={g^{-1}}\Bigg(\bigcup \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,n}}\Bigg\{\Bigg(1-{\prod \limits_{j=1}^{n}}{\big(1-g\big({L_{j}^{({k_{j}})}}\big)\big)^{{\omega _{j}}}}\Bigg)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\cdots {p_{n}^{({k_{n}})}}\big)\Bigg\}\Bigg).\end{aligned}\]
Furthermore,
(5)
\[ \sum \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,n}}{p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\cdots {p_{n}^{({k_{n}})}}=1.\]
Proof.
Please see Appendix A.  □
Analogously, a $\mathrm{PLWGM}$ operator of PLTSs is presented.
Definition 11.
Let ${L_{i}}(p)=\{{L_{i}^{(k)}}({p_{i}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{L_{i}}(p)\}$ $(i=1,2,\dots ,n)$ be n PLTSs. A $\mathrm{PLWGM}$ operator of PLTSs is a function ${f^{n}}\to f$ such that:
(6)
\[ \mathrm{PLWGM}\big({L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)\big)={\big({L_{1}}(p)\big)^{{\omega _{1}}}}\otimes {\big({L_{2}}(p)\big)^{{\omega _{2}}}}\otimes \cdots \otimes {\big({L_{n}}(p)\big)^{{\omega _{n}}}},\]
where $\boldsymbol{\omega }={({\omega _{1}},{\omega _{2}},\dots ,{\omega _{n}})^{\operatorname{T}}}$ is the weight vector of ${L_{i}}(p)$ ($i=1,2,\dots ,n$), satisfying $0\leqslant {\omega _{i}}\leqslant 1(i=1,2,\dots ,n)$ and ${\textstyle\sum _{i=1}^{n}}{\omega _{i}}=1$.
In a way similar to the proving process of Theorem 1, Theorem 2 can be proved based on the operational laws (ii) and (iv) in Definition 9.
Theorem 2.
Given n PLTSs ${L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)$, the aggregated value by the PLWGM operator is also a PLTS as follows
(7)
\[\begin{aligned}{}& \mathrm{PLWGM}\big({L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)\big)\\ {} & \hspace{1em}={g^{-1}}\Bigg(\bigcup \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,n}}\Bigg\{\Bigg({\prod \limits_{j=1}^{n}}{\big(g\big({L_{j}^{({k_{j}})}}\big)\big)^{{\omega _{j}}}}\Bigg)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\cdots {p_{n}^{({k_{n}})}}\big)\Bigg\}\Bigg),\end{aligned}\]
and
(8)
\[ \sum \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,n}}{p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\cdots {p_{n}^{({k_{n}})}}=1.\]
Remark 2.
Compared with existing aggregation operators (Pang et al., 2016; Mao et al., 2019; Zhang, 2018), the proposed operators have significant advantages: they satisfy the closure of operations and their results are more consistent with human intuition. These advantages are verified by Example 1 below.
Example 1.
Given the LTS $S=\{{s_{-3}},{s_{-2}},{s_{-1}},{s_{0}},{s_{1}},{s_{2}},{s_{3}}\}$, let ${L_{1}}(p)=\{{s_{2}}(0.4),{s_{3}}(0.4)\}$, ${L_{2}}(p)=\{{s_{1}}(0.1),{s_{2}}(0.6),{s_{3}}(0.1)\}$ and ${L_{3}}(p)=\{{s_{2}}(0.2),{s_{3}}(0.1)\}$ be three PLTSs, with weight vector $\boldsymbol{w}={(0.3,0.2,0.5)^{\mathrm{T}}}$. Applying existing arithmetic weighted aggregation operators and the proposed PLWAM operator yields the aggregated results shown in Table 1.
Table 1
Results obtained by different aggregation operators (columns: operator; aggregated result; is the result a PLTS?; is information lost?; is information distorted?).
PLWA (Pang et al., 2016): $\{{s_{0}},{s_{0.2}},{s_{0.4}},{s_{0.6}},{s_{0.75}},{s_{0.95}},{s_{1.15}},{s_{1.5}},{s_{1.7}},{s_{2.25}}\}$; PLTS: No; information lost: Yes; information distorted: Yes.
GPLHWA (Mao et al., 2019): $\{{s_{1.85}}(0.14),{s_{2.00}}(0.16),{s_{3}}(1.36)\}$; PLTS: No; information lost: Yes; information distorted: Yes.
PL-WAA (Zhang, 2018): $\{{s_{-3}}(0.064),{s_{-2}}(0.064),{s_{-1}}(0.064),{s_{0}}(0.064),{s_{1}}(0.084),{s_{2}}(0.424),{s_{3}}(0.254)\}$; PLTS: Yes; information lost: Yes; information distorted: Yes.
PLWAM (this paper): $\{{s_{1.85}}(0.04),{s_{2.00}}(0.25),{s_{3}}(0.71)\}$; PLTS: Yes; information lost: No; information distorted: No.
It is observed from Table 1 that the PLWAM operator proposed in this paper has the following merits:
(1) The PLWAM operator satisfies the closure of operations, i.e. the aggregated result of PLTSs obtained by the PLWAM operator is still a PLTS. By contrast, the aggregated result obtained by the PLWA operator (Pang et al., 2016) is a set of linguistic terms rather than a PLTS, so the probability information is lost. Although the aggregated result obtained by the GPLHWA operator (Mao et al., 2019) appears to be a PLTS, the sum of the probabilities of all possible LTs in this result is more than 1. Therefore, this result is not a PLTS in a strict sense, and information distortion occurs when using the GPLHWA operator.
(2) The aggregated result obtained by the proposed PLWAM operator is more consistent with human intuition. Observing ${L_{1}}(p)$, ${L_{2}}(p)$ and ${L_{3}}(p)$, one deduces that the aggregated result should be larger than ${s_{1}}$ and smaller than ${s_{3}}$. Furthermore, the probability of ${s_{3}}$ should exceed that of ${s_{1}}$, because all three PLTSs include ${s_{3}}$ but only ${L_{2}}(p)$ contains ${s_{1}}$, and with a small probability. The aggregated result of the proposed PLWAM operator (see Table 1) is indeed in accordance with this analysis. On the other hand, the aggregated result obtained by the PL-WAA operator (Zhang, 2018) is counterintuitive because the LTs ${s_{-3}}$, ${s_{-2}}$ and ${s_{-1}}$ are all included in it, which means the information is distorted in the aggregation process.
(3) As Table 1 shows, the aggregated results obtained by the proposed PLWAM operator and the PL-WAA operator (Zhang, 2018) are both PLTSs. Nevertheless, this paper proves that the aggregated result of the PLWAM operator is a PLTS for any input PLTSs (see Theorem 1), whereas Zhang (2018) provided no such proof.
Analogously, the PLWGM operator has the above advantages. A small computational check of Eq. (4) against Example 1 is sketched below.
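To make Eq. (4) concrete, the sketch below (our own code, with τ = 3) enumerates all term combinations, applies the PLWAM formula, and merges equal virtual terms; on the normalized inputs of Example 1 it reproduces the PLWAM row of Table 1.
```python
from itertools import product
from collections import defaultdict

TAU = 3
g = lambda r: (r + TAU) / (2 * TAU)            # linguistic scale function
g_inv = lambda gamma: (2 * gamma - 1) * TAU    # back to a (virtual) subscript

def normalize(plts):
    total = sum(p for _, p in plts)
    return [(r, p / total) for r, p in plts]

def plwam(plts_list, weights):
    """PLWAM of Eq. (4): 1 - prod_j (1 - g(L_j))^{w_j} over all term combinations."""
    acc = defaultdict(float)
    for combo in product(*plts_list):          # pick one term from each PLTS
        prod, prob = 1.0, 1.0
        for (r, p), w in zip(combo, weights):
            prod *= (1 - g(r)) ** w
            prob *= p
        acc[round(g_inv(1 - prod), 2)] += prob # merge equal virtual terms
    return sorted(acc.items())

L1 = normalize([(2, 0.4), (3, 0.4)])
L2 = normalize([(1, 0.1), (2, 0.6), (3, 0.1)])
L3 = normalize([(2, 0.2), (3, 0.1)])
result = plwam([L1, L2, L3], [0.3, 0.2, 0.5])
print(result)   # ≈ [(1.85, 0.0417), (2.0, 0.25), (3.0, 0.7083)], matching Table 1
assert abs(sum(p for _, p in result) - 1.0) < 1e-9   # Eq. (5): probabilities sum to 1
```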

3 A Hesitancy Index of a PLTS and an Approach to Ranking PLTSs

In this section, a hesitancy index of a PLTS is introduced to measure the hesitancy degree of a PLTS, and then a new generalized distance measure between two PLTSs is developed. Finally, based on the proposed distance measure, a TOPSIS-based approach is presented to rank PLTSs.

3.1 A Hesitancy Index of a PLTS

Definition 12.
Suppose a normalized PLTS $\dot{L}(p)=\{{L^{(k)}}({\dot{p}^{(k)}})|k=1,2,\dots ,\mathrm{\# }L(p)\}$. A hesitancy index of $\dot{L}(p)$ is defined as
(9)
\[ h\big(\dot{L}(p)\big)=\left\{\begin{array}{l@{\hskip4.0pt}l}2{\textstyle\textstyle\sum _{k=1}^{\mathrm{\# }\dot{L}(p)}}|g({L^{(k)}})-\bar{g}|{\dot{p}^{(k)}}\hspace{1em}& \mathrm{\# }\dot{L}(p)>1,\\ {} 0\hspace{1em}& \mathrm{\# }\dot{L}(p)=1,\end{array}\right.\]
where $\bar{g}=\frac{1}{\mathrm{\# }\dot{L}(p)}{\textstyle\sum _{k=1}^{\mathrm{\# }\dot{L}(p)}}g({L^{(k)}})$.
It is clear from Eq. (9) that the hesitancy index $h(\dot{L}(p))$ satisfies $0\leqslant h(\dot{L}(p))\leqslant 1$. In particular, $h(\dot{L}(p))=0$ if $\dot{L}(p)=\{{s_{\alpha }}(1)\}$ for some ${s_{\alpha }}\in S$, and $h(\dot{L}(p))=1$ if $\dot{L}(p)=\{{s_{-\tau }}(0.5),{s_{\tau }}(0.5)\}$. This result is in accordance with human intuition.
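A sketch of Eq. (9) under the same (subscript, probability) encoding; it recovers the two boundary cases just mentioned.
```python
TAU = 3
g = lambda r: (r + TAU) / (2 * TAU)

def hesitancy(plts):
    """Hesitancy index of a normalized PLTS (Eq. (9))."""
    if len(plts) == 1:
        return 0.0
    g_bar = sum(g(r) for r, _ in plts) / len(plts)
    return 2 * sum(abs(g(r) - g_bar) * p for r, p in plts)

print(hesitancy([(1, 1.0)]))                  # 0.0: a single term, no hesitancy
print(hesitancy([(-TAU, 0.5), (TAU, 0.5)]))   # 1.0: maximal hesitancy
```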

3.2 A General Distance Measure Between PLTSs

First, this subsection reviews existing distance measures between PLTSs (Pang et al., 2016; Mao et al., 2019; Zhang et al., 2016), and then develops a general distance measure between PLTSs based on the proposed hesitancy index. In the end, comparative analyses of the proposed distance measure and existing ones are conducted.
Let $S=\{{s_{\alpha }}|\alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$ be a LTS. ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ are two PLTSs, where ${\bar{L}_{1}}(p)=\{{\bar{L}_{1}^{(k)}}({\bar{p}_{1}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{\bar{L}_{1}}(p)\}$, ${\bar{L}_{2}}(p)=\{{\bar{L}_{2}^{(k)}}({\bar{p}_{2}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{\bar{L}_{2}}(p)\}$ and $\mathrm{\# }{\bar{L}_{1}}(p)=\mathrm{\# }{\bar{L}_{2}}(p)=\mathrm{\# }\bar{L}(p)$.
Pang et al. (2016) defined a deviation degree between two PLTSs as
(10)
\[ {d_{P}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)=\sqrt{{\sum \limits_{k=1}^{\mathrm{\# }\bar{L}(p)}}{\big({p_{1}^{(k)}}{r_{1}^{(k)}}-{p_{2}^{(k)}}{r_{2}^{(k)}}\big)^{2}}\Big/\mathrm{\# }\bar{L}(p)}.\]
where ${r_{1}^{(k)}}$ and ${r_{2}^{(k)}}$ are the subscripts of linguistic terms of ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$, respectively.
Later, Zhang et al. (2016) introduced a distance measure between PLTSs as below.
(11)
\[ {d_{Z}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)={\sum \limits_{k=1}^{\mathrm{\# }\bar{L}(p)}}p\big({r_{1}^{(k)}},{r_{2}^{(k)}}\big)d\big({r_{1}^{(k)}},{r_{2}^{(k)}}\big),\]
where $p({r_{1}^{(k)}},{r_{2}^{(k)}})=p({r_{1}^{(k)}})p({r_{2}^{(k)}})={p_{1}^{(k)}}{p_{2}^{(k)}}$ and $d({r_{1}^{(k)}},{r_{2}^{(k)}})=|{r_{1}^{(k)}}-{r_{2}^{(k)}}|/T$, where T is the number of linguistic terms in S.
Mao et al. (2019) pointed out some limitations of the above distance measures and developed a Euclidean distance as
(12)
\[ d\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)=\sqrt{{\sum \limits_{k=1}^{\mathrm{\# }\bar{L}(p)}}{\big({p_{1}^{(k)}}g\big({\bar{L}_{1}^{(k)}}\big)-{p_{2}^{(k)}}g\big({\bar{L}_{2}^{(k)}}\big)\big)^{2}}\Big/\mathrm{\# }\bar{L}(p)}.\]
This paper defines a general distance measure between PLTSs by the proposed hesitancy index.
Definition 13.
Given two PLTSs ${\bar{L}_{1}}(p)=\{{\bar{L}_{1}^{(k)}}({\bar{p}_{1}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{\bar{L}_{1}}(p)\}$ and ${\bar{L}_{2}}(p)=\{{\bar{L}_{2}^{(k)}}({\bar{p}_{2}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{\bar{L}_{2}}(p)\}$, where $\mathrm{\# }{\bar{L}_{1}}(p)=\mathrm{\# }{\bar{L}_{2}}(p)=\mathrm{\# }\bar{L}(p)$, a general distance measure of PLTSs is defined as
(13)
\[ {d_{G}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)={\Bigg(\frac{1}{2}\Bigg(\frac{1}{\mathrm{\# }\bar{L}(p)}{\sum \limits_{k=1}^{\mathrm{\# }\bar{L}(p)}}\frac{1}{2}\big({\big|g\big({\bar{L}_{1}^{(k)}}\big)-g\big({\bar{L}_{2}^{(k)}}\big)\big|^{q}}+{\big|{\bar{p}_{1}^{(k)}}-{\bar{p}_{2}^{(k)}}\big|^{q}}\big)+{\big|h\big({\bar{L}_{1}}(p)\big)-h\big({\bar{L}_{2}}(p)\big)\big|^{q}}\Bigg)\Bigg)^{1/q}},\]
where $q\geqslant 1$.
Specially, when $q=1$, a Manhattan distance is obtained as
(14)
\[ {d_{M}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)=\frac{1}{2}\Bigg(\frac{1}{\mathrm{\# }\bar{L}(p)}{\sum \limits_{k=1}^{\mathrm{\# }\bar{L}(p)}}\frac{1}{2}\big(\big|g\big({\bar{L}_{1}^{(k)}}\big)-g\big({\bar{L}_{2}^{(k)}}\big)\big|+\big|{\bar{p}_{1}^{(k)}}-{\bar{p}_{2}^{(k)}}\big|\big)+\big|h\big({\bar{L}_{1}}(p)\big)-h\big({\bar{L}_{2}}(p)\big)\big|\Bigg),\]
when $q=2$, a novel Euclidean distance of PLTSs is developed as
(15)
\[ {d_{E}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)=\sqrt{\frac{1}{2}\Bigg(\frac{1}{\mathrm{\# }\bar{L}(p)}{\sum \limits_{k=1}^{\mathrm{\# }\bar{L}(p)}}\frac{1}{2}\big({\big(g\big({\bar{L}_{1}^{(k)}}\big)-g\big({\bar{L}_{2}^{(k)}}\big)\big)^{2}}+{\big({\bar{p}_{1}^{(k)}}-{\bar{p}_{2}^{(k)}}\big)^{2}}\big)+{\big(h\big({\bar{L}_{1}}(p)\big)-h\big({\bar{L}_{2}}(p)\big)\big)^{2}}\Bigg)}.\]
According to Eqs. (14) and (15), it is easily proved that ${d_{M}}$ and ${d_{E}}$ have desirable properties described in Theorem 3.
Theorem 3.
For two PLTSs ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ with $\mathrm{\# }{\bar{L}_{1}}(p)=\mathrm{\# }{\bar{L}_{2}}(p)$, ${d_{M}}$ and ${d_{E}}$ are the novel Manhattan and Euclidean distances of PLTSs, respectively. They satisfy:
  • (i) $0\leqslant {d_{M}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big),{d_{E}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)\leqslant 1$;
  • (ii) ${d_{M}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)={d_{E}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)=0$ if and only if ${\bar{L}_{1}}(p)={\bar{L}_{2}}(p)$;
  • (iii) ${d_{M}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)={d_{M}}\big({\bar{L}_{2}}(p),{\bar{L}_{1}}(p)\big)$ and ${d_{E}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)={d_{E}}\big({\bar{L}_{2}}(p),{\bar{L}_{1}}(p)\big)$.
Remark 3.
Compared with existing distance measures in Pang et al. (2016), Mao et al. (2019) and Zhang et al. (2016), the distance measures defined in Definition 13 have a stronger distinguishing power. Table 2 shows this advantage explicitly.
It can be seen from Table 2 that the PLTSs in each pair are not completely identical. Therefore, the distance between the PLTSs in each pair should intuitively be greater than zero, and the proposed distance measures capture this. However, the distances given by the existing distance measures (Pang et al., 2016; Mao et al., 2019; Zhang et al., 2016) are all zero, so the PLTSs in each pair cannot be distinguished. This means that the proposed distance measures have a stronger distinguishing power, which can be attributed to the following factors:
Table 2
Distances between PLTSs obtained by different distance measures.
Pair 1: ${\bar{L}_{1}}(p)=\{{s_{2}}(0.8),{s_{3}}(0.2)\}$ and ${\bar{L}_{2}}(p)=\{{s_{2}}(0.5),{s_{3}}(0.5)\}$.
  Zhang’s distance (Zhang et al., 2016): ${d_{Z}}({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))=0$.
  The Manhattan distance proposed in this paper: ${d_{M}}({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))=0.075>0$.
  The Euclidean distance proposed in this paper: ${d_{E}}({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))=0.15>0$.
Pair 2: ${\bar{L}_{3}}(p)=\{{s_{0}}(0.4),{s_{1}}(0.6)\}$ and ${\bar{L}_{4}}(p)=\{{s_{0}}(0.8),{s_{3}}(0.2)\}$.
  Pang’s distance (Pang et al., 2016): ${d_{P}}({\bar{L}_{3}}(p),{\bar{L}_{4}}(p))=0$.
  The Manhattan distance proposed in this paper: ${d_{M}}({\bar{L}_{3}}(p),{\bar{L}_{4}}(p))=0.3083>0$.
  The Euclidean distance proposed in this paper: ${d_{E}}({\bar{L}_{3}}(p),{\bar{L}_{4}}(p))=0.3308>0$.
Pair 3: ${\bar{L}_{5}}(p)=\{{s_{-3}}(0.8),{s_{1}}(0.2)\}$ and ${\bar{L}_{6}}(p)=\{{s_{-3}}(0.7333),{s_{0}}(0.2667)\}$.
  Mao’s Euclidean distance (Mao et al., 2019): $d({\bar{L}_{5}}(p),{\bar{L}_{6}}(p))=0$.
  The Manhattan distance proposed in this paper: ${d_{M}}({\bar{L}_{5}}(p),{\bar{L}_{6}}(p))=0.1208>0$.
  The Euclidean distance proposed in this paper: ${d_{E}}({\bar{L}_{5}}(p),{\bar{L}_{6}}(p))=0.1359>0$.
Note: The LTS considered in Table 2 is $S=\{{s_{-3}},{s_{-2}},{s_{-1}},{s_{0}},{s_{1}},{s_{2}},{s_{3}}\}$.
(1) According to Zhang’s distance (Zhang et al., 2016) (i.e. Eq. (11)), the distance between two PLTSs is zero whenever the two PLTSs have the same possible LTs, regardless of their probabilities. Hence, PLTSs such as ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ in Table 2 cannot be distinguished even though they are evidently not identical.
(2) From Eq. (10), it is noticed that Pang’s distance (Pang et al., 2016) has a limitation: the distance between two PLTSs is zero whenever they satisfy ${p_{1}^{(k)}}{r_{1}^{(k)}}={p_{2}^{(k)}}{r_{2}^{(k)}}$ $(k=1,2,\dots ,\mathrm{\# }\bar{L}(p))$ for all possible LTs. In fact, PLTSs satisfying these conditions (for example, ${\bar{L}_{3}}(p)$ and ${\bar{L}_{4}}(p)$) may still be different.
(3) Although Mao’s distance (Mao et al., 2019) (i.e. Eq. (12)) improved Pang’s distance (Pang et al., 2016) by replacing the subscripts ${r_{i}^{(k)}}$ with the corresponding linguistic scale function values $g({\bar{L}_{i}^{(k)}})$ ($i=1,2$; $k=1,2,\dots ,\mathrm{\# }\bar{L}(p)$), it still cannot overcome the limitation that the distance between PLTSs satisfying ${p_{1}^{(k)}}g({\bar{L}_{1}^{(k)}})={p_{2}^{(k)}}g({\bar{L}_{2}^{(k)}})$ $(k=1,2,\dots ,\mathrm{\# }\bar{L}(p))$, such as ${\bar{L}_{5}}(p)$ and ${\bar{L}_{6}}(p)$, is zero.
(4) The distance measures in this paper are developed by treating each possible element of a PLTS $\bar{L}(p)$ as a two-dimensional vector (i.e. ${\bar{L}^{(k)}}({\bar{p}^{(k)}})=(g({\bar{L}^{(k)}}),{\bar{p}^{(k)}})$) and then extending the corresponding classical distance measures into the PLTS context. Furthermore, it is worth noting that the proposed distances take the hesitancy index of a PLTS into consideration. Thus, the distances proposed in this paper overcome the above limitations and strengthen the distinguishing power.
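The sketch below implements Eqs. (14) and (15) through the general form of Eq. (13) and reproduces the first pair of Table 2 ($d_M = 0.075$, $d_E = 0.15$); helper names are ours.
```python
TAU = 3
g = lambda r: (r + TAU) / (2 * TAU)

def hesitancy(plts):
    """Hesitancy index of Eq. (9)."""
    if len(plts) == 1:
        return 0.0
    g_bar = sum(g(r) for r, _ in plts) / len(plts)
    return 2 * sum(abs(g(r) - g_bar) * p for r, p in plts)

def d_general(L1, L2, q):
    """General distance of Eq. (13); q = 1 gives d_M, q = 2 gives d_E."""
    assert len(L1) == len(L2)
    term_part = sum(0.5 * (abs(g(r1) - g(r2)) ** q + abs(p1 - p2) ** q)
                    for (r1, p1), (r2, p2) in zip(L1, L2)) / len(L1)
    hes_part = abs(hesitancy(L1) - hesitancy(L2)) ** q
    return (0.5 * (term_part + hes_part)) ** (1 / q)

L1 = [(2, 0.8), (3, 0.2)]       # first pair of Table 2
L2 = [(2, 0.5), (3, 0.5)]
print(d_general(L1, L2, q=1))   # 0.075 (Manhattan)
print(d_general(L1, L2, q=2))   # 0.15  (Euclidean)
```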

3.3 A Ranking Approach for PLTSs

Based on the developed distance measures, this subsection introduces a TOPSIS-based ranking approach for comparing PLTSs.
Let $L{(p)^{+}}=\{{s_{\tau }}(1)\}$ be the positive ideal PLTS and $L{(p)^{-}}=\{{s_{-\tau }}(1)\}$ be the negative ideal PLTS. According to the TOPSIS method, the closer a PLTS $L(p)=\{{L^{(k)}}({p^{(k)}})|k=1,2,\dots ,\mathrm{\# }L(p)\}$ is to $L{(p)^{+}}$, and at the same time the farther $L(p)$ is from $L{(p)^{-}}$, the better the PLTS $L(p)$. Thus, a closeness degree of the PLTS $L(p)$ is defined as
(16)
\[ T\big(L(p)\big)=\frac{{d_{M}}(\bar{L}(p),\bar{L}{(p)^{-}})}{{d_{M}}(\bar{L}(p),\bar{L}{(p)^{+}})+{d_{M}}(\bar{L}(p),\bar{L}{(p)^{-}})},\]
where $\bar{L}(p)$, $\bar{L}{(p)^{+}}$ and $\bar{L}{(p)^{-}}$ are the normalized ordered PLTSs corresponding to $L(p)$, $L{(p)^{+}}$ and $L{(p)^{-}}$, respectively. ${d_{M}}(\bar{L}(p),\bar{L}{(p)^{+}})$ and ${d_{M}}(\bar{L}(p),\bar{L}{(p)^{-}})$ are the Manhattan distances of $\bar{L}(p)$ from $\bar{L}{(p)^{+}}$ and $\bar{L}{(p)^{-}}$, respectively.
In virtue of the closeness degree, a ranking approach for PLTSs is introduced.
Definition 14.
Let ${L_{1}}(p)$ and ${L_{2}}(p)$ be two PLTSs, where ${L_{1}}(p)=\{{L_{1}^{(k)}}({p_{1}^{(k)}})|k=1,2,\dots ,\mathrm{\# }{L_{1}}(p)\}$ and ${L_{2}}(p)=\{{L_{2}^{(\delta )}}({p_{2}^{(\delta )}})|\delta =1,2,\dots ,\mathrm{\# }{L_{2}}(p)\}$. Then it is stipulated that:
  • (i) If $T({L_{1}}(p))>T({L_{2}}(p))$, then ${L_{1}}(p)$ is bigger than ${L_{2}}(p)$, denoted by ${L_{1}}(p)\succ {L_{2}}(p)$;
  • (ii) If $T({L_{1}}(p))=T({L_{2}}(p))$, then ${L_{1}}(p)$ is indifferent to ${L_{2}}(p)$, denoted by ${L_{1}}(p)\sim {L_{2}}(p)$;
  • (iii) If $T({L_{1}}(p))<T({L_{2}}(p))$, then ${L_{1}}(p)$ is smaller than ${L_{2}}(p)$, denoted by ${L_{1}}(p)\prec {L_{2}}(p)$.
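A sketch of the closeness degree of Eq. (16): the ideal PLTSs are padded per Definition 6 and ordered per Definition 5 before the Manhattan distance is taken. The numerical outputs are from our helpers, not from the paper.
```python
TAU = 3
g = lambda r: (r + TAU) / (2 * TAU)

def hesitancy(plts):
    if len(plts) == 1:
        return 0.0
    g_bar = sum(g(r) for r, _ in plts) / len(plts)
    return 2 * sum(abs(g(r) - g_bar) * p for r, p in plts)

def d_manhattan(L1, L2):
    """Manhattan distance of Eq. (14)."""
    term_part = sum(0.5 * (abs(g(r1) - g(r2)) + abs(p1 - p2))
                    for (r1, p1), (r2, p2) in zip(L1, L2)) / len(L1)
    return 0.5 * (term_part + abs(hesitancy(L1) - hesitancy(L2)))

def ascending_order(plts):
    return sorted(plts, key=lambda e: (e[0] * e[1], e[0], e[1]))

def pad(plts, target_len):
    smallest = min(r for r, _ in plts)
    return plts + [(smallest, 0.0)] * (target_len - len(plts))

def closeness(plts):
    """Closeness degree T(L(p)) of Eq. (16) against the ideal PLTSs."""
    plts = ascending_order(plts)
    pos = ascending_order(pad([(TAU, 1.0)], len(plts)))    # {s_tau(1)}, padded
    neg = ascending_order(pad([(-TAU, 1.0)], len(plts)))   # {s_-tau(1)}, padded
    d_pos = d_manhattan(plts, pos)
    d_neg = d_manhattan(plts, neg)
    return d_neg / (d_pos + d_neg)

# Definition 14: the PLTS with the larger closeness degree ranks higher.
print(closeness([(2, 0.5), (3, 0.5)]))   # about 0.656
print(closeness([(0, 0.4), (1, 0.6)]))   # about 0.569
```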

4 Entropy and Cross Entropy of PLTSs

In order to judge the quality of the decision information provided by DMs and the discrimination between DMs in the sequel, this section develops some entropy measures and a cross entropy measure.

4.1 Entropy Measures of PLTSs

This subsection addresses the entropy of PLTSs and proposes some new entropy measures. The main motivations are as follows: (1) To measure the uncertainty of a PLTS neatly. Although Lin et al. (2019) defined an entropy measure of a PLTS, its distinguishing power is not high and it cannot neatly measure the uncertainty of a PLTS. Therefore, it is necessary to develop new entropy measures. (2) To overcome the shortcomings of the entropy measures proposed by Liu et al. (2018). As Liu et al. (2018) stated, the uncertainty of a PLTS comprises fuzziness and hesitancy. To measure them, Liu et al. (2018) defined a fuzzy entropy measure and a hesitancy entropy measure of a PLTS, respectively. However, these measures have two shortcomings: some properties of the fuzzy entropy measure are not consistent with intuition, and the computation of the hesitancy entropy is time-consuming. To overcome these shortcomings, this subsection proposes a new fuzzy entropy measure with desirable properties and a hesitancy entropy measure with simple computation. Finally, the total entropy of a PLTS is defined by combining the proposed fuzzy entropy and hesitancy entropy.

4.1.1 The Fuzzy Entropy of PLTSs

Definition 15 (See Liu et al., 2018).
Let $S=\{{s_{0}},\dots ,{s_{g/2}},\dots ,{s_{g}}\}$ be a linguistic term set. Given a PLTS $L(p)=\{{l_{i}}({p_{i}})|{l_{i}}\in S;\hspace{2.5pt}i=1,2,\dots ,\mathrm{\# }L(p)\}$, a fuzzy entropy of the PLTS ${\bar{E}_{F}}$ should satisfy the following properties:
  • (i) ${\bar{E}_{F}}\big({s_{0}}(1)\big)={\bar{E}_{F}}\big({s_{g}}(1)\big)=0$; furthermore, ${\bar{E}_{F}}\big({s_{0}}(p),{s_{g}}(1-p)\big)=0$;
  • (ii) ${\bar{E}_{F}}\big({s_{g/2}}(1)\big)=1$;
  • (iii) ${\bar{E}_{F}}\big(L{(p)^{(1)}}\big)\leqslant {\bar{E}_{F}}\big(L{(p)^{(2)}}\big)$ if ${l_{i}^{(1)}}\leqslant {l_{i}^{(2)}}\leqslant {s_{g/2}}$ or ${l_{i}^{(1)}}\geqslant {l_{i}^{(2)}}\geqslant {s_{g/2}}$ and ${P^{(1)}}={P^{(2)}}$, $\mathrm{\# }L{(P)^{(1)}}=\mathrm{\# }L{(P)^{(2)}}$, $i=1,2,\dots ,\mathrm{\# }L{(P)^{(1)}}$.
  • (iv) ${\bar{E}_{F}}\big(L(p)\big)={\bar{E}_{F}}\big(L{(p)^{(c)}}\big)$.
According to Definition 15, some properties of the fuzzy entropy ${\bar{E}_{F}}$ are not in accordance with intuition (see Remark 4). Therefore, we define a new fuzzy entropy of a PLTS with desirable properties as below.
Definition 16.
Let $S=\{{s_{-\tau }},\dots ,{s_{0}},{s_{1}},\dots ,{s_{\tau }}\}$ be a linguistic term set. $\bar{L}(p)$, ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ are normalized ordered PLTSs, where $\bar{L}(p)=\{{\bar{L}^{(k)}}({\bar{p}^{(k)}})|{\bar{L}^{(k)}}\in S,\hspace{2.5pt}{\bar{p}^{(k)}}\geqslant 0,\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }\bar{L}(p)\}$ and ${\bar{L}_{t}}(p)=\{{\bar{L}_{t}^{(k)}}({\bar{p}_{t}^{(k)}})|{\bar{L}_{t}^{(k)}}\in S,\hspace{2.5pt}{\bar{p}_{t}^{(k)}}\geqslant 0,\hspace{2.5pt}t=1,2;\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }\bar{L}(p),\mathrm{\# }\bar{L}(p)=\mathrm{\# }{\bar{L}_{1}}(p)=\mathrm{\# }{\bar{L}_{2}}(p)\}$. We call ${E_{F}}$ a fuzzy entropy of the PLTS if it satisfies the following properties:
  • (i) ${E_{F}}\big(\bar{L}(p)\big)=0\Leftrightarrow \bar{L}(p)=\big\{{s_{-\tau }}(1)\big\}\operatorname{or}\bar{L}(p)=\big\{{s_{\tau }}(1)\big\}$;
  • (ii) ${E_{F}}(\bar{L}(p))=1\Leftrightarrow e(\bar{L}(p))\hspace{2.5pt}=0.5$, where $e(\bar{L}(p))\hspace{2.5pt}={\textstyle\sum _{k=1}^{\mathrm{\# }\bar{L}(p)}}g({\bar{L}^{(k)}}){\bar{p}^{(k)}}$ is the expected function;
  • (iii) ${E_{F}}\big({\bar{L}_{1}}(p)\big)\leqslant {E_{F}}\big({\bar{L}_{2}}(p)\big)$ if ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ satisfy $e\big({\bar{L}_{1}}(p)\big)\leqslant e\big({\bar{L}_{2}}(p)\big)\leqslant 0.5$ or $e\big({\bar{L}_{1}}(p)\big)\geqslant e\big({\bar{L}_{2}}(p)\big)\geqslant 0.5$;
  • (iv) ${E_{F}}\big(\bar{L}{(p)^{(c)}}\big)={E_{F}}\big(\bar{L}(p)\big)$.
According to these properties, a general fuzzy entropy of PLTSs is defined below.
Theorem 4.
Suppose $f:[0,1]\to [0,1]$ is a strictly concave function and satisfies the following conditions:
  • (i) $f(t)=f(1-t)$ for any $t\in [0,1]$;
  • (ii) $f(1)=f(0)=0$;
  • (iii) f is monotone increasing in $[0,0.5]$ and monotone decreasing in $[0.5,1.0]$. Then the function
    (17)
    \[ {E_{F}}\big(\bar{L}(p)\big)=f\big(e\big(\bar{L}(p)\big)\big)\]
    is a fuzzy entropy of the PLTS $\bar{L}(p)$.
Proof.
Please see Appendix B.  □
Remark 4.
Compared with the fuzzy entropy ${\bar{E}_{F}}$, the proposed fuzzy entropy ${E_{F}}$ has the following merits:
(1) The properties of the latter are more consistent with intuition. Let us compare property (i) of ${\bar{E}_{F}}$ with that of ${E_{F}}$. From (i) in Definition 15, we have ${\bar{E}_{F}}({s_{0}}(p),{s_{g}}(1-p))=0$. In particular, for $p=0.5$, the equality ${\bar{E}_{F}}({s_{0}}(0.5),{s_{g}}(0.5))=0$ means that the fuzzy uncertainty of the PLTS ${L^{\ast }}(p)=\{{s_{0}}(0.5),{s_{g}}(0.5)\}$ is zero. However, in real decision making, the information represented by ${L^{\ast }}(p)$ is very fuzzy, because ${L^{\ast }}(p)$ indicates that the probability that the DM is totally dissatisfied with an alternative and the probability that the DM is completely satisfied with it are both 0.5. Therefore, the property ${\bar{E}_{F}}({s_{0}}(0.5),{s_{g}}(0.5))=0$ is counter-intuitive. By contrast, according to the proposed fuzzy entropy ${E_{F}}$ defined in Eq. (17), one gets ${E_{F}}({s_{0}}(p),{s_{g}}(1-p))=f(p)$, which indicates that the fuzzy entropy of ${L^{\ast }}(p)$ varies with p. Employing Theorem 4, we have ${E_{F}}({s_{0}}(0.5),{s_{g}}(0.5))=f(0.5)=1$. In this respect, the proposed entropy ${E_{F}}$ is more consistent with intuition.
(2) The latter can be regarded as an extension of the former in some sense. It is worth mentioning that a normalized PLTS can be considered as a type of discrete random variable. Therefore, the expected value $e(\bar{L}(p))$ represents the average level of $\bar{L}(p)$. Based on this idea, it is reasonable to regard the decision information described by $\bar{L}(p)$ as equivalent to that of $\{{s_{g/2}}(1)\}$ in Definition 15 if $e(\bar{L}(p))=0.5$. Hence, the property ${\bar{E}_{F}}({s_{g/2}}(1))=1$ in Definition 15 can be extended to the property ${E_{F}}(\bar{L}(p))=1\Leftrightarrow e(\bar{L}(p))=0.5$ in Definition 16. Likewise, property (iii) in Definition 15 is extended to property (iii) in Definition 16.
According to Theorem 4, let $f(t)=\frac{\ln (1+t-{t^{2}})}{\ln 5-\ln 4}$. A fuzzy entropy of a PLTS is defined as
(18)
\[ {E_{F}}\big(\bar{L}(p)\big)=\frac{\ln (1+e(\bar{L}(p))-{(e(\bar{L}(p)))^{2}})}{\ln 5-\ln 4}.\]

4.1.2 The Hesitancy Entropy of PLTSs

Hesitancy is an important feature of a PLTS, yet how to measure the hesitant uncertainty of a PLTS is often ignored. To fill this gap, this subsection defines a hesitancy entropy measure of PLTSs based on the deviations between the linguistic scale function values of the possible linguistic terms in a PLTS and their mean value.
Definition 17.
Let $S=\{{s_{-\tau }},\dots ,{s_{0}},{s_{1}},\dots ,{s_{\tau }}\}$ be a linguistic term set. $\bar{L}(p)$, ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ are normalized ordered PLTSs, where $\bar{L}(p)=\{{L^{(k)}}({\bar{p}^{(k)}})|{L^{(k)}}\in S,\hspace{2.5pt}{\bar{p}^{(k)}}\geqslant 0,\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }\bar{L}(p)\}$ and ${\bar{L}_{t}}(p)=\{{L_{t}^{(k)}}({\bar{p}_{t}^{(k)}})|{L_{t}^{(k)}}\in S,\hspace{2.5pt}{\bar{p}_{t}^{(k)}}\geqslant 0,\hspace{2.5pt}t=1,2;\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }\bar{L}(p),\mathrm{\# }\bar{L}(p)=\mathrm{\# }{\bar{L}_{1}}(p)=\mathrm{\# }{\bar{L}_{2}}(p)\}$. We call ${E_{H}}$ a hesitancy entropy of a PLTS if it satisfies the following properties:
  • (i) ${E_{H}}\big(\bar{L}(p)\big)=0\Leftrightarrow \bar{L}(p)=\big\{{s_{\alpha }}(1)\big\}$ for some $\alpha \in \{-\tau ,\dots ,0,1,\dots ,\tau \}$;
  • (ii) ${E_{H}}\big(\bar{L}(p)\big)=1\Leftrightarrow \bar{L}(p)=\big\{{s_{-\tau }}(0.5),{s_{\tau }}(0.5)\big\}$;
  • (iii) ${E_{H}}\big(\bar{L}(p)\big)\to 0$ if $\bar{L}(p)=\big\{{L^{(1)}}(p),{L^{(2)}}(1-p)\big\}$ and ${L^{(1)}}\to {L^{(2)}}$;
  • (iv) ${E_{H}}\big({\bar{L}_{1}}(p)\big)\leqslant {E_{H}}\big({\bar{L}_{2}}(p)\big)$ if $h\big({\bar{L}_{1}}(p)\big)\leqslant h\big({\bar{L}_{2}}(p)\big)$;
  • (v) ${E_{H}}\big({\big(\bar{L}(p)\big)^{(c)}}\big)={E_{H}}\big(\bar{L}(p)\big)$.
Theorem 5.
Given a normalized ordered PLTS $\bar{L}(p)$. The hesitancy index
(19)
\[ {E_{h}}\big(\bar{L}(p)\big)=h\big(\bar{L}(p)\big)=\left\{\begin{array}{l@{\hskip4.0pt}l}2{\textstyle\textstyle\sum _{i=1}^{\mathrm{\# }\bar{L}(p)}}|g({L^{(i)}})-\bar{g}|{p_{i}}\hspace{1em}& \mathrm{\# }\bar{L}(p)>1,\\ {} 0\hspace{1em}& \mathrm{\# }\bar{L}(p)=1,\end{array}\right.\]
is a hesitancy entropy, where $\bar{g}=\frac{1}{\mathrm{\# }\bar{L}(p)}{\textstyle\sum _{i=1}^{\mathrm{\# }\bar{L}(p)}}g({L^{(i)}})$.
Proof.
Please see Appendix C.  □
Remark 5.
Let $S=\{{s_{0}},\dots ,{s_{g/2}},\dots ,{s_{g}}\}$ be a linguistic term set. Given a PLTS $L(p)=\{{l_{i}}({p_{i}})|{l_{i}}\in S;\hspace{2.5pt}i=1,2,\dots ,\mathrm{\# }L(p)\}$, Liu et al. (2018) defined a hesitancy entropy measure as follows:
(20)
\[ {\bar{E}_{h}}\big(\bar{L}(p)\big)=\left\{\begin{array}{l@{\hskip4.0pt}l}{\textstyle\textstyle\sum _{i=1}^{\mathrm{\# }\bar{L}(p)-1}}{\textstyle\textstyle\sum _{j=i+1}^{\mathrm{\# }\bar{L}(p)}}4{p_{i}}{p_{j}}f({\gamma _{ij}})\hspace{1em}& \mathrm{\# }\bar{L}(p)>1,\\ {} 0\hspace{1em}& \mathrm{\# }\bar{L}(p)=1,\end{array}\right.\]
where ${\gamma _{ij}}=\frac{|I({l_{i}})-I({l_{j}})|}{g}$ and $I({l_{i}})$ is the subscript of the linguistic term ${l_{i}}$.
Comparing ${\bar{E}_{h}}$ with ${E_{h}}$, the difference between them is that the former measures the deviations between different linguistic terms in a PLTS, while the latter measures the deviations between the transformation function values of the possible linguistic terms and their mean value. Although the expressions of ${\bar{E}_{h}}$ and ${E_{h}}$ differ, they have similar properties (see Definition 5 in Liu et al., 2018). However, the computation of ${E_{h}}$ requires $\mathrm{\# }\bar{L}(p)+1$ operations, while that of ${\bar{E}_{h}}$ requires $\mathrm{\# }\bar{L}(p)(\mathrm{\# }\bar{L}(p)-1)/2$. Thus, ${E_{h}}$ is more economical to compute when $\mathrm{\# }\bar{L}(p)\geqslant 4$.
Combining the fuzzy entropy and the hesitancy entropy with an adjustment coefficient θ ($0\leqslant \theta \leqslant 1$), a total entropy is determined as
(21)
\[ {E_{T}}\big(\bar{L}(p)\big)=\theta {E_{F}}\big(\bar{L}(p)\big)+(1-\theta ){E_{H}}\big(\bar{L}(p)\big).\]
As $0\leqslant {E_{F}}(\bar{L}(p))$, ${E_{H}}(\bar{L}(p))\leqslant 1$, one has $0\leqslant {E_{T}}(\bar{L}(p))\leqslant 1$.
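A sketch combining Eqs. (18), (19) and (21); θ is left as a parameter (its value is not fixed by the paper at this point), and with θ = 0.5 the qualitative ordering of Example 2 below is recovered, though the exact entropy values depend on θ.
```python
from math import log

TAU = 3
g = lambda r: (r + TAU) / (2 * TAU)

def expectation(plts):
    """Expected function e(L(p)) of Eq. (2) for a normalized PLTS."""
    return sum(g(r) * p for r, p in plts)

def fuzzy_entropy(plts):
    """E_F of Eq. (18), using f(t) = ln(1 + t - t^2) / ln(5/4)."""
    e = expectation(plts)
    return log(1 + e - e * e) / log(5 / 4)

def hesitancy_entropy(plts):
    """E_H of Eq. (19), i.e. the hesitancy index of Eq. (9)."""
    if len(plts) == 1:
        return 0.0
    g_bar = sum(g(r) for r, _ in plts) / len(plts)
    return 2 * sum(abs(g(r) - g_bar) * p for r, p in plts)

def total_entropy(plts, theta=0.5):
    """E_T of Eq. (21); theta trades fuzziness off against hesitancy."""
    return theta * fuzzy_entropy(plts) + (1 - theta) * hesitancy_entropy(plts)

L1 = [(-3, 0.30), (0, 0.25), (3, 0.45)]        # the PLTSs of Example 2
L2 = [(1, 0.30), (2, 0.25), (3, 0.45)]
print(total_entropy(L1) > total_entropy(L2))   # True: L1 is more uncertain
```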
Furthermore, the total entropy of a probabilistic linguistic matrix is derived.
Definition 18.
Let $\bar{\boldsymbol{U}}={({\bar{L}_{ij}}(p))_{m\times n}}$ be a matrix with PLTSs. The total entropy of $\bar{\boldsymbol{U}}$ is defined as
(22)
\[ {E_{T}}(\bar{\boldsymbol{U}})={\sum \limits_{i=1}^{m}}{\sum \limits_{j=1}^{n}}{E_{T}}\big({\bar{L}_{ij}}(p)\big).\]
Remark 6.
Lin et al. (2019) proposed an information entropy as
(23)
\[ \mu \big(L(p)\big)=-{\sum \limits_{k=1}^{\mathrm{\# }L(p)}}{p^{(k)}}{\log _{2}}{p^{(k)}},\]
where $L(p)$ is a PLTS, $\mathrm{\# }L(p)$ is the number of all different linguistic terms in $L(p)$ and ${p^{(k)}}$ is the probability of kth possible linguistic value of $L(p)$.
It is obvious from Eq. (23) that this information entropy only considers the probabilities of the possible linguistic terms in a PLTS, and ignores the linguistic terms themselves. The total entropy presented in this paper takes both into consideration. Therefore, compared with $\mu (L(p))$ in Eq. (23), the presented total entropy ${E_{T}}(\bar{L}(p))$ has a stronger distinguishing power in measuring the uncertainty of a PLTS, as Example 2 verifies.
Example 2.
Given two PLTSs ${L_{1}}(p)=\{{s_{-3}}(0.30),{s_{0}}(0.25),{s_{3}}(0.45)\}$ and ${L_{2}}(p)=\{{s_{1}}(0.30),{s_{2}}(0.25),{s_{3}}(0.45)\}$, the uncertainty of ${L_{1}}(p)$ is obviously larger than that of ${L_{2}}(p)$. Indeed, the total entropy is calculated as ${E_{T}}({L_{1}}(p))=0.552>{E_{T}}({L_{2}}(p))=0.398$, which is in accordance with our intuition. However, by Eq. (23), one obtains $\mu ({L_{1}}(p))=\mu ({L_{2}}(p))=1.5395$, by which the uncertainties of ${L_{1}}(p)$ and ${L_{2}}(p)$ are considered the same. In other words, $\mu (L(p))$ in Eq. (23) is unable to distinguish the degrees of uncertainty of ${L_{1}}(p)$ and ${L_{2}}(p)$. Therefore, the total entropy proposed in this paper has a stronger distinguishing power.

4.2 The Cross Entropy of PLTSs

In this subsection, to measure the discrimination degree between PLTSs, a cross entropy is defined. Motivated by the cross entropy in the hesitant fuzzy linguistic environment (Gou et al., 2017), the axiomatic definition of cross-entropy measure for PLTSs is given below.
Definition 19.
Suppose $S=\{{s_{-\tau }},\dots ,{s_{0}},{s_{1}},\dots ,{s_{\tau }}\}$ is a linguistic term set. ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ are two normalized ordered PLTSs with $\mathrm{\# }{\bar{L}_{1}}(p)=\mathrm{\# }{\bar{L}_{2}}(p)$, where ${\bar{L}_{t}}(p)=\{{\bar{L}_{t}^{(k)}}({\bar{p}_{t}^{(k)}})|{\bar{L}_{t}^{(k)}}\in S,\hspace{2.5pt}{\bar{p}_{t}^{(k)}}\geqslant 0,\hspace{2.5pt}t=1,2;\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }{\bar{L}_{1}}(p)\}$. Then the cross entropy $CE({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))$ of ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ should satisfy the following conditions:
  • (i) $CE\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)\geqslant 0$;
  • (ii) $CE({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))=0$ if and only if $e({\bar{L}_{1}}(p))=e({\bar{L}_{2}}(p))$.
Theorem 6.
Given two normalized ordered PLTSs ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$, the function
(24)
\[\begin{aligned}{}CE\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)& =e\big({\bar{L}_{1}}(p)\big)\ln \frac{2e({\bar{L}_{1}}(p))}{e({\bar{L}_{1}}(p))+e({\bar{L}_{2}}(p))}\\ {} & \hspace{1em}+\big(1-e\big({\bar{L}_{1}}(p)\big)\big)\ln \frac{2(1-e({\bar{L}_{1}}(p)))}{1-e({\bar{L}_{1}}(p))+1-e({\bar{L}_{2}}(p))}\end{aligned}\]
is a cross entropy.
Proof.
Please see Appendix D.  □
It is observed from Eq. (24) that the cross entropy, $CE({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))$, is not symmetric. A symmetric cross entropy between PLTSs can be obtained as
(25)
\[ D\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)=CE\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)+CE\big({\bar{L}_{2}}(p),{\bar{L}_{1}}(p)\big).\]
Moreover, a symmetric cross entropy between two probabilistic linguistic matrices is defined below.
Definition 20.
Let ${\bar{\boldsymbol{U}}_{1}}={({\bar{L}_{ij}^{1}}(p))_{m\times n}}$ and ${\bar{\boldsymbol{U}}_{2}}={({\bar{L}_{ij}^{2}}(p))_{m\times n}}$ be two matrices with PLTSs. The symmetric cross entropy between ${\bar{\boldsymbol{U}}_{1}}$ and ${\bar{\boldsymbol{U}}_{2}}$ is defined as
(26)
\[ D({\bar{\boldsymbol{U}}_{1}},{\bar{\boldsymbol{U}}_{2}})={\sum \limits_{i=1}^{m}}{\sum \limits_{j=1}^{n}}D\big({\bar{L}_{ij}^{1}}(p),{\bar{L}_{ij}^{2}}(p)\big).\]
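As a computational illustration of Eqs. (24)–(26), the following minimal sketch evaluates the cross entropy and its symmetrization for the two PLTSs of Example 2. Two assumptions are made explicit in the code: the linguistic scale function is taken as $g({s_{\alpha }})=(\alpha +\tau )/2\tau $ (a common choice; the paper's own $g$ is given in Definition 2, not reproduced here), and $e(\cdot )$ is read as the expected transformation value ${\textstyle\sum _{k}}g({\bar{L}^{(k)}}){\bar{p}^{(k)}}$, consistent with the equality condition derived in Appendix D.

```python
import math

TAU = 4  # linguistic term set S = {s_-4, ..., s_4}

def g(alpha):
    # Assumed linguistic scale function: maps the subscript of s_alpha into [0, 1].
    return (alpha + TAU) / (2 * TAU)

def e(plts):
    # Assumed reading of e(.): expected transformation value sum_k g(L^(k)) p^(k).
    return sum(g(a) * p for a, p in plts)

def ce(l1, l2):
    # Cross entropy of Eq. (24); e(.) must lie strictly between 0 and 1.
    e1, e2 = e(l1), e(l2)
    return (e1 * math.log(2 * e1 / (e1 + e2))
            + (1 - e1) * math.log(2 * (1 - e1) / (2 - e1 - e2)))

def d(l1, l2):
    # Symmetric cross entropy of Eq. (25).
    return ce(l1, l2) + ce(l2, l1)

# PLTSs as lists of (subscript, probability) pairs.
L1 = [(-3, 0.30), (0, 0.25), (3, 0.45)]
L2 = [(1, 0.30), (2, 0.25), (3, 0.45)]
print(ce(L1, L2), ce(L2, L1))  # asymmetric in general
print(d(L1, L2))               # symmetric by construction
```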

5 A Novel Method for MAGDM with PLTSs

In this section, we first describe the problems of MAGDM with PLTSs, and then propose a novel method for solving such problems.

5.1 Problem Description

Let $A=\{{A_{1}},{A_{2}},\dots ,{A_{m}}\}$ be the set of m feasible alternatives, $U=\{{u_{1}},{u_{2}},\dots ,{u_{n}}\}$ be the set of attributes whose weights are $\boldsymbol{W}={({w_{1}},{w_{2}},\dots ,{w_{n}})^{\mathrm{T}}}$ with ${\textstyle\sum _{j=1}^{n}}{w_{j}}=1$, and $E=\{{e_{1}},{e_{2}},\dots ,{e_{t}}\}$ be the set of DMs whose weights are $\boldsymbol{\lambda }={({\lambda _{1}},{\lambda _{2}},\dots ,{\lambda _{t}})^{\mathrm{T}}}$ satisfying ${\textstyle\sum _{k=1}^{t}}{\lambda _{k}}=1$. DM ${e_{k}}$ ($k=1,2,\dots ,t$) provides his/her evaluation of alternative ${A_{i}}$ with respect to attribute ${u_{j}}$ in the form of a PLTS ${L_{ij}^{k}}(p)$ ($i=1,2,\dots ,m$; $j=1,2,\dots ,n$; $k=1,2,\dots ,t$) based on the linguistic term set $S=\{{s_{\alpha }}|\alpha =-\tau ,\dots ,-1,0,1,\dots ,\tau \}$. All evaluations construct the decision matrices ${\boldsymbol{U}_{k}}={({L_{ij}^{k}}(p))_{m\times n}}$ $(k=1,2,\dots ,t)$. Denote the normalized ordered decision matrices by ${\bar{\boldsymbol{U}}_{k}}={({\bar{L}_{ij}^{k}}(p))_{m\times n}}$ $(k=1,2,\dots ,t)$.

5.2 Determine DMs’ Weights Based on the Total Entropy and the Symmetric Cross Entropy

In this subsection, an approach is developed to determine DMs’ weights objectively by using the proposed total entropy and symmetric cross entropy of PLTSs. The smaller the uncertainty (i.e. the total entropy) of the individual matrix provided by a DM, the better the quality of the decision information reflected by this matrix, and thus the bigger the weight this DM should be assigned. In virtue of this criterion, a programming model for deriving DMs’ weights is built by minimizing the total entropy of the decision matrices, i.e.
(27)
\[ (M-1)\left\{\begin{array}{l}\min F({\lambda _{1}^{k}})=\frac{1}{t}{\textstyle\textstyle\sum _{k=1}^{t}}{E_{T}}({\bar{\boldsymbol{U}}_{k}}){\lambda _{1}^{k}},\hspace{1em}\\ {} \text{s.t.}\hspace{2.5pt}{\textstyle\textstyle\sum _{k=1}^{t}}{({\lambda _{1}^{k}})^{2}}=1,\hspace{1em}\end{array}\right.\]
where ${E_{T}}({\bar{\boldsymbol{U}}_{k}})$ is the total entropy of ${\bar{\boldsymbol{U}}_{k}}$ $(k=1,2,\dots ,t)$.
Solving Eq. (27) with the Lagrange multiplier approach, one obtains
(28)
\[ {\lambda _{1}^{\ast k}}=\frac{{E_{T}}({\bar{\boldsymbol{U}}_{k}})}{\sqrt{{\textstyle\textstyle\sum _{k=1}^{t}}{({E_{T}}({\bar{\boldsymbol{U}}_{k}}))^{2}}}}.\]
Normalizing ${\lambda _{1}^{\ast k}}$, one has
(29)
\[ {\lambda _{1}^{k}}=\frac{{E_{T}}({\bar{\boldsymbol{U}}_{k}})}{{\textstyle\textstyle\sum _{k=1}^{t}}{E_{T}}({\bar{\boldsymbol{U}}_{k}})}\hspace{1em}(k=1,2,\dots ,t).\]
On the other hand, the closer a DM is to the other DMs, the closer the information supplied by this DM is to that of the group. In this case, this DM should be assigned a larger weight. From this viewpoint, another optimization model for determining DMs’ weights is constructed based on the symmetric cross entropy as
(30)
\[ (M-2)\left\{\begin{array}{l}\min F({\lambda _{2}^{k}})=\frac{1}{t-1}{\textstyle\textstyle\sum _{k=1}^{t}}{\textstyle\textstyle\sum _{\delta =1,\delta \ne k}^{t}}D({\bar{\boldsymbol{U}}_{k}},{\bar{\boldsymbol{U}}_{\delta }}){\lambda _{2}^{k}},\\ {} \text{s.t.}\hspace{2.5pt}{\textstyle\textstyle\sum _{k=1}^{t}}{({\lambda _{2}^{k}})^{2}}=1.\end{array}\right.\]
Eq. (30) can also be solved with Lagrange multiplier approach, and the optimal solutions are derived as
(31)
\[ {\lambda _{2}^{\ast k}}=\frac{\frac{1}{t-1}{\textstyle\textstyle\sum _{\delta =1,\delta \ne k}^{t}}D({\bar{\boldsymbol{U}}_{k}},{\bar{\boldsymbol{U}}_{\delta }})}{\sqrt{{\textstyle\textstyle\sum _{k=1}^{t}}{(\frac{1}{t-1}{\textstyle\textstyle\sum _{\delta =1,\delta \ne k}^{t}}D({\bar{\boldsymbol{U}}_{k}},{\bar{\boldsymbol{U}}_{\delta }}))^{2}}}}\hspace{1em}(k=1,2,\dots ,t).\]
Normalizing ${\lambda _{2}^{\ast k}}$, one gets
(32)
\[ {\lambda _{2}^{k}}=\frac{{\textstyle\textstyle\sum _{\delta =1,\delta \ne k}^{t}}D({\bar{\boldsymbol{U}}_{k}},{\bar{\boldsymbol{U}}_{\delta }})}{{\textstyle\textstyle\sum _{k=1}^{t}}{\textstyle\textstyle\sum _{\delta =1,\delta \ne k}^{t}}D({\bar{\boldsymbol{U}}_{k}},{\bar{\boldsymbol{U}}_{\delta }})}\hspace{1em}(k=1,2,\dots ,t).\]
Combining Eq. (29) and Eq. (32), the ultimate DMs’ weights ${\lambda _{k}}$ are determined as
(33)
\[ {\lambda _{k}}=\beta {\lambda _{1}^{k}}+(1-\beta ){\lambda _{2}^{k}}\hspace{1em}(k=1,2,\dots ,t),\]
where $\beta \in [0,1]$ is a compromise coefficient.

5.3 Construct Bi-Objective Programs For Deriving Attribute Weights

In the group decision making process, attribute weights play an important role because different attribute weights may result in different rankings of alternatives. In this subsection, considering the cases where the attribute weight information is completely unknown or partially known, two bi-objective programs are constructed for deriving the attribute weights.
(i) Aggregating individual decision matrices into a collective one
By employing DMs’ weights determined in Section 5.2, individual decision matrices are integrated into a collective one with the proposed PLWAM operator. Denote the collective decision matrix by $\boldsymbol{U}={({L_{ij}}(p))_{m\times n}}$, where
(34)
\[ {L_{ij}}(p)=\text{PLWAM}\big({L_{ij}^{1}}(p),{L_{ij}^{2}}(p),\dots ,{L_{ij}^{t}}(p)\big).\]
Then, the normalized ordered collective decision matrix $\bar{\boldsymbol{U}}={({\bar{L}_{ij}}(p))_{m\times n}}$ is obtained from $\boldsymbol{U}={({L_{ij}}(p))_{m\times n}}$ by Definition 4, where ${\bar{L}_{ij}}(p)=\{{\bar{L}_{ij}^{(k)}}({\bar{p}_{ij}^{(k)}})|{\bar{L}_{ij}^{(k)}}\in S,\hspace{2.5pt}{\bar{p}_{ij}^{(k)}}\geqslant 0,\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }{\bar{L}_{ij}}(p)\}$.
(ii) Constructing bi-objective programs for deriving attribute weights
It is known that an attribute plays a more important role if the performance values of the alternatives on it differ markedly. Thus, such an attribute should be given a bigger weight. Conversely, if the evaluation values of the alternatives with respect to a certain attribute differ little, this attribute should be given a smaller weight. In addition, the credibility of the decision information on an attribute should also be taken into account while determining attribute weights: the more credible the evaluation values on an attribute, the bigger the weight this attribute should be assigned. As mentioned before, the symmetric cross entropy and the total entropy can reflect the differences between alternatives and the credibility of evaluation values, respectively. Keeping this idea in mind, we can determine attribute weights by maximizing the symmetric cross entropy as well as minimizing the total entropy. Thus, a bi-objective program is established when the attribute weight information is completely unknown, i.e.
(35)
\[ (M-3)\left\{\begin{array}{l}\max F(\boldsymbol{w})={\textstyle\textstyle\sum _{j=1}^{n}}{F_{j}}(\boldsymbol{w})={\textstyle\textstyle\sum _{j=1}^{n}}({\textstyle\textstyle\sum _{i=1}^{m}}{\textstyle\textstyle\sum _{r=1,r\ne i}^{m}}D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))){w_{j}},\\ {} \min T(\boldsymbol{w})={\textstyle\textstyle\sum _{j=1}^{n}}{T_{j}}(\boldsymbol{w})={\textstyle\textstyle\sum _{j=1}^{n}}({\textstyle\textstyle\sum _{i=1}^{m}}{E_{T}}({\bar{L}_{ij}}(p))){w_{j}},\\ {} \text{s.t.}\hspace{2.5pt}{\textstyle\textstyle\sum _{j=1}^{n}}{({w_{j}})^{2}}=1.\end{array}\right.\]
As $0\leqslant {E_{T}}({\bar{L}_{ij}}(p))\leqslant 1$, Eq. (35) can be converted into the following single objective program:
(36)
\[ (M-4)\left\{\begin{array}{l}\max F(\boldsymbol{w})={\textstyle\textstyle\sum _{j=1}^{n}}{F_{j}}(\boldsymbol{w})={\textstyle\textstyle\sum _{j=1}^{n}}{\textstyle\textstyle\sum _{i=1}^{m}}({\textstyle\textstyle\sum _{r=1,r\ne i}^{m}}D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))+(1-{E_{T}}({\bar{L}_{ij}}(p)))){w_{j}},\\ {} \text{s.t.}\hspace{2.5pt}{\textstyle\textstyle\sum _{j=1}^{n}}{({w_{j}})^{2}}=1.\end{array}\right.\]
Solving Eq. (36) with Lagrange multiplier approach, the weights of attributes are derived as
(37)
\[\begin{aligned}{}& {w_{j}^{\ast }}=\frac{{\textstyle\textstyle\sum _{i=1}^{m}}({\textstyle\textstyle\sum _{r=1,r\ne i}^{m}}D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))+(1-{E_{T}}({\bar{L}_{ij}}(p))))}{\sqrt{{\textstyle\textstyle\sum _{j=1}^{n}}{({\textstyle\textstyle\sum _{i=1}^{m}}({\textstyle\textstyle\sum _{r=1,r\ne i}^{m}}D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))+(1-{E_{T}}({\bar{L}_{ij}}(p)))))^{2}}}}\\ {} & \hspace{1em}(j=1,2,\dots ,n).\end{aligned}\]
The normalized weights ${w_{j}}$ are obtained by normalizing ${w_{j}^{\ast }}$, i.e.
(38)
\[\begin{aligned}{}& {w_{j}}=\frac{{\textstyle\textstyle\sum _{i=1}^{m}}({\textstyle\textstyle\sum _{r=1,r\ne i}^{m}}D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))+(1-{E_{T}}({\bar{L}_{ij}}(p))))}{{\textstyle\textstyle\sum _{j=1}^{n}}{\textstyle\textstyle\sum _{i=1}^{m}}({\textstyle\textstyle\sum _{r=1,r\ne i}^{m}}D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))+(1-{E_{T}}({\bar{L}_{ij}}(p))))}\\ {} & \hspace{1em}(j=1,2,\dots ,n).\end{aligned}\]
Similarly, for the situations where the information of attribute weights is incomplete, another programming model is obtained by modifying Eq. (36), i.e.
(39)
\[ (M-5)\left\{\begin{array}{l}\max F(\boldsymbol{w})={\textstyle\textstyle\sum _{j=1}^{n}}{F_{j}}(\boldsymbol{w})={\textstyle\textstyle\sum _{j=1}^{n}}{\textstyle\textstyle\sum _{i=1}^{m}}(\frac{1}{m-1}{\textstyle\textstyle\sum _{r=1,r\ne i}^{m}}D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))\\ {} \hspace{2em}+(1-{E_{T}}({\bar{L}_{ij}}(p)))){w_{j}},\\ {} \text{s.t.}\hspace{2.5pt}{w_{j}}\in \Omega ;\hspace{2.5pt}{\textstyle\textstyle\sum _{j=1}^{n}}{({w_{j}})^{2}}=1,\end{array}\right.\]
where Ω represents the incomplete attribute weight information. Solving Eq. (39) with the Lingo software, the corresponding optimal solution vector ${\boldsymbol{w}^{\ast }}={({w_{1}^{\ast }},{w_{2}^{\ast }},\dots ,{w_{n}^{\ast }})^{\mathrm{T}}}$ is generated and then normalized as the attribute weight vector $\boldsymbol{w}={({w_{1}},{w_{2}},\dots ,{w_{n}})^{\mathrm{T}}}$.
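For readers without Lingo, model (M-5) can also be solved with a general nonlinear programming routine. The sketch below uses scipy's SLSQP solver; the objective coefficients $c_j$ (the bracketed cross-entropy/entropy terms multiplying each $w_j$ in Eq. (39)) and the constraint set Ω (modelled here as simple bounds) are hypothetical placeholders, not values from this paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective coefficients c_j of Eq. (39), one per attribute.
c = np.array([4.29, 3.86, 3.03, 3.47])

# Hypothetical incomplete weight information Omega: here 0.1 <= w_j <= 0.6.
bounds = [(0.1, 0.6)] * len(c)
constraints = [{'type': 'eq', 'fun': lambda w: np.sum(w ** 2) - 1.0}]

# Maximize c.w subject to w in Omega and the sum of squares equal to 1.
res = minimize(lambda w: -c @ w, x0=np.full(len(c), 0.5),
               method='SLSQP', bounds=bounds, constraints=constraints)
w = res.x / res.x.sum()  # normalize the optimal solution into attribute weights
print(w.round(4))
```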

5.4 Ranking Alternatives by an Improved PL-PROMETHEE Method

The PROMETHEE method is a popular outranking method in decision making. It has been extended to different fuzzy environments, such as intuitionistic fuzzy sets (Krishankumar et al., 2017) and linguistic variables (Halouani et al., 2009). In the PLTS environment, although Xu et al. (2019) proposed a PL-PROMETHEE method, its distinguishing power is not high, which is verified by Example 3 in the sequel. To improve the distinguishing power, this subsection constructs new probabilistic linguistic preference functions, an integrated preference index and positive/negative outranking flows to adapt the main framework of PROMETHEE to the PLTS context. Thus, an improved PL-PROMETHEE method is proposed to solve MAGDM problems with PLTSs.
(i) The probabilistic linguistic preference function
Suppose that ${A_{i}}$ and ${A_{r}}$ are two alternatives in the alternative set. ${\bar{L}_{ij}}(p)$ and ${\bar{L}_{rj}}(p)$ are respectively normalized ordered attribute values of ${A_{i}}$ and ${A_{r}}$ with respect to attribute ${u_{j}}$. Preference function ${P_{X}^{j}}({A_{i}},{A_{r}})$ of ${\bar{L}_{ij}}(p)$ with respect to ${\bar{L}_{rj}}(p)$ is defined as
(40)
\[ {P_{X}^{j}}({A_{i}},{A_{r}})={L_{jir}}(p)=\bigcup \limits_{\genfrac{}{}{0pt}{}{{k_{1}}=1,2,\dots ,\mathrm{\# }{\bar{L}_{ij}}(p)}{{k_{2}}=1,2,\dots ,\mathrm{\# }{\bar{L}_{rj}}(p)}}\big\{P\big({L_{ij}^{({k_{1}})}},{L_{rj}^{({k_{2}})}}\big)\big\}\hspace{1em}(i\ne r),\]
where
(41)
\[ P\big({L_{ij}^{({k_{1}})}},{L_{rj}^{({k_{2}})}}\big)=\left\{\begin{array}{l}{s_{-\tau }}({\bar{p}_{ij}^{({k_{1}})}}{\bar{p}_{rj}^{({k_{2}})}})\hspace{1em}\text{if}\hspace{2.5pt}g({\bar{L}_{ij}^{({k_{1}})}}){\bar{p}_{ij}^{({k_{1}})}}-g({\bar{L}_{rj}^{({k_{2}})}}){\bar{p}_{rj}^{({k_{2}})}}\leqslant q,\\ {} {g^{-1}}(\frac{g({\bar{L}_{ij}^{({k_{1}})}}){\bar{p}_{ij}^{({k_{1}})}}-g({\bar{L}_{rj}^{({k_{2}})}}){\bar{p}_{rj}^{({k_{2}})}}}{p})({\bar{p}_{ij}^{({k_{1}})}}{\bar{p}_{rj}^{({k_{2}})}})\\ {} \hspace{1em}\text{if}\hspace{2.5pt}q<g({\bar{L}_{ij}^{({k_{1}})}}){\bar{p}_{ij}^{({k_{1}})}}-g({\bar{L}_{rj}^{({k_{2}})}}){\bar{p}_{rj}^{({k_{2}})}}\leqslant p,\\ {} {s_{\tau }}({\bar{p}_{ij}^{({k_{1}})}}{\bar{p}_{rj}^{({k_{2}})}})\hspace{1em}\text{if}\hspace{2.5pt}g({\bar{L}_{ij}^{({k_{1}})}}){\bar{p}_{ij}^{({k_{1}})}}-g({\bar{L}_{rj}^{({k_{2}})}}){\bar{p}_{rj}^{({k_{2}})}}>p.\end{array}\right.\]
In Eq. (41), parameters $q(\geqslant 0)$ and $p(\geqslant 0)$ are the indifference threshold and the strict preference threshold, respectively. Functions $g(x)$ and ${g^{-1}}(x)$ are defined in Definition 2.
Obviously, the preference function ${P_{X}^{j}}({A_{i}},{A_{r}})$ constructed by Eqs. (40) and (41) is a PLTS.
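To make Eqs. (40) and (41) concrete, here is a minimal Python sketch of the preference function. It assumes the usual transformation pair $g({s_{\alpha }})=(\alpha +\tau )/2\tau $ and ${g^{-1}}(x)={s_{(2x-1)\tau }}$ for Definition 2 (not reproduced here); under these assumptions, with $q=0$ and $p=0.5$ it reproduces the preference values reported in Example 3 below.

```python
TAU = 4  # S = {s_-4, ..., s_4}

def g(alpha):
    # Assumed transformation function of Definition 2.
    return (alpha + TAU) / (2 * TAU)

def g_inv(x):
    # Assumed inverse transformation: returns the subscript of g^{-1}(x).
    return (2 * x - 1) * TAU

def preference(l_i, l_r, q=0.0, p=0.5):
    # Preference function P_X^j(A_i, A_r) of Eqs. (40)-(41).
    # l_i, l_r: normalized ordered PLTSs as lists of (subscript, probability).
    result = []
    for a1, p1 in l_i:
        for a2, p2 in l_r:
            diff = g(a1) * p1 - g(a2) * p2
            if diff <= q:
                term = -TAU             # indifference: s_{-tau}
            elif diff <= p:
                term = g_inv(diff / p)  # partial preference
            else:
                term = TAU              # strict preference: s_{tau}
            result.append((round(term, 4), round(p1 * p2, 4)))
    return result

L1 = [(2, 0.35), (3, 0.25), (4, 0.40)]   # ratings used in Example 3
L2 = [(-3, 0.70), (-2, 0.30)]
print(preference(L1, L2))  # contains (-1.9, 0.175), (-1.2, 0.245), (1.0, 0.28), ...
```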
(ii) The integrated preference index
According to the probabilistic linguistic preference index ${P_{X}^{j}}({A_{i}},{A_{r}})$, an integrated preference index is defined as
(42)
\[ {\pi _{X}}({A_{i}},{A_{r}})={\sum \limits_{j=1}^{n}}{w_{j}}T\big({P_{X}^{j}}({A_{i}},{A_{r}})\big),\]
where $T({P_{X}^{j}}({A_{i}},{A_{r}}))$ is the closeness degree of ${P_{X}^{j}}({A_{i}},{A_{r}})$, computed by Eq. (16).
(iii) The positive and negative outranking flows
Employing the integrated preference indices, the positive/negative outranking flows, ${\phi ^{+}}({A_{i}})$ and ${\phi ^{-}}({A_{i}})$, are defined as follows.
(43)
\[\begin{aligned}{}& {\phi ^{+}}({A_{i}})=\frac{1}{m-1}{\sum \limits_{r=1,r\ne i}^{m}}{\pi _{X}}({A_{i}},{A_{r}}).\end{aligned}\]
(44)
\[\begin{aligned}{}& {\phi ^{-}}({A_{i}})=\frac{1}{m-1}{\sum \limits_{r=1,r\ne i}^{m}}{\pi _{X}}({A_{r}},{A_{i}}).\end{aligned}\]
(iv) Ranking alternatives.
In virtue of Eqs. (43) and (44), the net flow of alternative ${A_{i}}$ is calculated as
(45)
\[ \phi ({A_{i}})={\phi ^{+}}({A_{i}})-{\phi ^{-}}({A_{i}}).\]
Finally, alternatives are ranked by the descending order of net flows.
Remark 7.
In the PL-PROMETHEE method (Xu et al., 2019), the preference function and the total preference index are defined respectively as
(46)
\[ {P_{j}}({A_{i}},{A_{r}})=\left\{\begin{array}{l@{\hskip4.0pt}l}0,\hspace{1em}& p({A_{i,j}}>{A_{r,j}})\leqslant 0.5,\\ {} 1,\hspace{1em}& p({A_{i,j}}>{A_{r,j}})>0.5\end{array}\right.\]
and
(47)
\[ \pi ({A_{i}},{A_{r}})={\sum \limits_{j=1}^{n}}{P_{j}}({A_{i}},{A_{r}}){w_{j}},\]
where $p({A_{i,j}}>{A_{r,j}})$ is the probability of ${A_{i}}$ being preferred to ${A_{r}}$ with respect to ${u_{j}}$.
Obviously, the value of the preference function ${P_{j}}({A_{i}},{A_{r}})$ in Eq. (46) only takes 0 or 1, which may result in the loss of decision information. However, the preference function ${P_{X}^{j}}({A_{i}},{A_{r}})$ in Eq. (40) is described by a PLTS, which is more flexible than a crisp number for representing decision information. Therefore, compared with PL-PROMETHEE, the distinguishing power of the improved PL-PROMETHEE is stronger and its sensitivity to the ratings of alternatives on attributes is higher. To illustrate this advantage, Example 3 is given below.
Example 3.
For convenience, we consider only a single attribute of a real problem. Suppose that the ratings of three alternatives on this attribute are ${L_{1}}(p)=\{{s_{2}}(0.35),{s_{3}}(0.25),{s_{4}}(0.40)\}$, ${L_{2}}(p)=\{{s_{-3}}(0.70),{s_{-2}}(0.30)\}$ and ${L_{3}}(p)=\{{s_{1}}(0.40),{s_{2}}(0.25),{s_{4}}(0.35)\}$, respectively. Intuitively, the relation ${L_{1}}(p)\succ {L_{3}}(p)\succ {L_{2}}(p)$ holds, where the symbol “≻” means “preferred to”. Hence, it is deduced that the degree to which ${L_{1}}(p)$ is preferred to ${L_{2}}(p)$ should exceed that to which ${L_{1}}(p)$ is preferred to ${L_{3}}(p)$. Indeed, taking $q=0$ and $p=0.5$ in the proposed preference function (i.e. Eq. (41)), the preference values are calculated as ${P_{X}^{1}}({A_{1}},{A_{2}})=\{{s_{-1.9}}(0.175),{s_{-1.7}}(0.075),{s_{-1.2}}(0.245),{s_{-1}}(0.105),{s_{1}}(0.280),{s_{1.2}}(0.120)\}$ and ${P_{X}^{1}}({A_{1}},{A_{3}})=\{{s_{-4}}(0.31),{s_{-3.8}}(0.140),{s_{-3.5}}(0.063),{s_{-3.2}}(0.14),{s_{-2.8}}(0.088),{s_{-1.6}}(0.160),{s_{-0.6}}(0.100)\}$. The preference indices are calculated as ${\pi _{X}}({A_{1}},{A_{2}})=0.455>{\pi _{X}}({A_{1}},{A_{3}})=0.306$. This result is in line with human intuition. However, using the preference function and preference index of the method of Xu et al. (2019) (i.e. Eqs. (46) and (47)), one obtains ${P_{1}}({A_{1}},{A_{2}})={P_{1}}({A_{1}},{A_{3}})=1$ and ${\pi _{12}}={\pi _{13}}=1$, which is not consistent with the above analysis. In other words, the method of Xu et al. (2019) is unable to distinguish alternatives ${A_{2}}$ and ${A_{3}}$. Therefore, the improved PL-PROMETHEE method has a stronger distinguishing power. Its sensitivity is also stronger: when ${L_{2}}(p)$ is increased to ${L^{\prime }_{2}}(p)=\{{s_{2}}(0.80),{s_{3}}(0.20)\}$ while ${L_{1}}(p)$ and ${L_{3}}(p)$ remain unchanged, the corresponding preference index ${\pi _{X}}({A_{1}},{A_{2}})$ decreases from 0.455 to 0.271, whereas ${\pi _{12}}$ does not change.

5.5 A Novel Method for MAGDM with Probabilistic Linguistic Information

A novel method is generated for MAGDM problems with PLTSs. The main procedure of this method is outlined below.
Step 1. Each DM establishes his/her individual probabilistic linguistic matrices ${\boldsymbol{U}_{k}}={({L_{ij}^{k}}(p))_{m\times n}}$ ($k=1,2,\dots ,t$). Further, matrices ${\boldsymbol{U}_{k}}$ are transformed into corresponding normalized ordered matrices ${\bar{\boldsymbol{U}}_{k}}={({\bar{L}_{ij}^{k}}(p))_{m\times n}}$ $(k=1,2,\dots ,t)$ by Definitions 4-6.
Step 2. Calculate the total entropy ${E_{T}}({\bar{\boldsymbol{U}}_{k}})$ ($k=1,2,\dots ,t$) and the symmetric cross entropy $D({\bar{\boldsymbol{U}}_{k}},{\bar{\boldsymbol{U}}_{\delta }})$ ($k,\delta =1,2,\dots ,t;k\ne \delta $) by Eqs. (18), (19), (21), (24) and (25).
Step 3. Determine the weight vector of DMs $\boldsymbol{\lambda }={({\lambda _{1}},{\lambda _{2}},\dots ,{\lambda _{t}})^{\mathrm{T}}}$ by Eqs. (29), (32) and (33).
Step 4. Aggregate all matrices ${\bar{\boldsymbol{U}}_{k}}={({L_{ij}^{k}}(p))_{m\times n}}$ ($k=1,2,\dots ,t$) into a collective one $\boldsymbol{U}={({L_{ij}}(p))_{m\times n}}$ by the PLWAM operator (Eq. (4)), and then transform matrix $\boldsymbol{U}$ into a normalized ordered matrix $\bar{\boldsymbol{U}}={({\bar{L}_{ij}}(p))_{m\times n}}$.
Step 5. Calculate the entropy ${E_{T}}({\bar{L}_{ij}}(p))$ ($i=1$, $2,\dots ,m;j=1,2,\dots ,n$) and the symmetric cross entropy $D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))$ ($i,r=1,2,\dots ,m$; $r\ne i$; $j=1,2,\dots ,n$) by Eqs. (18), (19), (21), (24) and (25).
Step 6. Determine the weight vector of attributes $\boldsymbol{w}={({w_{1}},{w_{2}},\dots ,{w_{n}})^{\mathrm{T}}}$ by Eq. (38) or (39).
Step 7. Compute the preference function values ${P_{X}^{j}}({A_{i}},{A_{r}})$ ($i,r=1,2,\dots ,m$; $r\ne i$; $j=1,2,\dots ,n$) by Eqs. (40) and (41).
Step 8. Obtain the total preference index matrix $\boldsymbol{\pi }={({\pi _{X}}({A_{i}},{A_{r}}))_{m\times m}}$ by Eq. (42).
Step 9. Calculate the positive and negative flows ${\phi ^{+}}({A_{i}})$ and ${\phi ^{-}}({A_{i}})$ ($i=1,2,\dots ,m$) based on Eqs. (43) and (44).
Step 10. Determine net flows $\phi ({A_{i}})$ ($i=1,2,\dots ,m$) by Eq. (45). Alternatives are ranked based on the descending orders of $\phi ({A_{i}})$ ($i=1,2,\dots ,m$).

6 A Case Study

In this section, an example of a car sharing platform selection is provided to illustrate the application of the proposed method. Furthermore, the comparative analyses are performed to show the merits of the proposed method.

6.1 Car Sharing Platform Selection

With the rapid development of internet technology and the strong advocacy of green travel, car sharing has sprung up in recent years. Up to now, several car sharing platforms have emerged in China, such as Evcard, Gofun, Togo and so on. The popularization of car sharing greatly facilitates people’s travel and relieves traffic pressure.
As a famous tourist city in China, Guilin cannot satisfy tourists’ travel demand owing to the limited operational capacity of its public transportation. So it is necessary for the government to introduce a car sharing platform to relieve the traffic problem. Now, the government invites three DMs to select the best car sharing platform from four candidate platforms (alternatives): Evcard, Gofun, Togo and Greengo. Four attributes are considered, including safety (${u_{1}}$), convenience (${u_{2}}$), service (${u_{3}}$) and car brand (${u_{4}}$). A linguistic term set $S=\{{s_{-4}},{s_{-3}},{s_{-2}},{s_{-1}},{s_{0}},{s_{1}},{s_{2}},{s_{3}},{s_{4}}\}$ is adopted, whose meanings are provided in Table 3. The three invited DMs, ${d_{1}}$, ${d_{2}}$ and ${d_{3}}$, utilize PLTSs to evaluate the four alternatives with respect to the four attributes, and construct three probabilistic linguistic decision matrices ${\boldsymbol{U}_{1}}$, ${\boldsymbol{U}_{2}}$ and ${\boldsymbol{U}_{3}}$.
Step 1. Each DM establishes decision matrices ${\boldsymbol{U}_{1}}$, ${\boldsymbol{U}_{2}}$ and ${\boldsymbol{U}_{3}}$ as shown in Table 4, and the corresponding normalized ordered matrices ${\bar{\boldsymbol{U}}_{1}}$, ${\bar{\boldsymbol{U}}_{2}}$ and ${\bar{\boldsymbol{U}}_{3}}$ are listed in Table 5.
Table 3
Linguistic variables corresponding to linguistic terms.
Linguistic variables Linguistic terms
Very bad ${s_{-4}}$
Bad ${s_{-3}}$
A little bad ${s_{-2}}$
Slightly bad ${s_{-1}}$
Medium ${s_{0}}$
Slightly good ${s_{1}}$
A little good ${s_{2}}$
Good ${s_{3}}$
Very good ${s_{4}}$
Table 4
Probabilistic linguistic decision matrices ${\boldsymbol{U}_{1}}$, ${\boldsymbol{U}_{2}}$ and ${\boldsymbol{U}_{3}}$.
${u_{1}}$ ${u_{2}}$ ${u_{3}}$ ${u_{4}}$
${d_{1}}$ ${A_{1}}$ $\{{s_{-2}}(0.4),{s_{1}}(0.5)\}$ $\{{s_{2}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{0}}(1)\}$ $\{{s_{-2}}(0.4),{s_{-1}}(0.6)\}$
${A_{2}}$ $\{{s_{4}}(1)\}$ $\{{s_{2}}(0.4),{s_{4}}(0.5)\}$ $\{{s_{0}}(0.3),{s_{1}}(0.3),{s_{2}}(0.4)\}$ $\{{s_{2}}(0.3),{s_{3}}(0.7)\}$
${A_{3}}$ $\{{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{1}}(0.4),{s_{2}}(0.3),{s_{3}}(0.3)\}$ $\{{s_{2}}(1)\}$ $\{{s_{-1}}(0.8),{s_{1}}(0.2)\}$
${A_{4}}$ $\{{s_{-1}}(1)\}$ $\{{s_{1}}(0.6),{s_{2}}(0.4)\}$ $\{{s_{0}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{3}}(1)\}$
${d_{2}}$ ${A_{1}}$ $\{{s_{-1}}(0.3),{s_{1}}(0.7)\}$ $\{{s_{2}}(0.4),{s_{3}}(0.2),{s_{4}}(0.4)\}$ $\{{s_{0}}(0.7),{s_{1}}(0.3)\}$ $\{{s_{-1}}(1)\}$
${A_{2}}$ $\{{s_{3}}(0.4),{s_{4}}(0.5)\}$ $\{{s_{3}}(1)\}$ $\{{s_{3}}(0.5),{s_{4}}(0.5)\}$ $\{{s_{0}}(0.4),{s_{2}}(0.6)\}$
${A_{3}}$ $\{{s_{0}}(0.7),{s_{1}}(0.3)\}$ $\{{s_{1}}(0.6),{s_{2}}(0.3)\}$ $\{{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{-1}}(1)\}$
${A_{4}}$ $\{{s_{-2}}(1)\}$ $\{{s_{1}}(0.5),{s_{2}}(0.4)\}$ $\{{s_{-1}}(1)\}$ $\{{s_{1}}(0.1),{s_{2}}(0.2),{s_{3}}(0.7)\}$
${d_{3}}$ ${A_{1}}$ $\{{s_{0}}(0.4),{s_{1}}(0.6)\}$ $\{{s_{3}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{-1}}(0.2),{s_{0}}(0.8)\}$ $\{{s_{-1}}(1)\}$
${A_{2}}$ $\{{s_{3}}(0.4),{s_{4}}(0.6)\}$ $\{{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{3}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{2}}(1)\}$
${A_{3}}$ $\{{s_{0}}(0.6),{s_{1}}(0.4)\}$ $\{{s_{1}}(1)\}$ $\{{s_{-1}}(0.2),{s_{2}}(0.8)\}$ $\{{s_{-1}}(0.7),{s_{1}}(0.3)\}$
${A_{4}}$ $\{{s_{-4}}(0.4),{s_{-2}}(0.4)\}$ $\{{s_{1}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{-1}}(0.8),{s_{0}}(0.2)\}$ $\{{s_{2}}(1)\}$
Step 2. Calculate the total entropy and the symmetric cross entropy.
Table 5
Normalized ordered decision matrices ${\bar{\boldsymbol{U}}_{1}}$, ${\bar{\boldsymbol{U}}_{2}}$ and ${\bar{\boldsymbol{U}}_{3}}$.
${u_{1}}$ ${u_{2}}$ ${u_{3}}$ ${u_{4}}$
${d_{1}}$ ${A_{1}}$ $\{{s_{-2}}(4/9),{s_{-2}}(0),{s_{1}}(5/9)\}$ $\{{s_{2}}(0),{s_{2}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{0}}(0),{s_{0}}(0),{s_{0}}(1)\}$ $\{{s_{-2}}(0.4),{s_{-1}}(0.6),{s_{-2}}(0)\}$
${A_{2}}$ $\{{s_{4}}(0),{s_{4}}(0),{s_{4}}(1)\}$ $\{{s_{2}}(0),{s_{2}}(4/9),{s_{4}}(5/9)\}$ $\{{s_{0}}(0.3),{s_{1}}(0.3),{s_{2}}(0.4)\}$ $\{{s_{2}}(0),{s_{2}}(0.3),{s_{3}}(0.7)\}$
${A_{3}}$ $\{{s_{1}}(0),{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{1}}(0.4),{s_{2}}(0.3),{s_{3}}(0.3)\}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(1)\}$ $\{{s_{-1}}(0.8),{s_{-1}}(0),{s_{1}}(0.2)\}$
${A_{4}}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$ $\{{s_{1}}(0),{s_{1}}(0.6),{s_{2}}(0.4)\}$ $\{{s_{0}}(0),{s_{0}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{3}}(0),{s_{3}}(0),{s_{3}}(1)\}$
${d_{2}}$ ${A_{1}}$ $\{{s_{-1}}(0.3),{s_{-1}}(0),{s_{1}}(0.7)\}$ $\{{s_{3}}(0.2),{s_{2}}(0.4),{s_{4}}(0.4)\}$ $\{{s_{0}}(0),{s_{0}}(0.7),{s_{1}}(0.3)\}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$
${A_{2}}$ $\{{s_{3}}(0),{s_{3}}(4/9),{s_{4}}(5/9)\}$ $\{{s_{3}}(0),{s_{3}}(0),{s_{3}}(1)\}$ $\{{s_{3}}(0),{s_{3}}(0.5),{s_{4}}(0.5)\}$ $\{{s_{0}}(0),{s_{0}}(0.4),{s_{2}}(0.6)\}$
${A_{3}}$ $\{{s_{0}}(0),{s_{0}}(0.7),{s_{1}}(0.3)\}$ $\{{s_{1}}(0),{s_{1}}(6/9),{s_{2}}(3/9)\}$ $\{{s_{1}}(0),{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$
${A_{4}}$ $\{{s_{-2}}(1),{s_{-2}}(0),{s_{-2}}(0)\}$ $\{{s_{1}}(0),{s_{1}}(5/9),{s_{2}}(4/9)\}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$ $\{{s_{1}}(0.1),{s_{2}}(0.2),{s_{3}}(0.7)\}$
${d_{3}}$ ${A_{1}}$ $\{{s_{0}}(0),{s_{0}}(0.4),{s_{1}}(0.6)\}$ $\{{s_{3}}(0),{s_{3}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{-1}}(0.2),{s_{-1}}(0),{s_{0}}(0.8)\}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$
${A_{2}}$ $\{{s_{3}}(0),{s_{3}}(0.4),{s_{4}}(0.6)\}$ $\{{s_{1}}(0),{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{3}}(0),{s_{3}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{0}}(0),{s_{0}}(0.1),{s_{1}}(0.9)\}$
${A_{3}}$ $\{{s_{0}}(0),{s_{0}}(0.6),{s_{1}}(0.4)\}$ $\{{s_{1}}(0),{s_{1}}(0),{s_{1}}(1)\}$ $\{{s_{-1}}(0.2),{s_{-1}}(0),{s_{2}}(0.8)\}$ $\{{s_{-1}}(0.7),{s_{-1}}(0),{s_{1}}(0.3)\}$
${A_{4}}$ $\{{s_{-4}}(0.5),{s_{-2}}(0.5),{s_{-4}}(0)\}$ $\{{s_{1}}(0),{s_{1}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{-1}}(0.8),{s_{-1}}(0),{s_{0}}(0.2)\}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(1)\}$
By Eqs. (18), (19), (21) and (22), the total entropies of matrices ${\bar{\boldsymbol{U}}_{1}}$, ${\bar{\boldsymbol{U}}_{2}}$ and ${\bar{\boldsymbol{U}}_{3}}$ are calculated as ${E_{T}}({\bar{\boldsymbol{U}}_{1}})=7.4022$, ${E_{T}}({\bar{\boldsymbol{U}}_{2}})=7.4223$ and ${E_{T}}({\bar{\boldsymbol{U}}_{3}})=7.7322$, respectively. Employing Eqs. (24)–(26), the symmetric cross entropies are obtained and presented in Table 6.
Step 3. Determine the weight vector of DMs.
Table 6
The symmetric cross entropies between ${\bar{\boldsymbol{U}}_{1}}$, ${\bar{\boldsymbol{U}}_{2}}$ and ${\bar{\boldsymbol{U}}_{3}}$.
$D({\bar{\boldsymbol{U}}_{k}},{\bar{\boldsymbol{U}}_{\delta }})$ ${\bar{\boldsymbol{U}}_{1}}$ ${\bar{\boldsymbol{U}}_{2}}$ ${\bar{\boldsymbol{U}}_{3}}$
${\bar{\boldsymbol{U}}_{1}}$ – 0.2804 0.3965
${\bar{\boldsymbol{U}}_{2}}$ 0.2804 – 0.084
${\bar{\boldsymbol{U}}_{3}}$ 0.3965 0.084 –
Suppose $\beta =0.15$, and the weight vector of DMs is derived by Eqs. (29), (32) and (33) as $\boldsymbol{\lambda }={(0.4273,0.2529,0.3198)^{\mathrm{T}}}$.
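This computation is easy to verify; below is a minimal NumPy sketch reproducing Steps 2–3 from the total entropies above and Table 6.

```python
import numpy as np

ET = np.array([7.4022, 7.4223, 7.7322])      # total entropies of U1, U2, U3
D = np.array([[0.0,    0.2804, 0.3965],      # symmetric cross entropies (Table 6)
              [0.2804, 0.0,    0.0840],
              [0.3965, 0.0840, 0.0]])

lam1 = ET / ET.sum()                   # Eq. (29)
lam2 = D.sum(axis=1) / D.sum()         # Eq. (32)
beta = 0.15
lam = beta * lam1 + (1 - beta) * lam2  # Eq. (33)
print(lam.round(4))                    # [0.4273 0.2529 0.3198]
```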
Step 4. Aggregate individual decision matrices into a collective one.
By employing the PLWAM operator (see Eq. (4)), the individual normalized ordered matrices ${\bar{\boldsymbol{U}}_{k}}$ ($k=1,2,3$) are aggregated into a collective one $\boldsymbol{U}={({L_{ij}}(p))_{m\times n}}$, which is then converted into a normalized ordered matrix $\bar{\boldsymbol{U}}={({\bar{L}_{ij}}(p))_{m\times n}}$; please see Table 7.
Step 5. Calculate the total entropy and symmetric cross entropy of alternatives.
Table 7
Collective normalized ordered matrix $\bar{\boldsymbol{U}}$.
${u_{1}}$ ${u_{2}}$ ${u_{3}}$ ${u_{4}}$
${d_{1}}$ ${A_{1}}$ $\begin{array}[t]{l}\{{s_{-1.03}}(0.05),{s_{-0.59}}(0.08),\\ {} {s_{-0.42}}(0.12),{s_{-0.03}}(0.19),\\ {} {s_{0.26}}(0.07),{s_{0.58}}(0.10),\\ {} {s_{0.71}}(0.16),{s_{1}}(0.23)\}\end{array}$ $\begin{array}[t]{l}\{{s_{2.40}}(0.14),{s_{2.66}}(0.07),\\ {} {s_{4}}(0.78)\}\end{array}$ $\begin{array}[t]{l}\{{s_{-0.30}}(0.14),{s_{0}}(0.56),\\ {} {s_{0.01}}(0.06),{s_{0.28}}(0.24)\}\end{array}$ $\{{s_{-1.40}}(0.4),{s_{-1}}(0.6)\}$
${A_{2}}$ $\{{s_{4}}(1)\}$ $\begin{array}[t]{l}\{{s_{2.10}}(0.13),{s_{2.32}}(0.31),\\ {} {s_{4}}(0.56)\}\end{array}$ $\begin{array}[t]{l}\{{s_{2.19}}(0.09),{s_{2.41}}(0.09),\\ {} {s_{2.66}}(0.12),{s_{4}}(0.7)\}\end{array}$ $\begin{array}[t]{l}\{{s_{1.02}}(0.01),{s_{1.29}}(0.11),\\ {} {s_{1.50}}(0.00),{s_{1.72}}(0.16),\\ {} {s_{1.78}}(0.03),{s_{1.98}}(0.25),\\ {} {s_{2.14}}(0.04),{s_{2.31}}(0.38)\}\end{array}$
${A_{3}}$ $\begin{array}[t]{l}\{{s_{0.46}}(0.13),{s_{0.71}}(0.05),\\ {} {s_{0.77}}(0.08),{s_{1}}(0.04),\\ {} {s_{1.02}}(0.29),{s_{1.23}}(0.13),\\ {} {s_{1.29}}(0.20),{s_{1.48}}(0.08)\}\end{array}$ $\begin{array}[t]{l}\{{s_{1}}(0.27),{s_{1.29}}(0.13),\\ {} {s_{1.47}}(0.20),{s_{1.72}}(0.10),\\ {} {s_{2.10}}(0.20),{s_{2.30}}(0.10)\}\end{array}$ $\begin{array}[t]{l}\{{s_{1.03}}(0.06),{s_{1.32}}(0.14),\\ {} {s_{1.78}}(0.24),{s_{2.00}}(0.56)\}\end{array}$ $\begin{array}[t]{l}\{{s_{-1}}(0.56),{s_{-0.25}}(0.24),\\ {} {s_{-0.02}}(0.14),{s_{0.59}}(0.06)\}\end{array}$
${A_{4}}$ $\{{s_{-2.1}}(0.5),{s_{-1.56}}(0.5)\}$ $\begin{array}[t]{l}\{{s_{1}}(0.17),{s_{1.30}}(0.13),\\ {} {s_{1.37}}(0.17),{s_{1.47}}(0.11),\\ {} {s_{1.63}}(0.13),{s_{1.72}}(0.09),\\ {} {s_{1.78}}(0.11),{s_{2.00}}(0.09)\}\end{array}$ $\begin{array}[t]{l}\{{s_{-0.55}}(0.40),{s_{-0.23}}(0.10),\\ {} {s_{0.61}}(0.40),{s_{0.85}}(0.10)\}\end{array}$ $\begin{array}[t]{l}\{{s_{2.35}}(0.1),{s_{2.51}}(0.2),\\ {} {s_{2.75}}(0.7)\}\end{array}$
Take $\theta =0.5$; the total entropy matrix $\boldsymbol{E}={({E_{T}}({\bar{L}_{ij}}(p)))_{4\times 4}}$ is obtained by Eqs. (18), (19) and (21) as
\[ \boldsymbol{E}=\left(\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.5878\hspace{1em}& 0.4990\hspace{1em}& 0.5225\hspace{1em}& 0.5084\\ {} 0.4170\hspace{1em}& 0.5538\hspace{1em}& 0.5246\hspace{1em}& 0.5386\\ {} 0.5311\hspace{1em}& 0.5343\hspace{1em}& 0.5369\hspace{1em}& 0.5561\\ {} 0.4780\hspace{1em}& 0.5154\hspace{1em}& 0.5717\hspace{1em}& 0.4783\end{array}\right).\]
Employing Eqs. (24) and (25), the symmetric cross entropy matrix $\boldsymbol{D}={({D_{ij}})_{4\times 4}}$, where ${D_{ij}}={\textstyle\sum _{r=1,r\ne i}^{4}}D({\bar{L}_{ij}}(p),{\bar{L}_{rj}}(p))$ ($i,j=1,2,3,4$), is derived as
\[ \boldsymbol{D}=\left(\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}0.3386\hspace{1em}& 1.1016\hspace{1em}& 0.2956\hspace{1em}& 0.5730\\ {} 1.1016\hspace{1em}& 0.1767\hspace{1em}& 0.4484\hspace{1em}& 0.2310\\ {} 0.2956\hspace{1em}& 0.4484\hspace{1em}& 0.1779\hspace{1em}& 0.2662\\ {} 0.5730\hspace{1em}& 0.2310\hspace{1em}& 0.2662\hspace{1em}& 0.4789\end{array}\right).\]
Step 6. Determine the weight vector of attributes.
The weight vector of attributes is determined by Eq. (38) as
\[ \boldsymbol{w}={(0.2932,0.2632,0.2070,0.2367)^{\mathrm{T}}}.\]
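The attribute weights can likewise be reproduced from the matrices $\boldsymbol{E}$ and $\boldsymbol{D}$ above; a minimal sketch of Eq. (38):

```python
import numpy as np

E = np.array([[0.5878, 0.4990, 0.5225, 0.5084],   # total entropy matrix E
              [0.4170, 0.5538, 0.5246, 0.5386],
              [0.5311, 0.5343, 0.5369, 0.5561],
              [0.4780, 0.5154, 0.5717, 0.4783]])
D = np.array([[0.3386, 1.1016, 0.2956, 0.5730],   # matrix D of summed cross entropies
              [1.1016, 0.1767, 0.4484, 0.2310],
              [0.2956, 0.4484, 0.1779, 0.2662],
              [0.5730, 0.2310, 0.2662, 0.4789]])

# Eq. (38): for each attribute (column) j, accumulate D_ij + (1 - E_T) over alternatives i.
scores = (D + (1.0 - E)).sum(axis=0)
w = scores / scores.sum()
print(w.round(4))  # [0.2932 0.2632 0.207  0.2367]
```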
Step 7. Compute the preference function values.
Using Eqs. (40) and (41), preference function values are computed. In view of limited space, the concrete preference function values are not listed.
Step 8. Obtain the integrated preference index.
By Eq. (42), the total preference index matrix is derived as
\[ \boldsymbol{\pi }=\left(\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}-\hspace{1em}& 0.2232\hspace{1em}& 0.5000\hspace{1em}& 0.2239\\ {} 0.4323\hspace{1em}& -\hspace{1em}& 0.4422\hspace{1em}& 0.4234\\ {} 0.1541\hspace{1em}& 0.1533\hspace{1em}& -\hspace{1em}& 0.1472\\ {} 0.1441\hspace{1em}& 0.2218\hspace{1em}& 0.2001\hspace{1em}& -\end{array}\right).\]
Step 9. Calculate positive and negative flows of alternatives.
In virtue of Eqs. (43) and (44), positive and negative flows of alternatives are respectively calculated as
\[\begin{aligned}{}& {\phi ^{+}}({A_{1}})=0.3157,\hspace{1em}{\phi ^{+}}({A_{2}})=0.4326,\hspace{1em}{\phi ^{+}}({A_{3}})=0.1515,\hspace{1em}{\phi ^{+}}({A_{4}})=0.1887.\\ {} & {\phi ^{-}}({A_{1}})=0.2435,\hspace{1em}{\phi ^{-}}({A_{2}})=0.1994,\hspace{1em}{\phi ^{-}}({A_{3}})=0.3808,\hspace{1em}{\phi ^{-}}({A_{4}})=0.2648.\end{aligned}\]
Step 10. Rank alternatives.
Net flows of alternatives are obtained by Eq. (45) as
\[ \phi ({A_{1}})=0.0722,\hspace{1em}\phi ({A_{2}})=0.2332,\hspace{1em}\phi ({A_{3}})=-0.2292,\hspace{1em}\phi ({A_{4}})=-0.0762.\]
As $\phi ({A_{2}})>\phi ({A_{1}})>\phi ({A_{4}})>\phi ({A_{3}})$, alternatives are ranked as ${A_{2}}\succ {A_{1}}\succ {A_{4}}\succ {A_{3}}$. Therefore, the alternative ${A_{2}}$ (Gofun platform) is the best one.
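Steps 9 and 10 follow mechanically from the preference index matrix π of Step 8; a minimal sketch of Eqs. (43)–(45):

```python
import numpy as np

# Preference index matrix pi from Step 8 (diagonal entries unused).
pi = np.array([[0.0,    0.2232, 0.5000, 0.2239],
               [0.4323, 0.0,    0.4422, 0.4234],
               [0.1541, 0.1533, 0.0,    0.1472],
               [0.1441, 0.2218, 0.2001, 0.0]])

m = pi.shape[0]
phi_plus = pi.sum(axis=1) / (m - 1)    # Eq. (43): positive outranking flows
phi_minus = pi.sum(axis=0) / (m - 1)   # Eq. (44): negative outranking flows
phi = phi_plus - phi_minus             # Eq. (45): net flows
print(phi.round(4))  # [ 0.0722  0.2332 -0.2292 -0.0762] -> A2 > A1 > A4 > A3
```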

6.2 Comparative Analyses

To show the advantages of the proposed method, comparative analyses with Mao’s method (Mao et al., 2019), PL-PROMETHEE method (Xu et al., 2019) and other existing decision methods are performed in the sequel.

6.2.1 Comparison with Mao’s method

Mao et al. (2019) presented a new method integrating ELECTRE and TOPSIS to solve MAGDM problems with PLTSs. Now we use Mao’s method (Mao et al., 2019) to solve the above problem. Set $\gamma =1$ and $\lambda =1$. DMs’ subjective weight vector is assigned as ${\boldsymbol{\lambda }_{1}}={(0.04,0.88,0.08)^{\mathrm{T}}}$. Thus, the corresponding ordering results are derived as ${A_{1}}\succ {A_{2}}\succ {A_{4}}\succ {A_{3}}$. Clearly, these results are not consistent with those obtained by the proposed method, which can be explained as follows:
(1) The proposed method determines DMs’ weights objectively from the total entropy and cross entropy of the decision information. However, the method of Mao et al. (2019) assigned DMs’ weights subjectively, so randomness cannot be avoided. Although an adjustment coefficient obtained from the consistency degree between DMs is used to adjust the subjective weights of DMs, the adjusted DMs’ weights may be irrational because the consistency degree ${\rho _{ir}}$ between DMs ${d_{i}}$ and ${d_{r}}$ (i.e. Eq. (40) in Mao et al., 2019) does not satisfy symmetry. For example, ${\rho _{12}}=0.5125$ while ${\rho _{21}}=0.4799$, so ${\rho _{21}}\ne {\rho _{12}}$. In fact, the consistency degree should theoretically be symmetric. Hence, the DMs’ weights derived by the proposed method are more reasonable.
(2) The aggregated value obtained by the proposed PLWAM operator is more convincing than that obtained by the $GPLHW{A_{w}}$ operator in Mao et al. (2019). The former is always a PLTS by Theorem 1; conversely, the latter does not always satisfy this property. For example, when solving the example in Section 6.1, ${L_{11}}(p)={\text{GPLHWA}_{w}}({\bar{L}_{11}^{1}}(p),{\bar{L}_{11}^{2}}(p),{\bar{L}_{11}^{3}}(p))=\{{g^{-1}}(0.6937)(0.442),{g^{-1}}(0.7449)(0.3),{g^{-1}}(1)(0.7691),{g^{-1}}(0.7521)(0.3750),{g^{-1}}(0.2191)\}$. Clearly, the probabilities in this set sum to more than 1. Hence, the latter is sometimes not a PLTS in a strict sense. Thus, the result derived by the proposed method is more convincing.

6.2.2 Compared with the PL-PROMETHEE Method

The PL-PROMETHEE method (Xu et al., 2019) is applied to solve the problem in Section 6.1. Suppose the attribute weight vector is $\boldsymbol{w}={(0.3,0.2,0.3,0.2)^{\mathrm{T}}}$. The Borda scores of the alternatives for each DM are shown in Table 8.
Table 8
The Borda scores of the alternatives for each DM.
${d_{1}}$ ${d_{2}}$ ${d_{3}}$ Borda’s scores
${A_{1}}$ 1 4 3 8
${A_{2}}$ 4 2 1 7
${A_{3}}$ 3 3 2 8
${A_{4}}$ 2 1 4 7
It follows from Table 8 that the ranking result is ${A_{1}}\sim {A_{3}}\succ {A_{2}}\sim {A_{4}}$, which is different from the one derived by the proposed method, i.e. ${A_{2}}\succ {A_{1}}\succ {A_{4}}\succ {A_{3}}$. The primary reasons may lie in the following two aspects.
(1) PL-PROMETHEE method ignored the determination of attribute weights and DMs’ weights, but assigned them subjectively. Thus, the arbitrariness cannot be avoided. By contrast, the proposed method derives attribute weights and DMs’ weights objectively based on the total entropy and symmetric cross entropy of decision information. Therefore, the subjectivity is effectively reduced and the decision results are more credible.
(2) In the preference function of the PL-PROMETHEE method (Xu et al., 2019), the preference value is taken as 1 if the probability of one alternative being preferred to the other with respect to an attribute is bigger than 0.5; otherwise, the preference value is taken as 0. Therefore, the distinguishing power of this preference function is not strong enough, which may be the reason why alternatives ${A_{1}}$ and ${A_{3}}$ cannot be discriminated. However, in the proposed method, the preference function is neatly defined based on the deviations of attribute values and described by PLTSs, please see Eqs. (40) and (41). Hence, the proposed method has a stronger distinguishing power.

6.2.3 Compared with Other Existing Decision Making Methods

To further demonstrate the superiority of the proposed method, this subsection conducts a theoretical analysis and a practical analysis with other existing methods (Pang et al., 2016; Gou et al., 2017; Liu and Li, 2018; Mao et al., 2019; Liu and Li, 2019; Peng et al., 2020).
(1) Theoretical analysis
(i) The proposed method introduces two new probabilistic linguistic weighted aggregation operators (i.e. the PLWAM and PLWGM operators). Compared with the existing operators mentioned in Pang et al. (2016) and Gou et al. (2017), the proposed operators have some advantages, such as the closure of operations and operation values consistent with intuition (please see Remark 2). Therefore, the aggregated information obtained by the proposed operators is more reliable, and the decision results based on such aggregated information are more reasonable.
(ii) In the proposed method, the two cases where the attribute weights are unknown or partially known are both taken into account. To determine the attribute weights, the proposed method builds two different bi-objective programming models by maximizing the cross entropy and minimizing the total entropy of the collective evaluation values. As analysed in Section 5.3, these models consider both the quality of the collective evaluations and the deviations between evaluations. However, in the extended MULTIMOORA method (Liu and Li, 2019), the attribute weights were given in advance, which may fail to avoid subjective arbitrariness in the decision process. Although the TOPSIS method (Pang et al., 2016) derived attribute weights objectively by the maximum deviation approach, it ignored the quality of the decision information. Thus, the credibility of the attribute weights derived by the proposed method is higher.
(iii) Although both the method of Peng et al. (2020) and the proposed method can solve decision making problems with PLTSs, the former cannot handle group decision making, whereas the latter can manage both single-DM and group decision making problems. Thereby, the latter has a wider range of applications.
(2) Practical analysis
(1) The distinguishing power of the proposed method is stronger than that of the method of Xu et al. (2019), as verified in Section 6.2.2. This is largely because the preference functions of the improved PL-PROMETHEE approach proposed in this paper take the form of PLTSs and can distinguish alternatives finely, whereas the preference functions defined in the method of Xu et al. (2019) and the other PL-PROMETHEE approach (Liu and Li, 2018) are all crisp numbers.
(2) The stability of the proposed method is better than that of the method of Liu and Li (2019). The former determines DMs’ weights and attribute weights by building objective programming models, so the decision result is unique under the given decision information. In contrast, the latter assigned DMs’ weights or attribute weights in advance; hence, a change of such weights may result in different decision results.
(3) The method of Gou et al. (2017) is suitable for the environment of hesitant fuzzy linguistic variables. Nevertheless, it cannot handle MAGDM problems with PLTSs, which can be solved by the proposed method.

7 Conclusions

In today’s internet age, car sharing is more and more popular, and the selection of a car sharing platform is important for tourists; it can be regarded as a MAGDM problem in which the PLTS is a powerful tool for representing the evaluation information of DMs. This paper first introduces the PLWAM and PLWGM operators and studies some of their desirable properties. Subsequently, a hesitancy index of a PLTS and a general distance measure between PLTSs are defined, and a new approach is proposed to rank PLTSs. To measure the fuzziness and hesitancy of a PLTS, a fuzzy entropy and a hesitancy entropy of PLTSs are presented. Afterwards, a total entropy of PLTSs is defined to measure the uncertainty of a PLTS, and a cross entropy between PLTSs is presented. Based on the total entropy and the cross entropy of PLTSs, DMs’ weights and attribute weights are determined objectively. Finally, an improved PL-PROMETHEE method is developed by defining new preference functions and a total preference index. A car sharing platform selection is examined at length to illustrate the application and advantages of the proposed method.
Apart from the selection of car sharing platforms, the proposed method can be applied to many other decision making fields, such as financial management (Kou, 2019a, 2019b) and supplier selection. This paper ignores the risk attitudes of DMs, which may play an important role in some financial decision problems. Future studies will investigate MAGDM problems with PLTSs considering DMs’ risk attitudes.

A Appendix

Theorem 1 can be proved by mathematical induction on n as follows:
For $n=1$, Theorem 1 obviously holds based on (ii) in Definition 9.
Suppose Theorem 1 holds for $n=q$, which means that:
(A.1)
\[\begin{aligned}{}& {\text{PLWA}_{X}}\big({L_{1}}(p),{L_{2}}(p),\dots ,{L_{q}}(p)\big)\\ {} & \hspace{1em}={g^{-1}}\Bigg(\bigcup \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,q}}\Bigg\{\Bigg(1-{\prod \limits_{j=1}^{q}}{\big(1-g\big({L_{j}^{({k_{j}})}}\big)\big)^{{\omega _{j}}}}\Bigg)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\dots {p_{q}^{({k_{q}})}}\big)\Bigg\}\Bigg).\end{aligned}\]
Furthermore, one gets
(A.2)
\[ \sum \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,q}}{p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\dots {p_{q}^{({k_{q}})}}=1.\]
When $n=q+1$, one has
(A.3)
\[\begin{aligned}{}& {\text{PLWA}_{X}}\big({L_{1}}(p),{L_{2}}(p),\dots ,{L_{q}}(p),{L_{q+1}}(p)\big)\\ {} & \hspace{1em}=\big({\omega _{1}}{L_{1}}(p)\oplus {\omega _{2}}{L_{2}}(p)\oplus \dots \oplus {\omega _{q}}{L_{q}}(p)\big)\oplus {\omega _{q+1}}{L_{q+1}}(p).\end{aligned}\]
According to Eq. (A.1) and the operational laws (i.e. (i) and (ii)) in Definition 9, it generates
\[\begin{aligned}{}& {\text{PLWA}_{X}}\big({L_{1}}(p),{L_{2}}(p),\dots ,{L_{q}}(p),{L_{q+1}}(p)\big)\\ {} & \hspace{1em}={g^{-1}}\Bigg(\bigcup \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,q}}\Bigg\{\Bigg(1-{\prod \limits_{j=1}^{q}}{\big(1-g\big({L_{j}^{({k_{j}})}}\big)\big)^{{\omega _{j}}}}\Bigg)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\dots {p_{q}^{({k_{q}})}}\big)\Bigg\}\Bigg)\\ {} & \hspace{2em}\oplus {g^{-1}}\Bigg(\bigcup \limits_{{k_{q+1}}=1,2,\dots ,\mathrm{\# }{L_{q+1}}(p)}\Big\{\big(1-{\big(1-g\big({L_{q+1}^{({k_{q+1}})}}\big)\big)^{{\omega _{q+1}}}}\big)\big({p_{q+1}^{({k_{q+1}})}}\big)\Big\}\Bigg)\\ {} & \hspace{1em}={g^{-1}}\Bigg(\bigcup \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{\genfrac{}{}{0pt}{}{j=1,2,\dots ,q}{{k_{q+1}}=1,2,\dots ,\mathrm{\# }{L_{q+1}}(p)}}}\Bigg\{\Bigg(\Bigg(1-{\prod \limits_{j=1}^{q}}{\big(1-g\big({L_{j}^{({k_{j}})}}\big)\big)^{{\omega _{j}}}}\Bigg)+\big(1-{\big(1-g\big({L_{q+1}^{({k_{q+1}})}}\big)\big)^{{\omega _{q+1}}}}\big)\\ {} & \hspace{2em}-\Bigg(1-{\prod \limits_{j=1}^{q}}{\big(1-g\big({L_{j}^{({k_{j}})}}\big)\big)^{{\omega _{j}}}}\Bigg)\big(1-{\big(1-g\big({L_{q+1}^{({k_{q+1}})}}\big)\big)^{{\omega _{q+1}}}}\big)\Bigg)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\dots {p_{q}^{({k_{q}})}}{p_{q+1}^{({k_{q+1}})}}\big)\Bigg\}\Bigg)\\ {} & \hspace{1em}={g^{-1}}\Bigg(\bigcup \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,q+1}}\Bigg\{\Bigg(1-{\prod \limits_{j=1}^{q+1}}{\big(1-g\big({L_{j}^{({k_{j}})}}\big)\big)^{{\omega _{j}}}}\Bigg)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\dots {p_{q+1}^{({k_{q+1}})}}\big)\Bigg\}\Bigg).\end{aligned}\]
Thus, Eq. (4) holds for $n=q+1$. Therefore, Eq. (4) holds for all n.
On the other hand, consider
\[\begin{aligned}{}& \sum \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,q,q+1}}{p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\dots {p_{q}^{({k_{q}})}}{p_{q+1}^{({k_{q+1}})}}\\ {} & \hspace{1em}=\sum \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,q}}{p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\dots {p_{q}^{({k_{q}})}}\big({p_{q+1}^{(1)}}+{p_{q+1}^{(2)}}+\cdots +{p_{q+1}^{(\mathrm{\# }{L_{q+1}}(p))}}\big).\end{aligned}\]
As ${p_{q+1}^{(1)}}+{p_{q+1}^{(2)}}+\cdots +{p_{q+1}^{(\mathrm{\# }{L_{q+1}}(p))}}=1$, according to Eq. (A.2), one has
\[ \sum \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,q,q+1}}{p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\dots {p_{q}^{({k_{q}})}}{p_{q+1}^{({k_{q+1}})}}=\sum \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,q}}{p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\dots {p_{q}^{({k_{q}})}}=1.\]
Thus, the probabilities of the aggregated PLTS sum to 1 for $n=q+1$ as well. Hence, Theorem 1 holds for all n.

B Appendix

In order to prove that ${E_{F}}(\bar{L}(p))$ is a fuzzy entropy, it is necessary only to prove that ${E_{F}}(\bar{L}(p))$ satisfies the properties in Definition 15.
By the conditions (ii) and (iii) in Theorem 3, the properties (i), (ii) and (iii) obviously hold.
Next, we only prove the property (iv).
As ${(\bar{L}(p))^{(c)}}={g^{-1}}({\textstyle\bigcup _{k=1,2,\dots ,\mathrm{\# }L(p)}}\{(1-g({\bar{L}^{(k)}}))({p^{(k)}})\})$, it can be easily deduced that $e({(\bar{L}(p))^{(c)}})=1-e(\bar{L}(p))$. Therefore, one obtains
(B.1)
\[ {E_{F}}\big({\big(\bar{L}(p)\big)^{(c)}}\big)=f\big(e\big({\big(\bar{L}(p)\big)^{(c)}}\big)\big)=f\big(1-e\big(\bar{L}(p)\big)\big).\]
Since $0\leqslant e(\bar{L}(p))\leqslant 1$ and $f(1-t)=f(t)$, one has $f(1-e(\bar{L}(p)))=f(e(\bar{L}(p)))$. Hence, in virtue of Eq. (B.1), one gets ${E_{F}}({(\bar{L}(p))^{(c)}})={E_{F}}(\bar{L}(p))$.
The proof is completed.

C Appendix

In order to prove that $h(\bar{L}(p))$ is a hesitancy entropy of $\bar{L}(p)$, it is necessary to prove that $h(\bar{L}(p))$ satisfies three properties in Definition 16.
(i) The sufficiency obviously holds.
Next, we prove the necessity.
When $h(\bar{L}(p))\hspace{2.5pt}=0$, suppose $\mathrm{\# }\bar{L}(p)>1$. In virtue of Eq. (9), we have
(C.1)
\[ \big|g\big({L^{(k)}}\big)-\bar{g}\big|=0\hspace{1em}\text{for any}\hspace{2.5pt}k=1,2,\dots ,\mathrm{\# }\bar{L}(p).\]
Then one has $g({L^{(k)}})=\bar{g}$ for every $k=1,2,\dots ,\mathrm{\# }\bar{L}(p)$, so all the possible linguistic terms coincide, which contradicts the assumption $\mathrm{\# }\bar{L}(p)>1$. Thus, $\mathrm{\# }\bar{L}(p)=1$, and it follows that $\bar{L}(p)=\{{s_{\alpha }}(1)\}$.
(ii) When $\bar{L}(p)=\{{s_{-\tau }}(0.5),{s_{\tau }}(0.5)\}$, it is clear from Eq. (9) that $h(\bar{L}(p))=1$. Conversely, when $h(\bar{L}(p))=1$, we derive that $2{\textstyle\sum _{k=1}^{\mathrm{\# }\bar{L}(p)}}|g({L^{(k)}})-\bar{g}|{p^{(k)}}=1$. As $0\leqslant {p^{(k)}}\leqslant 1$ and $|g({L^{(k)}})-\bar{g}|\leqslant 1$, this equality can only be attained with ${p^{(k)}}={p^{(\delta )}}=0.5$, ${L^{(k)}}={s_{-\tau }}$ and ${L^{(\delta )}}={s_{\tau }}$ for two indices k and δ. Therefore, $\bar{L}(p)=\{{s_{-\tau }}(0.5),{s_{\tau }}(0.5)\}$.
(iii) Suppose $\bar{L}(p)=\{{L^{(1)}}(p),{L^{(2)}}(1-p)\}$; then $\bar{g}=(g({L^{(1)}})+g({L^{(2)}}))/2$. Without loss of generality, let $g({L^{(1)}})>g({L^{(2)}})$; then $2{\textstyle\sum _{k=1}^{\mathrm{\# }\bar{L}(p)}}|g({L^{(k)}})-\bar{g}|{p^{(k)}}=|g({L^{(1)}})-g({L^{(2)}})|$. When ${L^{(1)}}\to {L^{(2)}}$, one gets $|g({L^{(1)}})-g({L^{(2)}})|\to 0$. Therefore, ${E_{H}}(\bar{L}(p))\to 0$.
(iv) This is obviously true.
(v) Since ${(L(p))^{(c)}}={g^{-1}}({\textstyle\bigcup _{k=1,2,\dots ,\mathrm{\# }L(p)}}\{(1-g({L^{(k)}}))({p^{(k)}})\})$, it is deduced that
\[\begin{aligned}{}{E_{H}}\big({\big(\bar{L}(p)\big)^{(c)}}\big)& =2{\sum \limits_{k=1}^{\mathrm{\# }\bar{L}(p)}}\big|\big(1-g\big({L^{(k)}}\big)\big)-(1-\bar{g})\big|{p^{(k)}}=2{\sum \limits_{k=1}^{\mathrm{\# }\bar{L}(p)}}\big|g\big({L^{(k)}}\big)-\bar{g}\big|{p^{(k)}}\\ {} & ={E_{H}}\big(\bar{L}(p)\big).\end{aligned}\]
The proof is completed.

D Appendix

In virtue of Eq. (24), writing ${A_{t}^{(k)}}=g({\bar{L}_{t}^{(k)}}){\bar{p}_{t}^{(k)}}$ ($t=1,2$; $k=1,2,\dots ,\mathrm{\# }{\bar{L}_{1}}(p)$), one has
\[\begin{aligned}{}& -CE\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)\\ {} & \hspace{1em}={\sum \limits_{k=1}^{\mathrm{\# }{\bar{L}_{1}}(p)}}\bigg(-{A_{1}^{(k)}}\ln \frac{2{A_{1}^{(k)}}}{{A_{1}^{(k)}}+{A_{2}^{(k)}}}-\big(1-{A_{1}^{(k)}}\big)\ln \frac{2(1-{A_{1}^{(k)}})}{1-{A_{1}^{(k)}}+1-{A_{2}^{(k)}}}\bigg)\\ {} & \hspace{1em}={\sum \limits_{k=1}^{\mathrm{\# }{\bar{L}_{1}}(p)}}\bigg({A_{1}^{(k)}}\ln \frac{{A_{1}^{(k)}}+{A_{2}^{(k)}}}{2{A_{1}^{(k)}}}+\big(1-{A_{1}^{(k)}}\big)\ln \frac{1-{A_{1}^{(k)}}+1-{A_{2}^{(k)}}}{2(1-{A_{1}^{(k)}})}\bigg)\end{aligned}\]
As $f(x)=\ln x$ is a concave function, by employing the Jensen inequality, we get
\[\begin{aligned}{}& -\mathit{CE}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)\\ {} & \hspace{1em}\leqslant {\sum \limits_{k=1}^{\mathrm{\# }{\bar{L}_{1}}(p)}}\ln \bigg({A_{1}^{(k)}}\frac{{A_{1}^{(k)}}+{A_{2}^{(k)}}}{2{A_{1}^{(k)}}}+\big(1-{A_{1}^{(k)}}\big)\frac{1-{A_{1}^{(k)}}+1-{A_{2}^{(k)}}}{2(1-{A_{1}^{(k)}})}\bigg)=\ln 1=0.\end{aligned}\]
That is, $-CE({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))\leqslant 0$, i.e. $CE({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))\geqslant 0$. The equality holds if and only if ${A_{1}^{(k)}}={A_{2}^{(k)}}$ $(k=1,2,\dots ,\mathrm{\# }{\bar{L}_{1}}(p))$, namely $g({\bar{L}_{1}^{(k)}}){\bar{p}_{1}^{(k)}}=g({\bar{L}_{2}^{(k)}}){\bar{p}_{2}^{(k)}}$ $(k=1,2,\dots ,\mathrm{\# }{\bar{L}_{1}}(p))$. Hence, one has $e({\bar{L}_{1}}(p))=e({\bar{L}_{2}}(p))$.
The proof is completed.

Acknowledgements

The authors would like to thank the editor in chief and anonymous reviewers for their insightful and constructive comments and suggestions that have led to an improved version of this paper.

References

 
Bai, C.Z., Zhang, R., Qian, L.X., Wu, Y.N. (2017). Comparisons of probabilistic linguistic term sets for multi-criteria decision making. Knowledge-Based Systems, 119, 284–291.
 
Dong, Y.C., Zha, Q.B., Zhang, H.J., Kou, G., Fujita, H., Chiclana, F., Herrera-Viedma, E. (2018). Consensus reaching in social network group decision making: research paradigms and challenges. Knowledge-Based Systems, 162, 3–13.
 
Feng, X.Q., Zhang, Q., Jin, L.S. (2020). Aggregation of pragmatic operators to support probabilistic linguistic multi-criteria group decision-making problems. Soft Computing, 24(10), 7735–7755.
 
Gou, X.J., Xu, Z.S. (2016). Novel basic operational laws for linguistic terms, hesitant fuzzy linguistic term sets and probabilistic linguistic term sets. Information Sciences, 372, 407–427.
 
Gou, X.J., Xu, Z.S., Liao, H.C. (2017). Hesitant fuzzy linguistic entropy and cross-entropy measures and alternative queuing method for multiple criteria decision making. Information Sciences, 388–389, 225–246.
 
Halouani, N., Chabchoub, H., Martel, J.M. (2009). PROMETHEE-MD-2T method for project selection. European Journal of Operational Research, 195(3), 841–849.
 
Jiang, L.S., Liao, H.C. (2020). Mixed fuzzy least absolute regression analysis with qualitative and probabilistic linguistic information. Fuzzy Sets and Systems, 387, 35–48.
 
Klement, E.P., Mesiar, R., Pap, E. (2000). Triangular Norms. Kluwer Academic Publishers, Dordrecht.
 
Krishankumar, R., Ravichandran, K.S., Saeid, A.B. (2017). A new extension to PROMETHEE under intuitionistic fuzzy environment for solving supplier selection problem with linguistic preferences. Applied Soft Computing, 60, 564–576.
 
Kou, G. (2019a). Introduction to the special issue on FinTech. Financial Innovation, 5(45). https://doi.org/10.1186/s40854-019-0161-1.
 
Kou, G. (2019b). Editor’s introduction. Financial Innovation, 5(46). https://doi.org/10.1186/s40854-019-0160-2.
 
Kou, G., Yang, P., Peng, Y., Xiao, F., Chen, Y., Alsaadi, F.E. (2020). Evaluation of feature selection methods for text classification with small datasets using multiple criteria decision-making methods. Applied Soft Computing, 86. https://doi.org/10.1016/j.asoc.2019.105836.
 
Li, P., Wei, C.P. (2019). An emergency decision-making method based on D-S evidence theory for probabilistic linguistic term sets. International Journal of Disaster Risk Reduction, 37, 101178. https://doi.org/10.1016/j.ijdrr.2019.101178.
 
Li, Y., Zhang, Y.X., Xu, Z.S. (2020). A decision-making model under probabilistic linguistic circumstances with unknown criteria weights for online customer reviews. International Journal of Fuzzy Systems, 22(3), 777–789.
 
Liang, D.C., Kobina, A., Quan, W. (2018). Grey relational analysis method for probabilistic linguistic multi-criteria group decision-making based on geometric Bonferroni Mean. International Journal of Fuzzy Systems, 20(7), 2234–2244.
 
Liao, H.C., Jiang, L.S., Lev, B., Fujita, H. (2019). Novel operations of PLTSs based on the disparity degrees of linguistic terms and their use in designing the probabilistic linguistic ELECTRE III method. Applied Soft Computing, 80, 450–464.
 
Liao, H.C., Mi, X.M., Xu, Z.S. (2020). A survey of decision-making methods with probabilistic linguistic information: bibliometrics, preliminaries, methodologies, applications and future directions. Fuzzy Optimization and Decision Making, 19(1), 81–134.
 
Lin, M.W., Chen, Z.Y., Liao, H.C., Xu, Z.S. (2019). ELECTRE II method to deal with probabilistic linguistic term sets and its application to edge computing. Nonlinear Dynamics, 96(3), 2125–2143.
 
Liu, P.D., Li, Y. (2018). The PROMETHEE II method based on probabilistic linguistic information and their application to decision making. Informatica, 29(2), 303–320.
 
Liu, P.D., Li, Y. (2019). An extended MULTIMOORA method for probabilistic linguistic multi-criteria group decision making based on prospect theory. Computers & Industrial Engineering, 136, 528–545.
 
Liu, P.D., Teng, F. (2019). Probabilistic linguistic TODIM method for selecting products through online product reviews. Information Sciences, 485, 441–455.
 
Liu, H.B., Jiang, L., Xu, Z.S. (2018). Entropy measures of probabilistic linguistic term sets. International Journal of Computational Intelligence Systems, 11(1), 45–57.
 
Liu, P.D., Li, Y., Teng, F. (2019). Bidirectional projection method for probabilistic linguistic multi-criteria group decision-making based on power average operator. International Journal of Fuzzy Systems, 21(8), 2340–2353.
 
Mao, X.B., Wu, M., Dong, J.Y., Wan, S.P., Jin, Z. (2019). A new method for probabilistic linguistic multi-attribute group decision making: application to the selection of financial technologies. Applied Soft Computing, 77, 155–175.
 
Mi, X.M., Liao, H.C., Wu, X.L., Xu, Z.S. (2020). Probabilistic linguistic information fusion: a survey on aggregation operators in terms of principles, definitions, classifications, applications, and challenges. International Journal of Intelligent Systems, 35(3), 529–556.
 
Pang, Q., Xu, Z.S., Wang, H. (2016). Probabilistic linguistic term sets in multi-attribute group decision making. Information Sciences, 369, 128–143.
 
Peng, H.G., Wang, J.Q., Zhang, H.Y. (2020). Multi-criteria outranking method based on probability distribution with probabilistic linguistic information. Computers & Industrial Engineering, 141. https://doi.org/10.1016/j.cie.2020.106318.
 
Rodríguez, R.M., Martínez, L., Herrera, F. (2012). Hesitant fuzzy linguistic term sets for decision making. IEEE Transaction on Fuzzy Systerms, 20(1), 109–119.
 
Wan, S.P., Xu, G.L., Wang, F., Dong, J.Y. (2015). A new method for Atanassov’s interval-valued intuitionistic fuzzy MAGDM with incomplete attribute weight information. Information Sciences, 316, 329–347.
 
Wu, X.L., Liao, H.C. (2018). An approach to quality function deployment based on probabilistic linguistic term sets and ORESTE method for multi-expert multi-criteria decision making. Information Fusion, 43, 13–26.
 
Wu, X.L., Liao, H.C. (2019). A consensus-based probabilistic linguistic gained and lost dominance score method. European Journal of Operational Research, 272(3), 1017–1027.
 
Wu, X.L., Liao, H.C., Xu, Z.S., Hafezalkotob, A., Herrera, F. (2018). Probabilistic linguistic MULTIMOORA: a multi-criteria decision making method based on the probabilistic linguistic expectation function and the improved Borda rule. IEEE transactions on Fuzzy Systems, 26, 3688–3702.
 
Wu, X.L., Zhang, C., Jiang, L.S., Chang, H.L. (2019). An integrated method with PROMETHEE and conflict analysis for qualitative and quantitative decision-making: case study of site selection for wind power plants. Cognitive Computation, 272(1), 1017–1027.
 
Xu, Z.S. (2004). A method based on linguistic aggregation operators for group decision making with linguistic preference relations. Information Sciences, 166(1), 19–30.
 
Xu, Z.S. (2005). Deviation measures of linguistic preference relations in group decision making. Omega, 33(3), 249–254.
 
Xu, G.L., Wan, S.P., Wang, F., Dong, J.Y., Zeng, Y.F. (2016). Mathematical programming methods for consistency and consensus in group decision making with intuitionistic fuzzy preference relations. Knowledge-Based Systems, 98, 30–43.
 
Xu, Z.S., Luo, S.Q., Liao, H.C. (2019). A probabilistic linguistic PROMETHEE method and its application in medical service. Journal of System and Engineering, 34, 760–769.
 
Zadeh, L.A. (1975). The concept of a linguistic variable and its application to approximate reasoning-II. Information Sciences, 8, 301–353.
 
Zhang, X.L. (2018). A novel probabilistic linguistic approach for large-scale group decision making with incomplete weight information. International Journal of Fuzzy Systems, 20(7), 2245–2256.
 
Zhang, Y.X., Xu, Z.S., Wang, H., Liao, H. (2016). Consistency-based risk assessment with probabilistic linguistic preference relation. Applied Soft Computing, 49, 817–833.
 
Zhang, Y.X., Xu, Z.S., Liao, H.C. (2017). A consensus process for group decision making with probabilistic linguistic preference relations. Information Sciences, 414, 260–275.
 
Zhang, H.H., Kou, G., Peng, Y. (2019). Soft consensus cost models for group decision making and economic interpretations. European Journal of Operational Research, 277(3), 964–980.
 
Zhang, X.F., Gou, X.J., Xu, Z.S., Liao, H.C. (2019). A projection method for multiple attribute group decision making with probabilistic linguistic term sets. International Journal of Machine Learning and Cybernetics, 10(9), 2515–2528.

Biographies

Xu Gai-li

G. Xu is an associate professor at College of Science, Guilin University of Technology. She received her master’s degree from school of Mathematics and Information Science, Guangxi University, China, 2007, and earned her PhD degree from College of Information Technology, Jiangxi University of Finance and Economics, China, 2017. Up until now, she has contributed 10 journal articles to professional journals. Her research interests include fuzzy mathematics and decision making.

Wan Shu-Ping

S. Wan is a professor at College of Information Technology, Jiangxi University of Finance and Economics. He received his PhD degree from College of Information Technology, Nankai University, China, 2005. Up until now, he has contributed over 80 journal articles to professional journals. His research interests include decision making, information fusion and supply chain management.

Dong Jiu-Ying
jiuyingdong@126.com

J. Dong is a professor at School of Statistics, Jiangxi University of Finance and Economics. She received her PhD degree from School of Mathematics Sciences, Nankai University, China, 2013. Up until now, she has contributed over 30 journal articles to professional journals. Her research interests include decision making and graph theory.


Reading mode PDF XML

Table of contents
  • 1 Introduction
  • 2 Preliminaries
  • 3 A Hesitancy Index of a PLTS and an Approach to Ranking PLTSs
  • 4 Entropy and Cross Entropy of PLTSs
  • 5 A Novel Method for MAGDM with PLTSs
  • 6 A Case Study
  • 7 Conclusions
  • A Appendix
  • B Appendix
  • C Appendix
  • D Appendix
  • Acknowledgements
  • References
  • Biographies

Copyright
© 2020 Vilnius University
by logo by logo
Open access article under the CC BY license.

Keywords
Multi-attribute decision making Probabilistic linguistic term set Entropy Cross entropy

Funding
This research was supported by the National Natural Science Foundation of China (Nos. 71740021 and 11861034), “Thirteen five” Programming Project of Jiangxi Province Social Science (No. 18GL13), the Humanities Social Science Programming Project of Ministry of Education of China (No. 20YGC1198), the Natural Science Foundation of Jiangxi Province of China (No. 20192BAB207012), and the Science and Technology Project of Jiangxi Province Educational Department of China (No. GJJ190251), the Natural Science Foundation of Guangxi Province of China (No. 2019GXNSFAA245031) and the Doctoral Scientific Research Foundation of Guilin University of Technology (No. GUTQDJJ2007033).

Metrics
since January 2020
1810

Article info
views

729

Full article
views

1012

PDF
downloads

291

XML
downloads

Export citation

Copy and paste formatted citation
Placeholder

Download citation in file


Share


RSS

  • Tables
    8
  • Theorems
    6
Table 1
Results obtained by different aggregation operators.
Table 2
Distancesbetween PLTSs obtained by different distance measures.
Table 3
Linguistic variables corresponding to linguistic terms.
Table 4
Probabilistic linguistic decision matrices ${\boldsymbol{U}_{1}}$, ${\boldsymbol{U}_{2}}$ and ${\boldsymbol{U}_{3}}$.
Table 5
Normalized ordered decision matrices ${\bar{\boldsymbol{U}}_{1}}$, ${\bar{\boldsymbol{U}}_{2}}$ and ${\bar{\boldsymbol{U}}_{3}}$.
Table 6
The symmetric cross entropies between ${\bar{\boldsymbol{U}}_{1}}$, ${\bar{\boldsymbol{U}}_{2}}$ and ${\bar{\boldsymbol{U}}_{3}}$.
Table 7
Collective normalized ordered matrix $\bar{\boldsymbol{U}}$.
Table 8
The Borda’s scores of alternatives for each alternatives.
Theorem 1.
Theorem 2.
Theorem 3.
Theorem 4.
Theorem 5.
Theorem 6.
Table 1
Results obtained by different aggregation operators.
Operators Results PLTS Information lost Information distortion
PLWA (Pang et al., 2016) $\{{s_{0}},{s_{0.2}},{s_{0.4}},{s_{0.6}},{s_{0.75}},{s_{0.95}},{s_{1.15}},{s_{1.5}},{s_{1.7}},{s_{2.25}}\}$ No Yes Yes
GPLHWA (Mao et al., 2019) $\{{s_{1.85}}(0.14),{s_{2.00}}(0.16),{s_{3}}(1.36)\}$ No Yes Yes
PL-WAA
(Zhang, 2018) $\begin{array}{l}\{{s_{-3}}(0.064),{s_{-2}}(0.064),{s_{-1}}(0.064),{s_{0}}(0.064),\\ {} {s_{1}}(0.084),{s_{2}}(0.424),{s_{3}}(0.254)\}\end{array}$ Yes Yes Yes
PLWAM in this paper $\{{s_{1.85}}(0.04),{s_{2.00}}(0.25),{s_{3}}(0.71)\}$ Yes No No
Table 2
Distancesbetween PLTSs obtained by different distance measures.
PLTSs Different distance measures Distances between PLTSs
${\bar{L}_{1}}(p)=\{{s_{2}}(0.8),{s_{3}}(0.2)\}$
${\bar{L}_{2}}(p)=\{{s_{2}}(0.5),{s_{3}}(0.5)\}$
Zhang’s distance (Zhang et al., 2016) ${d_{Z}}({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))=0$
The Manhattan distance proposed in this paper ${d_{M}}({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))=0.075>0$
The Euclidean distance proposed in this paper ${d_{E}}({\bar{L}_{1}}(p),{\bar{L}_{2}}(p))=0.15>0$
${\bar{L}_{3}}(p)=\{{s_{0}}(0.4),{s_{1}}(0.6)\}$ ${\bar{L}_{4}}(p)=\{{s_{0}}(0.8),{s_{3}}(0.2)\}$ Pang’s distance (Pang et al., 2016) ${d_{P}}({\bar{L}_{3}}(p),{\bar{L}_{4}}(p))=0$
The Manhattan distance proposed in this paper ${d_{M}}({\bar{L}_{3}}(p),{\bar{L}_{4}}(p))=0.3083>0$
The Euclidean distance proposed in this paper ${d_{E}}({\bar{L}_{3}}(p),{\bar{L}_{4}}(p))=0.3308>0$
${\bar{L}_{5}}(p)=\{{s_{-3}}(0.8),{s_{1}}(0.2)\}$
${\bar{L}_{6}}(p)=\{{s_{-3}}(0.7333),{s_{0}}(0.2667)\}$
The Mao’s Euclidean distance (Mao et al., 2019) $d({\bar{L}_{5}}(p),{\bar{L}_{6}}(p))=0$
The Manhattan distance proposed in this paper ${d_{M}}({\bar{L}_{5}}(p),{\bar{L}_{6}}(p))=0.1208>0$
The Euclidean distance proposed in this paper ${d_{E}}({\bar{L}_{5}}(p),{\bar{L}_{6}}(p))=0.1359>0$
Note: The considered LTS in Table 2 is $S=\{{s_{-3}},{s_{-2}},{s_{-1}},{s_{0}},{s_{1}},{s_{2}},{s_{3}}\}$.
Table 3
Linguistic variables corresponding to linguistic terms.
Linguistic variables Linguistic terms Linguistic variables Linguistic terms
Very bad ${s_{-4}}$ Slightly good ${s_{1}}$
Bad ${s_{-3}}$ A little good ${s_{2}}$
A little bad ${s_{-2}}$ Good ${s_{3}}$
Slightly bad ${s_{-1}}$ Very good ${s_{4}}$
Medium ${s_{0}}$
Table 4
Probabilistic linguistic decision matrices ${\boldsymbol{U}_{1}}$, ${\boldsymbol{U}_{2}}$ and ${\boldsymbol{U}_{3}}$.
${u_{1}}$ ${u_{2}}$ ${u_{3}}$ ${u_{4}}$
${d_{1}}$ ${A_{1}}$ $\{{s_{-2}}(0.4),{s_{1}}(0.5)\}$ $\{{s_{2}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{0}}(1)\}$ $\{{s_{-2}}(0.4),{s_{-1}}(0.6)\}$
${A_{2}}$ $\{{s_{4}}(1)\}$ $\{{s_{2}}(0.4),{s_{4}}(0.5)\}$ $\{{s_{0}}(0.3),{s_{1}}(0.3),{s_{2}}(0.4)\}$ $\{{s_{2}}(0.3),{s_{3}}(0.7)\}$
${A_{3}}$ $\{{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{1}}(0.4),{s_{2}}(0.3),{s_{3}}(0.3)\}$ $\{{s_{2}}(1)\}$ $\{{s_{-1}}(0.8),{s_{1}}(0.2)\}$
${A_{4}}$ $\{{s_{-1}}(1)\}$ $\{{s_{1}}(0.6),{s_{2}}(0.4)\}$ $\{{s_{0}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{3}}(1)\}$
${d_{2}}$ ${A_{1}}$ $\{{s_{-1}}(0.3),{s_{1}}(0.7)\}$ $\{{s_{2}}(0.4),{s_{3}}(0.2),{s_{4}}(0.4)\}$ $\{{s_{0}}(0.7),{s_{1}}(0.3)\}$ $\{{s_{-1}}(1)\}$
${A_{2}}$ $\{{s_{3}}(0.4),{s_{4}}(0.5)\}$ $\{{s_{3}}(1)\}$ $\{{s_{3}}(0.5),{s_{4}}(0.5)\}$ $\{{s_{0}}(0.4),{s_{2}}(0.6)\}$
${A_{3}}$ $\{{s_{0}}(0.7),{s_{1}}(0.3)\}$ $\{{s_{1}}(0.6),{s_{2}}(0.3)\}$ $\{{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{-1}}(1)\}$
${A_{4}}$ $\{{s_{-2}}(1)\}$ $\{{s_{1}}(0.5),{s_{2}}(0.4)\}$ $\{{s_{-1}}(1)\}$ $\{{s_{1}}(0.1),{s_{2}}(0.2),{s_{3}}(0.7)\}$
${d_{3}}$ ${A_{1}}$ $\{{s_{0}}(0.4),{s_{1}}(0.6)\}$ $\{{s_{3}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{-1}}(0.2),{s_{0}}(0.8)\}$ $\{{s_{-1}}(1)\}$
${A_{2}}$ $\{{s_{3}}(0.4),{s_{4}}(0.6)\}$ $\{{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{3}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{2}}(1)\}$
${A_{3}}$ $\{{s_{0}}(0.6),{s_{1}}(0.4)\}$ $\{{s_{1}}(1)\}$ $\{{s_{-1}}(0.2),{s_{2}}(0.8)\}$ $\{{s_{-1}}(0.7),{s_{1}}(0.3)\}$
${A_{4}}$ $\{{s_{-4}}(0.4),{s_{-2}}(0.4)\}$ $\{{s_{1}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{-1}}(0.8),{s_{0}}(0.2)\}$ $\{{s_{2}}(1)\}$
Table 5
Normalized ordered decision matrices ${\bar{\boldsymbol{U}}_{1}}$, ${\bar{\boldsymbol{U}}_{2}}$ and ${\bar{\boldsymbol{U}}_{3}}$.
${u_{1}}$ ${u_{2}}$ ${u_{3}}$ ${u_{4}}$
${d_{1}}$ ${A_{1}}$ $\{{s_{-2}}(4/9),{s_{-2}}(0),{s_{1}}(5/9)\}$ $\{{s_{2}}(0),{s_{2}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{0}}(0),{s_{0}}(0),{s_{0}}(1)\}$ $\{{s_{-2}}(0.4),{s_{-1}}(0.6),{s_{-2}}(0)\}$
${A_{2}}$ $\{{s_{4}}(0),{s_{4}}(0),{s_{4}}(1)\}$ $\{{s_{2}}(0),{s_{2}}(4/9),{s_{4}}(5/9)\}$ $\{{s_{0}}(0.3),{s_{1}}(0.3),{s_{2}}(0.4)\}$ $\{{s_{2}}(0),{s_{2}}(0.3),{s_{3}}(0.7)\}$
${A_{3}}$ $\{{s_{1}}(0),{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{1}}(0.4),{s_{2}}(0.3),{s_{3}}(0.3)\}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(1)\}$ $\{{s_{-1}}(0.8),{s_{-1}}(0),{s_{1}}(0.2)\}$
${A_{4}}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$ $\{{s_{1}}(0),{s_{1}}(0.6),{s_{2}}(0.4)\}$ $\{{s_{0}}(0),{s_{0}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{3}}(0),{s_{3}}(0),{s_{3}}(1)\}$
${d_{2}}$ ${A_{1}}$ $\{{s_{-1}}(0.3),{s_{-1}}(0),{s_{1}}(0.7)\}$ $\{{s_{3}}(0.2),{s_{2}}(0.4),{s_{4}}(0.4)\}$ $\{{s_{0}}(0),{s_{0}}(0.7),{s_{1}}(0.3)\}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$
${A_{2}}$ $\{{s_{3}}(0.),{s_{3}}(4/9),{s_{4}}(5/9)\}$ $\{{s_{3}}(0),{s_{3}}(0),{s_{3}}(1)\}$ $\{{s_{3}}(0),{s_{3}}(0.5),{s_{4}}(0.5)\}$ $\{{s_{0}}(0),{s_{0}}(0.4),{s_{2}}(0.6)\}$
${A_{3}}$ $\{{s_{0}}(0),{s_{0}}(0.7),{s_{1}}(0.3)\}$ $\{{s_{1}}(0),{s_{1}}(6/9),{s_{2}}(3/9)\}$ $\{{s_{1}}(0),{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$
${A_{4}}$ $\{{s_{-2}}(1),{s_{-2}}(0),{s_{-2}}(0)\}$ $\{{s_{1}}(0),{s_{1}}(5/9),{s_{2}}(4/9)\}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$ $\{{s_{1}}(0.1),{s_{2}}(0.2),{s_{3}}(0.7)\}$
${d_{3}}$ ${A_{1}}$ $\{{s_{0}}(0),{s_{0}}(0.4),{s_{1}}(0.6)\}$ $\{{s_{3}}(0),{s_{3}}(0.6),{s_{4}}(0.4)\}$ $\{{s_{-1}}(0.2),{s_{-1}}(0),{s_{0}}(0.8)\}$ $\{{s_{-1}}(1),{s_{-1}}(0),{s_{-1}}(0)\}$
${A_{2}}$ $\{{s_{3}}(0),{s_{3}}(0.4),{s_{4}}(0.6)\}$ $\{{s_{1}}(0),{s_{1}}(0.3),{s_{2}}(0.7)\}$ $\{{s_{3}}(0),{s_{4}}(0.4),{s_{3}}(0.6)\}$ $\{{s_{0}}(0),{s_{0}}(0.1),{s_{1}}(0.9)\}$
${A_{3}}$ $\{{s_{0}}(0),{s_{0}}(0.6),{s_{1}}(0.4)\}$ $\{{s_{1}}(0),{s_{1}}(0),{s_{1}}(1)\}$ $\{{s_{-1}}(0.2),{s_{-1}}(0),{s_{2}}(0.8)\}$ $\{{s_{-1}}(0.7),{s_{-1}}(0),{s_{1}}(0.3)\}$
${A_{4}}$ $\{{s_{-4}}(0.5),{s_{-2}}(0.5),{s_{-4}}(0)\}$ $\{{s_{1}}(0),{s_{1}}(0.5),{s_{2}}(0.5)\}$ $\{{s_{-1}}(0.8),{s_{-1}}(0),{s_{0}}(0.2)\}$ $\{{s_{2}}(0),{s_{2}}(0),{s_{2}}(1)\}$
Table 6
The symmetric cross entropies between ${\bar{\boldsymbol{U}}_{1}}$, ${\bar{\boldsymbol{U}}_{2}}$ and ${\bar{\boldsymbol{U}}_{3}}$.
${\bar{\boldsymbol{U}}_{i}}$ ${\bar{\boldsymbol{U}}_{j}}$ $D({\bar{\boldsymbol{U}}_{k}}$, ${\bar{\boldsymbol{U}}_{\delta }})$ ${\bar{\boldsymbol{U}}_{1}}$ ${\bar{\boldsymbol{U}}_{2}}$ ${\bar{\boldsymbol{U}}_{3}}$
${\bar{\boldsymbol{U}}_{1}}$ – 0.2804 0.3965
${\bar{\boldsymbol{U}}_{2}}$ 0.2804 – 0.084
${\bar{\boldsymbol{U}}_{3}}$ 0.3965 0.084 –
Table 7
Collective normalized ordered matrix $\bar{\boldsymbol{U}}$.
${u_{1}}$ ${u_{2}}$ ${u_{3}}$ ${u_{4}}$
${d_{1}}$ ${A_{1}}$ $\begin{array}[t]{l}\{{s_{-1.03}}(0.05),{s_{-0.59}}(0.08),\\ {} {s_{-0.42}}(0.12),{s_{-0.03}}(0.19),\\ {} {s_{0.26}}(0.07),{s_{0.58}}(0.10),\\ {} {s_{0.71}}(0.16),{s_{1}}(0.23)\}\end{array}$ $\begin{array}[t]{l}\{{s_{2.40}}(0.14),{s_{2.66}}(0.07),\\ {} {s_{4}}(0.78)\}\end{array}$ $\begin{array}[t]{l}\{{s_{-0.30}}(0.14),{s_{0}}(0.56),\\ {} {s_{0.01}}(0.06),{s_{0.28}}(0.24)\}\end{array}$ $\{{s_{-1.40}}(0.4),{s_{-1}}(0.6)\}$
${A_{2}}$ $\{{s_{4}}(1)\}$ $\begin{array}[t]{l}\{{s_{2.10}}(0.13),{s_{2.32}}(0.31),\\ {} {s_{4}}(0.56)\}\end{array}$ $\begin{array}[t]{l}\{{s_{2.19}}(0.09),{s_{2.41}}(0.09),\\ {} {s_{2.66}}(0.12),{s_{4}}(0.7)\}\end{array}$ $\begin{array}[t]{l}\{{s_{1.02}}(0.01),{s_{1.29}}(0.11),\\ {} {s_{1.50}}(0.00),{s_{1.72}}(0.16),\\ {} {s_{1.78}}(0.03),{s_{1.98}}(0.25),\\ {} {s_{2.14}}(0.04),{s_{2.31}}(0.38)\}\end{array}$
${A_{3}}$ $\begin{array}[t]{l}\{{s_{0.46}}(0.13),{s_{0.71}}(0.05),\\ {} {s_{0.77}}(0.08),{s_{1}}(0.04),\\ {} {s_{1.02}}(0.29),{s_{1.23}}(0.13),\\ {} {s_{1.29}}(0.20),{s_{1.48}}(0.08)\}\end{array}$ $\begin{array}[t]{l}\{{s_{1}}(0.27),{s_{1.29}}(1.33),\\ {} {s_{1.47}}(0.20),{s_{1.72}}(0.10),\\ {} {s_{2.10}}(0.20),{s_{2.30}}(0.10)\}\end{array}$ $\begin{array}[t]{l}\{{s_{1.03}}(0.06),{s_{1.32}}(0.14),\\ {} {s_{1.78}}(0.24),{s_{2.00}}(0.56)\}\end{array}$ $\begin{array}[t]{l}\{{s_{-1}}(0.56),{s_{-0.25}}(0.24),\\ {} {s_{-0.02}}(0.14),{s_{0.59}}(0.06)\}\end{array}$
${A_{4}}$ $\{{s_{-2.1}}(0.5),{s_{-1.56}}(0.5)\}$ $\begin{array}[t]{l}\{{s_{1}}(0.17),{s_{1.30}}(0.13),\\ {} {s_{1.37}}(0.17),{s_{1.47}}(0.11),\\ {} {s_{1.63}}(0.13),{s_{1.72}}(0.09),\\ {} {s_{1.78}}(0.11),{s_{2.00}}(0.09)\}\end{array}$ $\begin{array}[t]{l}\{{s_{-0.55}}(0.40),{s_{-0.23}}(0.10),\\ {} {s_{0.61}}(0.40),{s_{0.85}}(0.10)\}\end{array}$ $\begin{array}[t]{l}\{{s_{2.35}}(0.1),{s_{2.51}}(0.2),\\ {} {s_{2.75}}(0.7)\}\end{array}$
Table 8
The Borda’s scores of alternatives for each alternatives.
${d_{1}}$ ${d_{2}}$ ${d_{3}}$ Borda’s scores
${A_{1}}$ 1 4 3 8
${A_{2}}$ 4 2 1 7
${A_{3}}$ 3 3 2 8
${A_{4}}$ 2 1 4 7
Theorem 1.
Given n PLTSs ${L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)$, the aggregated value by the PLWAM operator is also a PLTS as
(4)
\[\begin{aligned}{}& \mathrm{PLWAM}\big({L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)\big)\\ {} & \hspace{1em}={g^{-1}}\Bigg(\bigcup \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,n}}\Bigg\{\Bigg(1-{\prod \limits_{j=1}^{n}}{\big(1-g\big({L_{j}^{({k_{j}})}}\big)\big)^{{\omega _{j}}}}\Bigg)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\cdots {p_{n}^{({k_{n}})}}\big)\Bigg\}\Bigg).\end{aligned}\]
Furthermore,
(5)
\[ \sum \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,n}}{p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\cdots {p_{n}^{({k_{n}})}}=1.\]
Theorem 2.
Given n PLTSs ${L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)$, the aggregated value by the PLWGM operator is also a PLTS as follows
(7)
\[\begin{aligned}{}& \mathrm{PLWGM}\big({L_{1}}(p),{L_{2}}(p),\dots ,{L_{n}}(p)\big)\\ {} & \hspace{1em}={g^{-1}}\Bigg(\bigcup \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,n}}\Bigg\{\Bigg({\prod \limits_{j=1}^{n}}{\big(g\big({L_{j}^{({k_{j}})}}\big)\big)^{{\omega _{j}}}}\Bigg)\big({p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\cdots {p_{n}^{({k_{n}})}}\big)\Bigg\}\Bigg),\end{aligned}\]
and
(8)
\[ \sum \limits_{\genfrac{}{}{0pt}{}{{k_{j}}=1,2,\dots ,\mathrm{\# }{L_{j}}(p)}{j=1,2,\dots ,n}}{p_{1}^{({k_{1}})}}{p_{2}^{({k_{2}})}}\cdots {p_{n}^{({k_{n}})}}=1.\]
Theorem 3.
For two PLTSs ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$ with $\mathrm{\# }{\bar{L}_{1}}(p)=\mathrm{\# }{\bar{L}_{2}}(p)$, ${d_{M}}$ and ${d_{E}}$ are the novel Manhattan and Euclidean distances of PLTSs, respectively. They satisfy:
  • (i) $0\leqslant {d_{M}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big),{d_{E}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)\leqslant 1$;
  • (ii) ${d_{M}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)={d_{E}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)=0$ if and only if ${\bar{L}_{1}}(p)={\bar{L}_{2}}(p)$;
  • (iii) ${d_{M}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)={d_{M}}\big({\bar{L}_{2}}(p),{\bar{L}_{1}}(p)\big)$ and ${d_{E}}\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)={d_{E}}\big({\bar{L}_{2}}(p),{\bar{L}_{1}}(p)\big)$.
Theorem 4.
Suppose $f:[0,1]\to [0,1]$ is a strictly concave function and satisfies the following conditions:
  • (i) $f(t)=f(1-t)$ for any $t\in [0,1]$;
  • (ii) $f(1)=f(0)=0$;
  • (iii) f is monotone increasing in $[0,0.5]$ and monotone decreasing in $[0.5,1.0]$. Then the function
    (17)
    \[ {E_{F}}\big(\bar{L}(p)\big)=f\big(e\big(\bar{L}(p)\big)\big)\]
    is a fuzzy entropy of the PLTS $\bar{L}(p)$.
Theorem 5.
Given a normalized ordered PLTS $\bar{L}(p)$. The hesitancy index
(19)
\[ {E_{h}}\big(\bar{L}(p)\big)=h\big(\bar{L}(p)\big)=\left\{\begin{array}{l@{\hskip4.0pt}l}2{\textstyle\textstyle\sum _{i=1}^{\mathrm{\# }\bar{L}(p)}}|g({L^{(i)}})-\bar{g}|{p_{i}}\hspace{1em}& \mathrm{\# }\bar{L}(p)>1,\\ {} 0\hspace{1em}& \mathrm{\# }\bar{L}(p)=1,\end{array}\right.\]
is a hesitancy entropy, where $\bar{g}=\frac{1}{\mathrm{\# }\bar{L}(p)}{\textstyle\sum _{i=1}^{\mathrm{\# }\bar{L}(p)}}g({L^{(i)}})$.
Theorem 6.
Given two normalized ordered PLTSs ${\bar{L}_{1}}(p)$ and ${\bar{L}_{2}}(p)$, the function
(24)
\[\begin{aligned}{}CE\big({\bar{L}_{1}}(p),{\bar{L}_{2}}(p)\big)& =e\big({\bar{L}_{1}}(p)\big)\ln \frac{2e({\bar{L}_{1}}(p))}{e({\bar{L}_{1}}(p))+e({\bar{L}_{2}}(p))}\\ {} & \hspace{1em}+\big(1-e\big({\bar{L}_{1}}(p)\big)\big)\ln \frac{2(1-e({\bar{L}_{1}}(p)))}{1-e({\bar{L}_{1}}(p))+1-e({\bar{L}_{2}}(p))}\end{aligned}\]
is a cross entropy.

INFORMATICA

  • Online ISSN: 1822-8844
  • Print ISSN: 0868-4952
  • Copyright © 2023 Vilnius University

About

  • About journal

For contributors

  • OA Policy
  • Submit your article
  • Instructions for Referees
    •  

    •  

Contact us

  • Institute of Data Science and Digital Technologies
  • Vilnius University

    Akademijos St. 4

    08412 Vilnius, Lithuania

    Phone: (+370 5) 2109 338

    E-mail: informatica@mii.vu.lt

    https://informatica.vu.lt/journal/INFORMATICA
Powered by PubliMill  •  Privacy policy