1 Introduction
The fuzzy set (Zadeh, 1965) was first introduced to model vague or uncertain information by assigning each element a membership degree ranging between 0 and 1. The intuitionistic fuzzy set (IFS) (Atanassov, 1986) was then proposed to take into account both a membership and a non-membership degree whose sum is less than or equal to 1. This property allows the degree to which an object satisfies a criterion to be described more definitively and precisely than in the ordinary fuzzy environment (Cong, 2014; Si et al., 2019).
In practice, however, the sum of the satisfaction and dissatisfaction degrees provided by an expert is not always less than or equal to 1, and in such cases the concept of IFS clearly fails. It is then natural to require instead that the sum of their squares be less than or equal to 1. To clarify this fact, suppose that an expert expresses a preference in a decision making situation where the degree to which an object satisfies a criterion is $\frac{\sqrt{3}}{2}$, and the degree of dissatisfaction is $\frac{1}{2}$. Since $\frac{\sqrt{3}}{2}+\frac{1}{2}>1$, this case cannot be properly described by the IFS concept. Nevertheless, ${\big(\frac{\sqrt{3}}{2}\big)^{2}}+{\big(\frac{1}{2}\big)^{2}}\leqslant 1$, which shows the need for another concept to describe such a case, namely the Pythagorean fuzzy set (PFS) (Yager, 2013).
Besides that, we may consider a more general situation in which an expert expresses the degree to which an object satisfies a criterion as $\frac{\sqrt[3]{7}}{2}$ and the degree of dissatisfaction as $\frac{1}{2}$. It is easily seen that $\frac{\sqrt[3]{7}}{2}+\frac{1}{2}>1$ and also ${\big(\frac{\sqrt[3]{7}}{2}\big)^{2}}+{\big(\frac{1}{2}\big)^{2}}>1$, which indicates that this case can be described neither by an IFS nor by a PFS. However, ${\big(\frac{\sqrt[3]{7}}{2}\big)^{3}}+{\big(\frac{1}{2}\big)^{3}}\leqslant 1$. Such an observation makes clear the need for a concept more general than IFS and PFS, and this led to the introduction of the p-rung orthopair fuzzy set (p-ROFS) (Yager, 2017).
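To make the arithmetic above concrete, the following sketch (illustrative Python, not part of the original paper) checks for which values of p the motivating pair $\big(\frac{\sqrt[3]{7}}{2},\frac{1}{2}\big)$ satisfies the p-rung constraint ${\mu ^{p}}+{\nu ^{p}}\leqslant 1$; the function name is ours.

```python
def is_p_rofn(mu, nu, p, tol=1e-9):
    """Check the p-rung orthopair constraint mu^p + nu^p <= 1
    (with a small floating-point tolerance)."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu**p + nu**p <= 1.0 + tol

mu, nu = 7 ** (1 / 3) / 2, 0.5   # the motivating pair from the text
for p in (1, 2, 3):
    print(f"p={p}: mu^p + nu^p = {mu**p + nu**p:.4f}, admissible: {is_p_rofn(mu, nu, p)}")
# p=1: 1.4565 (not an IFN), p=2: 1.1648 (not a PFN), p=3: 1.0000 (a valid 3-rung number)
```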
The p-ROFS concept describes the degree to which an object satisfies a criterion together with the degree of dissatisfaction, such that the sum of the p-th powers of the satisfaction and dissatisfaction degrees is less than or equal to 1. It can easily be deduced that the space of p-ROFS membership grades is larger than the spaces of Pythagorean and intuitionistic membership grades; that is, any Pythagorean or intuitionistic membership grade is also a p-ROFS membership grade, but in general not every p-ROFS membership grade is a Pythagorean or an intuitionistic one. This obviously implies that the concept of p-ROFS can be employed in cases where the concepts of PFS and IFS cannot.
Apart from the above-mentioned advantages, an immediate and very interesting benefit of p-ROFS definition is that it provides the experts with greater freedom in modelling the forms of imprecise information.
A large number of studies deal with the concept of p-ROFS, which has been applied in many different contexts. Liu and Wang (2018) introduced and investigated the p-rung orthopair fuzzy weighted averaging operator and the p-rung orthopair fuzzy weighted geometric operator. Moreover, the p-rung orthopair fuzzy Bonferroni mean and the p-rung orthopair fuzzy Heronian mean were proposed by Liu and Liu (2018) and Wei et al. (2018), respectively. Subsequently, the p-rung orthopair fuzzy Archimedean Bonferroni mean operators were developed by Liu and Wang (2019). Beyond aggregation operators, a growing number of studies address other application fields; for instance, Du (2018) proposed Minkowski-type distance measures for p-ROFSs, emphasizing their application in decision making, and Zhang C. et al. (2019, in press) dealt with additive and multiplicative consistency analysis for p-ROF preference relations.
One of the leading topics in the recent development of p-ROFSs has been the ranking function, which plays an essential role in decision making problems. The pioneering works in this regard are those of Yager (2017) and Wei et al. (2018), in which non-algorithmic ranking techniques for p-ROFSs are described. As will be demonstrated later, Yager's (2017) and Wei et al.'s (2018) score functions cannot differentiate many p-ROFNs in some situations. Another technique is that of Liu and Wang (2018), an algorithmic ranking technique which uses both the score and accuracy functions of p-ROFSs. By taking the impact of the membership and non-membership degrees together with the hesitation information into account, Peng et al. (2018) introduced another algorithmic ranking technique for p-ROFSs. However, both Liu and Wang's (2018) and Peng et al.'s (2018) techniques are algorithmic in form and definitely require more computation than the non-algorithmic techniques. Moreover, since Liu and Wang's (2018) technique does not consider the influence of abstention, part of the p-ROFS information may be lost. Furthermore, the curve function $f(x)=\frac{\exp (x)}{\exp (x)+1}$ which appears in Peng et al.'s (2018) technique adds complexity to the evaluation of the score function.
Regarding the above-mentioned deficiencies of the existing p-ROFS ranking techniques, we are motivated here to investigate an effective score function for p-ROFSs in the form of non-algorithmic ranking technique which is constructed by considering the impact of membership and non-membership degrees together with the hesitation information.
The present contribution is organized as follows: the p-ROFS concept and a brief review of some preliminaries are given in Section 2. Section 3 reviews three kinds of p-ROFS ranking orders and then introduces an innovative score function for p-ROFSs together with its main properties. Section 4 is devoted to the application of the p-ROFS score function to multiple criteria decision making (MCDM) problems, emphasizing the superiority of the proposed score function over the existing ones. Finally, the article is concluded in Section 5.
2 The p-Rung Orthopair Fuzzy Set (p-ROFS)
In this section, we first review the concepts of IFS and PFS, and then focus on the concept of p-rung orthopair fuzzy set (p-ROFS) and its essential set-theoretic and algebraic operations.
Definition 1 (See Atanassov, 1986).
Let X be the universe of discourse. An intuitionistic fuzzy set (IFS) on X is defined in terms of
\[ {A_{\mathit{IFS}}}=\big\{\big\langle x,{\mu _{{A_{\mathit{IFS}}}}}(x),{\nu _{{A_{\mathit{IFS}}}}}(x)\big\rangle :x\in X\big\},\]
in which ${\mu _{{A_{\mathit{IFS}}}}}$ and ${\nu _{{A_{\mathit{IFS}}}}}$ denote the membership and non-membership functions of ${A_{\mathit{IFS}}}$ such that $0\leqslant {\mu _{{A_{\mathit{IFS}}}}}(x)+{\nu _{{A_{\mathit{IFS}}}}}(x)\leqslant 1$ for any $x\in X$.
Definition 2 (See Zhang and Xu, 2014).
Let X be the universe of discourse. A Pythagorean fuzzy set (PFS) on X is defined in terms of
\[ {A_{\mathit{PFS}}}=\big\{\big\langle x,{\mu _{{A_{\mathit{PFS}}}}}(x),{\nu _{{A_{\mathit{PFS}}}}}(x)\big\rangle :x\in X\big\},\]
in which ${\mu _{{A_{\mathit{PFS}}}}}$ and ${\nu _{{A_{\mathit{PFS}}}}}$ denote the membership and non-membership functions of ${A_{\mathit{PFS}}}$ such that $0\leqslant {\mu _{{A_{\mathit{PFS}}}}^{2}}(x)+{\nu _{{A_{\mathit{PFS}}}}^{2}}(x)\leqslant 1$ for any $x\in X$.
Now, if we are interested in describing a situation in which an expert expresses the degree to which an object satisfies a criterion as $\frac{\sqrt[3]{7}}{2}$ and the degree of dissatisfaction as $\frac{1}{2}$, then we immediately find that neither the IFS nor the PFS concept can capture such a situation, because $\frac{\sqrt[3]{7}}{2}+\frac{1}{2}>1$ and ${\big(\frac{\sqrt[3]{7}}{2}\big)^{2}}+{\big(\frac{1}{2}\big)^{2}}>1$. This led Yager (2017) to define a concept more general than IFS and PFS, which not only overcomes this shortcoming but also covers the present case well.
Definition 3 (See Yager, 2017).
Let X be the universe of discourse. A p-rung orthopair fuzzy set (p-ROFS) on X is defined in terms of
\[ {A_{\textit{p-ROFS}}}=\big\{\big\langle x,{\mu _{{A_{\textit{p-ROFS}}}}}(x),{\nu _{{A_{\textit{p-ROFS}}}}}(x)\big\rangle :x\in X\big\},\]
in which ${\mu _{{A_{\textit{p-ROFS}}}}}$ and ${\nu _{{A_{\textit{p-ROFS}}}}}$ denote the membership and non-membership functions of ${A_{\textit{p-ROFS}}}$ such that $0\leqslant {\mu _{{A_{\textit{p-ROFS}}}}^{p}}(x)+{\nu _{{A_{\textit{p-ROFS}}}}^{p}}(x)\leqslant 1$ for any $x\in X$, where $p\in [1,\infty )$.
Moreover, for notational convenience, we name $({\mu _{{A_{\text{p-ROFS}\hspace{2.5pt}}}}}(x),{\nu _{{A_{\text{p-ROFS}\hspace{2.5pt}}}}}(x))$ a p-rung orthopair fuzzy number (p-ROFN), and it is simply indicated hereafter by $({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}})$.
Fig. 1.
Comparison of spaces of p-ROFSs for the parameters $p=1,1.5,2,3,5$.
As can be seen from Fig. 1, the intuitionistic membership degrees are the points located in the region bounded by the graph of $x+y\leqslant 1$, and the Pythagorean membership degrees are the points located in the region bounded by ${x^{2}}+{y^{2}}\leqslant 1$. By contrast, the p-rung orthopair membership degrees are the points located in the region bounded by ${x^{p}}+{y^{p}}\leqslant 1$, where $p\in [1,\infty )$. This implies that the p-rung orthopair membership degrees provide a wider representation of non-standard membership degrees than the intuitionistic and Pythagorean membership degrees. Indeed, any intuitionistic fuzzy number (IFN) and any Pythagorean fuzzy number (PFN) can be considered as a p-ROFN, but the converse does not hold; that is, not every p-ROFN is an IFN or a PFN.
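As a quick numerical illustration of this nesting (a sketch added here, not from the original paper), one can sample points from the unit square and verify that every pair in the intuitionistic region also lies in the Pythagorean and higher-rung regions; the helper name is ours.

```python
import random

def admissible(mu, nu, p):
    """Check mu^p + nu^p <= 1 for a candidate pair (mu, nu) in the unit square."""
    return mu**p + nu**p <= 1.0

random.seed(1)
points = [(random.random(), random.random()) for _ in range(50_000)]
counts = {p: sum(admissible(mu, nu, p) for mu, nu in points) for p in (1, 2, 3, 5)}
print(counts)   # the admissible region grows with p, mirroring the nested curves of Fig. 1
# every intuitionistic pair (p = 1) is automatically Pythagorean (p = 2) and 3-rung:
assert all(admissible(mu, nu, 3) for mu, nu in points if admissible(mu, nu, 1))
```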
In what follows, we review a number of set-theoretic and algebraic operations on p-ROFNs.
Definition 4 (See Yager, 2017).
For any p-ROFNs ${A_{\textit{p-ROFN}\hspace{2.5pt}}}$ and ${B_{\textit{p-ROFN}\hspace{2.5pt}}}$, the following operations are defined:
In a manner analogous to IFNs and PFNs, the subset relation for p-ROFNs is defined as follows:
Definition 5 (See Yager, 2017).
For any p-ROFNs ${A_{\textit{p-ROFN}\hspace{2.5pt}}}$ and ${B_{\textit{p-ROFN}\hspace{2.5pt}}}$, we indicate that ${A_{\textit{p-ROFN}\hspace{2.5pt}}}\subseteq {B_{\textit{p-ROFN}\hspace{2.5pt}}}$ if and only if ${\mu _{{A_{\textit{p-ROFN}\hspace{2.5pt}}}}}\leqslant {\mu _{{B_{\textit{p-ROFN}\hspace{2.5pt}}}}}$ together with ${\nu _{{A_{\textit{p-ROFN}\hspace{2.5pt}}}}}\geqslant {\nu _{{B_{\textit{p-ROFN}\hspace{2.5pt}}}}}$.
Definition 6 (See Yager, 2017).
For any p-ROFNs ${A_{\textit{p-ROFN}\hspace{2.5pt}}}$ and ${B_{\textit{p-ROFN}\hspace{2.5pt}}}$, the following operations are defined:
An immediate consequence from the above definition is that
for any
$k>0$.
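Since the displayed operations of Definitions 4 and 6 are not reproduced above, the following sketch (illustrative Python, not from the original paper) implements the set-theoretic operations of Yager (2017) and the algebraic operations commonly used in the p-rung orthopair literature, which Definitions 4 and 6 presumably follow; all function names and the sample numbers are ours.

```python
from typing import NamedTuple

class PROFN(NamedTuple):
    """A p-rung orthopair fuzzy number (mu, nu) with mu^p + nu^p <= 1."""
    mu: float
    nu: float

P = 3  # the rung parameter p, fixed here for simplicity

def complement(a: PROFN) -> PROFN:
    return PROFN(a.nu, a.mu)                      # swap membership and non-membership

def intersection(a: PROFN, b: PROFN) -> PROFN:
    return PROFN(min(a.mu, b.mu), max(a.nu, b.nu))

def union(a: PROFN, b: PROFN) -> PROFN:
    return PROFN(max(a.mu, b.mu), min(a.nu, b.nu))

def add(a: PROFN, b: PROFN) -> PROFN:             # algebraic sum (probabilistic-sum form)
    return PROFN((a.mu**P + b.mu**P - a.mu**P * b.mu**P) ** (1 / P), a.nu * b.nu)

def mul(a: PROFN, b: PROFN) -> PROFN:             # algebraic product
    return PROFN(a.mu * b.mu, (a.nu**P + b.nu**P - a.nu**P * b.nu**P) ** (1 / P))

def scale(k: float, a: PROFN) -> PROFN:           # scalar multiple kA, k > 0
    return PROFN((1 - (1 - a.mu**P) ** k) ** (1 / P), a.nu**k)

def power(a: PROFN, k: float) -> PROFN:           # power A^k, k > 0
    return PROFN(a.mu**k, (1 - (1 - a.nu**P) ** k) ** (1 / P))

A, B = PROFN(7 ** (1 / 3) / 2, 0.5), PROFN(0.6, 0.7)
print(union(A, B), add(A, B), scale(2.0, A))
```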
3 Ranking Techniques of p-ROFNs
Throughout the present section, we first review the existing ranking techniques for p-ROFNs, and then we propose a new parametric score function for p-ROFNs that takes both the membership degree and the hesitation degree of a p-ROFN into account.
3.1 Non-Algorithmic Ranking Technique
Assume that ${A_{\text{p-ROFN}}}=({\mu _{{A_{\text{p-ROFN}}}}},{\nu _{{A_{\text{p-ROFN}}}}})$ denotes a p-ROFN, where ${\mu _{{A_{\text{p-ROFN}}}}}$ indicates the degree of support and ${\nu _{{A_{\text{p-ROFN}}}}}$ stands for the degree of disagreement. It was Yager (2017) who first defined the following ranking technique for p-ROFNs:
Definition 7 (See Yager, 2017).
For any p-ROFN ${A_{\textit{p-ROFN}}}$, Yager's score function is constructed as
\[ S{c_{Y}}({A_{\textit{p-ROFN}}})={\mu _{{A_{\textit{p-ROFN}}}}^{p}}-{\nu _{{A_{\textit{p-ROFN}}}}^{p}},\]
where $p\in [1,\infty )$, and also $-1\leqslant S{c_{Y}}({A_{\textit{p-ROFN}}})\leqslant 1$.
With the help of this setting, we are able to present the comparison rule between the two p-ROFNs ${A_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}})$ and ${B_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}})$ as the following:
-
• if $S{c_{Y}}({A_{\text{p-ROFN}\hspace{2.5pt}}})<S{c_{Y}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, then ${A_{\text{p-ROFN}\hspace{2.5pt}}}$ is considered smaller than ${B_{\text{p-ROFN}\hspace{2.5pt}}}$ and denoted by ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\prec _{Y}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$;
-
• if $S{c_{Y}}({A_{\text{p-ROFN}\hspace{2.5pt}}})=S{c_{Y}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, then we get ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\simeq _{Y}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$.
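As a minimal computational sketch of this non-algorithmic rule (illustrative Python, not part of the original paper; the closed form ${\mu ^{p}}-{\nu ^{p}}$ follows the reconstruction above and is consistent with the values reported in Table 1):

```python
def sc_y(mu, nu, p):
    """Yager's score Sc_Y = mu^p - nu^p (takes values in [-1, 1])."""
    return mu**p - nu**p

# The pair discussed later in Section 3.4: both scores are approximately -0.08,
# so Yager's rule alone cannot separate these two 2-rung orthopair fuzzy numbers.
a, b = (0.1, 0.3), (0.2, 0.12 ** 0.5)
print(sc_y(*a, 2), sc_y(*b, 2))
```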
There exists another p-ROFN score function, introduced by Wei et al. (2018), which behaves much like Yager's (2017) score function.
Definition 8 (See Wei et al., 2018).
For any p-ROFN ${A_{\textit{p-ROFN}}}$, Wei et al.'s score function is constructed as
\[ S{c_{W}}({A_{\textit{p-ROFN}}})=\frac{1+{\mu _{{A_{\textit{p-ROFN}}}}^{p}}-{\nu _{{A_{\textit{p-ROFN}}}}^{p}}}{2},\]
where $p\in [1,\infty )$, and also $0\leqslant S{c_{W}}({A_{\textit{p-ROFN}}})\leqslant 1$.
With the help of this setting, we are able to present another comparison rule between the two p-ROFNs
${A_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}})$ and
${B_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}})$ as the following:
-
• if $S{c_{W}}({A_{\text{p-ROFN}\hspace{2.5pt}}})<S{c_{W}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, then ${A_{\text{p-ROFN}\hspace{2.5pt}}}$ is considered smaller than ${B_{\text{p-ROFN}\hspace{2.5pt}}}$ and denoted by ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\prec _{W}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$;
-
• if $S{c_{W}}({A_{\text{p-ROFN}\hspace{2.5pt}}})=S{c_{W}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, then we get ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\simeq _{W}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$.
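A corresponding sketch for Wei et al.'s score (illustrative Python, not from the original paper; the form $(1+{\mu ^{p}}-{\nu ^{p}})/2$ matches the stated range and the values reported in Table 1). Since this score is an affine rescaling of Yager's, it induces exactly the same ordering.

```python
def sc_w(mu, nu, p):
    """Wei et al.'s score Sc_W = (1 + mu^p - nu^p) / 2 (takes values in [0, 1])."""
    return (1.0 + mu**p - nu**p) / 2.0

# e.g. for A = (sqrt(0.22), 0.7) and p = 1 this gives 0.3845, as in Table 1
print(round(sc_w(0.22 ** 0.5, 0.7, 1), 4))
```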
3.2 Algorithmic Ranking Techniques
Following Yager’s (
2017) non-algorithmic ranking technique for p-ROFNs, Liu and Wang (
2018) presented an algorithmic ranking technique by considering both score and accuracy functions of p-ROFNs.
Definition 9 (See Liu and Wang, 2018).
For any p-ROFN ${A_{\textit{p-ROFN}}}$, the score and accuracy functions are respectively defined by
\[ S{c_{\mathit{LW}}}({A_{\textit{p-ROFN}}})={\mu _{{A_{\textit{p-ROFN}}}}^{p}}-{\nu _{{A_{\textit{p-ROFN}}}}^{p}}\hspace{1em}\text{and}\hspace{1em}Ac{c_{\mathit{LW}}}({A_{\textit{p-ROFN}}})={\mu _{{A_{\textit{p-ROFN}}}}^{p}}+{\nu _{{A_{\textit{p-ROFN}}}}^{p}},\]
where $p\in [1,\infty )$, and moreover, $-1\leqslant S{c_{\mathit{LW}}}({A_{\textit{p-ROFN}}})\leqslant 1$ and $0\leqslant Ac{c_{\mathit{LW}}}({A_{\textit{p-ROFN}}})\leqslant 1$.
Using this setting, the comparison rule between the two p-ROFNs
${A_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}})$ and
${B_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}})$ is considered as the following:
-
• if $S{c_{\mathit{LW}}}({A_{\text{p-ROFN}\hspace{2.5pt}}})<S{c_{\mathit{LW}}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, then ${A_{\text{p-ROFN}\hspace{2.5pt}}}$ is considered smaller than ${B_{\text{p-ROFN}\hspace{2.5pt}}}$ and denoted by ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\prec _{\mathit{LW}}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$;
-
• if $S{c_{\mathit{LW}}}({A_{\text{p-ROFN}\hspace{2.5pt}}})=S{c_{\mathit{LW}}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, then
-
– if $Ac{c_{\mathit{LW}}}({A_{\text{p-ROFN}\hspace{2.5pt}}})=Ac{c_{\mathit{LW}}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, hence we result that ${\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}={\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}}$ together with ${\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}={\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}}$, that is, ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\simeq _{\mathit{LW}}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$;
-
– if $Ac{c_{\mathit{LW}}}({A_{\text{p-ROFN}\hspace{2.5pt}}})<Ac{c_{\mathit{LW}}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, hence we get ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\prec _{\mathit{LW}}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$.
As can be observed from equations (
8) and (
10), Yager’s (
2017) and Liu and Wang’s (
2018) score functions have the same construction. To simplify the next considerations, we will denote both of them with the notation
$S{c_{\mathit{YLW}}}$ instead.
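Since $S{c_{\mathit{LW}}}$ coincides with $S{c_{Y}}$ and ties are broken by the accuracy $Ac{c_{\mathit{LW}}}$, the two-stage rule can be sketched as follows (illustrative Python, not part of the original paper; function names and the tolerance handling are ours):

```python
def lw_compare(a, b, p, tol=1e-12):
    """Liu and Wang's two-stage rule: order by the score mu^p - nu^p and break
    ties with the accuracy mu^p + nu^p (a sketch of the algorithmic technique)."""
    (mu_a, nu_a), (mu_b, nu_b) = a, b
    sc_a, sc_b = mu_a**p - nu_a**p, mu_b**p - nu_b**p
    if abs(sc_a - sc_b) > tol:
        return "A<B" if sc_a < sc_b else "A>B"
    acc_a, acc_b = mu_a**p + nu_a**p, mu_b**p + nu_b**p
    if abs(acc_a - acc_b) <= tol:
        return "A~B"
    return "A<B" if acc_a < acc_b else "A>B"

# Table 1, p = 2: the scores of A and B tie at -0.27, so the accuracies decide: A>B.
print(lw_compare((0.22 ** 0.5, 0.7), (0.3, 0.6), 2))
```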
In continuation of Liu and Wang’s (
2018) technique, Peng
et al. (
2018) introduced a ranking technique by taking the impact of membership and non-membership degrees together with the hesitation information into consideration.
Definition 10 (See Peng et al., 2018).
For any p-ROFN ${A_{\textit{p-ROFN}}}$, the score function is defined by
\[ S{c_{\mathit{PDG}}}({A_{\textit{p-ROFN}}})=\big({\mu _{{A_{\textit{p-ROFN}}}}^{p}}-{\nu _{{A_{\textit{p-ROFN}}}}^{p}}\big)+\bigg(\frac{\exp \big({\mu _{{A_{\textit{p-ROFN}}}}^{p}}-{\nu _{{A_{\textit{p-ROFN}}}}^{p}}\big)}{\exp \big({\mu _{{A_{\textit{p-ROFN}}}}^{p}}-{\nu _{{A_{\textit{p-ROFN}}}}^{p}}\big)+1}-\frac{1}{2}\bigg){\pi ^{p}}({A_{\textit{p-ROFN}}}),\]
where $p\in [1,\infty )$, and moreover, $-1\leqslant S{c_{\mathit{PDG}}}({A_{\textit{p-ROFN}}})\leqslant 1$ and ${\pi ^{p}}({A_{\textit{p-ROFN}}})=1-{\mu _{{A_{\textit{p-ROFN}}}}^{p}}-{\nu _{{A_{\textit{p-ROFN}}}}^{p}}$.
Based on the above score function
$S{c_{\mathit{PDG}}}$ and the hesitation information
π, Peng
et al. (
2018) proposed an algorithmic ranking technique as the following:
For any two p-ROFNs ${A_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}})$ and ${B_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}})$, we deduce that
-
• if $S{c_{\mathit{PDG}}}({A_{\text{p-ROFN}\hspace{2.5pt}}})<S{c_{\mathit{PDG}}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, then ${A_{\text{p-ROFN}\hspace{2.5pt}}}$ is considered smaller than ${B_{\text{p-ROFN}\hspace{2.5pt}}}$ and denoted by ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\prec _{\mathit{PDG}}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$;
-
• if $S{c_{\mathit{PDG}}}({A_{\text{p-ROFN}\hspace{2.5pt}}})=S{c_{\mathit{PDG}}}({B_{\text{p-ROFN}\hspace{2.5pt}}})$, then
-
– if $\pi ({A_{\text{p-ROFN}\hspace{2.5pt}}})=\pi ({B_{\text{p-ROFN}\hspace{2.5pt}}})$, hence it results that ${\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}={\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}}$ together with ${\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}={\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}}$, that is, ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\simeq _{\mathit{PDG}}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$;
-
– if $\pi ({A_{\text{p-ROFN}\hspace{2.5pt}}})<\pi ({B_{\text{p-ROFN}\hspace{2.5pt}}})$, hence we have ${A_{\text{p-ROFN}\hspace{2.5pt}}}{\succ _{\mathit{PDG}}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$.
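A sketch of this two-stage rule (illustrative Python, not from the original paper); the closed form of $S{c_{\mathit{PDG}}}$ used here is one common formulation involving the curve $\frac{\exp (x)}{\exp (x)+1}$ and is consistent with the values reported in Tables 1, 3 and 5.

```python
from math import exp

def sc_pdg(mu, nu, p):
    """Peng et al.'s score: the hesitation pi^p = 1 - mu^p - nu^p is weighted by
    the sigmoid-type curve exp(x)/(exp(x)+1) of x = mu^p - nu^p."""
    x = mu**p - nu**p
    pi_p = 1.0 - mu**p - nu**p
    return x + (exp(x) / (exp(x) + 1.0) - 0.5) * pi_p

def pdg_compare(a, b, p, tol=1e-12):
    """Order by Sc_PDG; on a tie, the p-ROFN with the smaller hesitation ranks higher."""
    sa, sb = sc_pdg(*a, p), sc_pdg(*b, p)
    if abs(sa - sb) > tol:
        return "A<B" if sa < sb else "A>B"
    pa, pb = 1 - a[0]**p - a[1]**p, 1 - b[0]**p - b[1]**p
    return "A~B" if abs(pa - pb) <= tol else ("A>B" if pa < pb else "A<B")

print(round(sc_pdg(0.6, 0.3, 2), 4), round(sc_pdg(0.5, 0.2, 2), 4))  # 0.3069, 0.2471 as in Table 3
```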
In view of this setting, Peng et al. (2018) demonstrated that the above-defined score function $S{c_{\mathit{PDG}}}$ is monotonically increasing with respect to ${\mu _{{A_{\text{p-ROFN}}}}}$ and monotonically decreasing with respect to ${\nu _{{A_{\text{p-ROFN}}}}}$.
Furthermore, Peng
et al. (
2018) indicated that
-
• $S{c_{\mathit{PDG}}}({A_{\text{p-ROFN}\hspace{2.5pt}}})=-1$ if and only if ${A_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}=0,{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}=1)$;
-
• $S{c_{\mathit{PDG}}}({A_{\text{p-ROFN}\hspace{2.5pt}}})=1$ if and only if ${A_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}=1,{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}=0)$.
Keeping the relation given in Definition 7 in mind, Peng et al. (2018) showed that:
Theorem 1 (See Peng et al., 2018).
For any two p-ROFNs ${A_{\text{p-ROFN}}}$ and ${B_{\text{p-ROFN}}}$, Peng et al.'s (2018) ranking order satisfies
We are now in a position to provide a brief summary of advantages and disadvantages of the ranking techniques described above:
-
• Following from the non-algorithmic techniques of Yager (
2017) and Wei
et al. (
2018) given respectively by (
8) and (
9), we easily find that Yager’s (
2017) and Wei
et al.’s (
2018) score functions cannot differentiate many p-ROFNs in some situations. For instance, in the case where $p=2$ together with ${A_{\text{p-ROFN}}}=(0.1,0.3)$ and ${B_{\text{p-ROFN}}}=(0.2,\sqrt{0.12})$, we obtain $S{c_{Y}}({A_{\text{p-ROFN}}})=S{c_{Y}}({B_{\text{p-ROFN}}})=-0.08$ and $S{c_{W}}({A_{\text{p-ROFN}}})=S{c_{W}}({B_{\text{p-ROFN}}})=0.46$, which is clearly unreasonable.
-
• Both Liu and Wang’s (
2018) and Peng
et al.’s (
2018) techniques are algorithmic in nature and require more computations than a non-algorithmic technique. More precisely, the information of p-ROFN may be lost when Liu and Wang’s (
2018) technique is implemented and this is due to the fact that it does not consider the influence of abstention. Moreover, from the curve function
$f(x)=\frac{\exp (x)}{\exp (x)+1}$ in Peng
et al.’s (
2018) technique, we easily find that it causes more complexity in the evaluation of the score function.
On the basis of the above-mentioned deficiencies of the existing techniques, we still believe that an effective score function in the form of non-algorithmic ranking technique should be constructed by considering the impact of membership and non-membership degrees together with the hesitation information.
3.3 Innovative Non-Algorithmic Ranking Technique
Definition 11.
For any p-ROFN ${A_{\textit{p-ROFN}}}$, the innovative score function is defined by
\[ S{c_{F}}({A_{\textit{p-ROFN}}})={\mu _{{A_{\textit{p-ROFN}}}}^{p}}+\lambda {\pi ^{p}}({A_{\textit{p-ROFN}}}),\]
or equivalently
\[ S{c_{F}}({A_{\textit{p-ROFN}}})={\mu _{{A_{\textit{p-ROFN}}}}^{p}}+\lambda \big(1-{\mu _{{A_{\textit{p-ROFN}}}}^{p}}-{\nu _{{A_{\textit{p-ROFN}}}}^{p}}\big),\]
where $0<\lambda <1$.
It is interesting to note that the score function $S{c_{F}}$ might be re-formulated as
\[ S{c_{F}}({A_{\textit{p-ROFN}}})=(1-\lambda ){\mu _{{A_{\textit{p-ROFN}}}}^{p}}+\lambda \big(1-{\nu _{{A_{\textit{p-ROFN}}}}^{p}}\big),\]
where $0<\lambda <1$.
Remark 1.
In view of relation (16), the parameter λ weighting ${\mu _{{A_{\textit{p-ROFN}}}}^{p}}$ and $(1-{\nu _{{A_{\textit{p-ROFN}}}}^{p}})$ represents the attitudinal character of $S{c_{F}}$. That is, as λ increases from 0 to 1, the term ${\mu _{{A_{\textit{p-ROFN}}}}^{p}}$ receives less attention and, conversely, the term $(1-{\nu _{{A_{\textit{p-ROFN}}}}^{p}})$ receives more attention.
From another point of view, the score function $S{c_{F}}$ allocates a λ-proportion of the hesitation degree between the membership degree ${\mu _{{A_{\text{p-ROFN}}}}^{p}}$ and the non-membership degree ${\nu _{{A_{\text{p-ROFN}}}}^{p}}$. In this regard, the different values of λ reflect the integration of subjective and objective decision making information. These observations follow readily from the insights gained in the next section.
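A minimal sketch of the proposed score and of the attitudinal role of λ described in Remark 1 (illustrative Python, not part of the original paper):

```python
def sc_f(mu, nu, p, lam):
    """Proposed score Sc_F = mu^p + lambda*(1 - mu^p - nu^p)
    = (1 - lambda)*mu^p + lambda*(1 - nu^p)."""
    return mu**p + lam * (1.0 - mu**p - nu**p)

mu, nu, p = 0.6, 0.3, 2
for lam in (0.1, 0.5, 0.9):
    print(lam, round(sc_f(mu, nu, p, lam), 4))
# lambda = 0.1 -> 0.4150, 0.5 -> 0.6350, 0.9 -> 0.8550: as lambda grows, the score moves
# from the membership end mu^p towards the certainty bound 1 - nu^p.
# Boundary cases: sc_f(0, 1, p, lam) == 0 and sc_f(1, 0, p, lam) == 1 for any lambda.
```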
Such a consideration in defining a parametrized score function is quite common and reasonable; similar constructions can be found in Zhang et al. (2018, 2019), Garg (2016) and elsewhere.
The formula (16) is nothing other than a convex combination of the endpoints of the certainty interval of ${A_{\text{p-ROFN}}}$, namely $[{\mu _{{A_{\text{p-ROFN}}}}^{p}},1-{\nu _{{A_{\text{p-ROFN}}}}^{p}}]$. Needless to say, this interval is well defined because ${\mu _{{A_{\text{p-ROFN}}}}^{p}}+{\nu _{{A_{\text{p-ROFN}}}}^{p}}\leqslant 1$ implies ${\mu _{{A_{\text{p-ROFN}}}}^{p}}\leqslant 1-{\nu _{{A_{\text{p-ROFN}}}}^{p}}$.
It is also of some interest to note that the above-introduced innovative score function $S{c_{F}}$ inherits the terms of membership degree ${\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}$, non-membership degree ${\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}$ and hesitation information $\pi ({A_{\text{p-ROFN}\hspace{2.5pt}}})$.
It is easily seen from Definition
11 that
-
• $S{c_{F}}({A_{\text{p-ROFN}\hspace{2.5pt}}})=0$ if and only if ${A_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}=0,{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}=1)$;
-
• $S{c_{F}}({A_{\text{p-ROFN}\hspace{2.5pt}}})=1$ if and only if ${A_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}=1,{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}=0)$.
Let us now investigate a number of other properties of innovative score function $S{c_{F}}$ which are given below.
Theorem 2.
For any p-ROFN
${A_{\text{p-ROFN}\hspace{2.5pt}}}$, the innovative score function $S{c_{F}}({A_{\text{p-ROFN}\hspace{2.5pt}}})$ given by (
14)
belongs to the interval $[{\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}},1-{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}})$.
Proof.
For any p-ROFN
${A_{\text{p-ROFN}\hspace{2.5pt}}}$, it holds that
${\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}+{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}\leqslant 1$ or
$1-{\mu _{{A_{\text{p-ROFN}}}}^{p}}-{\nu _{{A_{\text{p-ROFN}}}}^{p}}\geqslant 0$. If we consider $0<\lambda <1$, then
\[ S{c_{F}}({A_{\text{p-ROFN}}})={\mu _{{A_{\text{p-ROFN}}}}^{p}}+\lambda \big(1-{\mu _{{A_{\text{p-ROFN}}}}^{p}}-{\nu _{{A_{\text{p-ROFN}}}}^{p}}\big)\geqslant {\mu _{{A_{\text{p-ROFN}}}}^{p}},\]
which implies that $S{c_{F}}({A_{\text{p-ROFN}}})\geqslant {\mu _{{A_{\text{p-ROFN}}}}^{p}}$.
On the other hand, from the fact that $0<\lambda <1$, we get
\[ S{c_{F}}({A_{\text{p-ROFN}}})={\mu _{{A_{\text{p-ROFN}}}}^{p}}+\lambda \big(1-{\mu _{{A_{\text{p-ROFN}}}}^{p}}-{\nu _{{A_{\text{p-ROFN}}}}^{p}}\big)<{\mu _{{A_{\text{p-ROFN}}}}^{p}}+\big(1-{\mu _{{A_{\text{p-ROFN}}}}^{p}}-{\nu _{{A_{\text{p-ROFN}}}}^{p}}\big)=1-{\nu _{{A_{\text{p-ROFN}}}}^{p}}.\]
Therefore, we conclude that ${\mu _{{A_{\text{p-ROFN}}}}^{p}}\leqslant S{c_{F}}({A_{\text{p-ROFN}}})<1-{\nu _{{A_{\text{p-ROFN}}}}^{p}}$. □
Corollary 1.
For any p-ROFN
${A_{\text{p-ROFN}\hspace{2.5pt}}}$, the innovative score function $S{c_{F}}({A_{\text{p-ROFN}\hspace{2.5pt}}})$ given by (
14)
satisfies $S{c_{F}}({A_{\text{p-ROFN}\hspace{2.5pt}}})\in [0,1)$.
Proof.
The proof comes from the fact that $0\leqslant {\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}\leqslant S{c_{F}}({A_{\text{p-ROFN}\hspace{2.5pt}}})<1-{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}\leqslant 1$. □
Theorem 3.
For any two p-ROFNs ${A_{\text{p-ROFN}}}$ and ${B_{\text{p-ROFN}}}$, we conclude that
Proof.
(Necessity) The relation
${A_{\text{p-ROFN}\hspace{2.5pt}}}{\preceq _{Y}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$ holds true if and only if
From the relation (
18), we get that
${\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}\leqslant {\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}$ for any
$p\in [1,\infty )$, and consequently,
$(1-\lambda ){\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}\leqslant (1-\lambda ){\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}$ for any
$0<\lambda <1$.
From the relation (
19), it results that
${\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}\geqslant {\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}$ for any
$p\in [1,\infty )$, and consequently,
$1-{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}\leqslant 1-{\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}$ or
$\lambda (1-{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}})\leqslant \lambda (1-{\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}})$ for any
$0<\lambda <1$.
Taking all the above relations into account, we get
which implies that
(Sufficiency) Let
Now, it is obvious that
$S{c_{F}}({B_{\text{p-ROFN}\hspace{2.5pt}}})-S{c_{F}}({A_{\text{p-ROFN}\hspace{2.5pt}}})\geqslant 0$ if
$(1-\lambda )({\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}-{\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}})+\lambda ({\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}-{\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}})\geqslant 0$ for any
$0<\lambda <1$, that is,
for any
$p\in [1,\infty )$. Consequently, we deduce that
${\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}\leqslant {\mu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}}$ and
${\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}\geqslant {\nu _{{B_{\text{p-ROFN}\hspace{2.5pt}}}}}$ which implies that
${A_{\text{p-ROFN}\hspace{2.5pt}}}{\preceq _{Y}}{B_{\text{p-ROFN}\hspace{2.5pt}}}$. □
Theorem 4.
For any p-ROFN
${A_{\text{p-ROFN}\hspace{2.5pt}}}=({\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}},{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}})$ and its corresponding complement ${A_{\text{p-ROFN}\hspace{2.5pt}}^{c}}=({\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}},{\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}})$, we deduce that
Proof.
From definition of the innovative score function
$S{c_{F}}$, we have
Now, from the relation
${\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}+{\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}^{p}}\leqslant 1$, we get that
□
As a result, it is interesting to remark that in the case where $\lambda =\frac{1}{2}$, it holds that $S{c_{F}}({A_{\text{p-ROFN}}})+S{c_{F}}({A_{\text{p-ROFN}}^{c}})=1$.
Theorem 5.
For any p-ROFN
${A_{\text{p-ROFN}\hspace{2.5pt}}}$, the innovative score function $S{c_{F}}({A_{\text{p-ROFN}\hspace{2.5pt}}})$ given by (
14)
is a monotonically increasing function of ${\mu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}$ and a monotonically decreasing function of ${\nu _{{A_{\text{p-ROFN}\hspace{2.5pt}}}}}$.
Proof.
The proof is immediately apparent from calculating the first partial derivatives of $S{c_{F}}({A_{\text{p-ROFN}}})={\mu _{{A_{\text{p-ROFN}}}}^{p}}+\lambda (1-{\mu _{{A_{\text{p-ROFN}}}}^{p}}-{\nu _{{A_{\text{p-ROFN}}}}^{p}})$ with respect to ${\mu _{{A_{\text{p-ROFN}}}}}$ and ${\nu _{{A_{\text{p-ROFN}}}}}$, where
\[ \frac{\partial S{c_{F}}({A_{\text{p-ROFN}}})}{\partial {\mu _{{A_{\text{p-ROFN}}}}}}=p(1-\lambda ){\mu _{{A_{\text{p-ROFN}}}}^{p-1}}\geqslant 0\hspace{1em}\text{and}\hspace{1em}\frac{\partial S{c_{F}}({A_{\text{p-ROFN}}})}{\partial {\nu _{{A_{\text{p-ROFN}}}}}}=-p\lambda {\nu _{{A_{\text{p-ROFN}}}}^{p-1}}\leqslant 0\]
for any $0<\lambda <1$. □
Theorem 6.
For any p-ROFN ${A_{\text{p-ROFN}}}$, the innovative score function $S{c_{F}}({A_{\text{p-ROFN}}})={\mu _{{A_{\text{p-ROFN}}}}^{p}}+\lambda (1-{\mu _{{A_{\text{p-ROFN}}}}^{p}}-{\nu _{{A_{\text{p-ROFN}}}}^{p}})$ is an increasing function of λ, where $0<\lambda <1$.
Proof.
The result now follows from the fact that $\frac{\partial S{c_{F}}({A_{\text{p-ROFN}}})}{\partial \lambda }=1-{\mu _{{A_{\text{p-ROFN}}}}^{p}}-{\nu _{{A_{\text{p-ROFN}}}}^{p}}\geqslant 0$. □
3.4 Comparison of the Proposed and Existing Score Functions
In this part of the contribution, we reconsider the comparison results given in Peng et al. (2018) together with the corresponding results of the proposed score function $S{c_{F}}$. All the results are summarized in Tables 1–6.
As expected from Remark 1, by increasing the value of λ from 0 to 1, the precedence of the p-ROFN with the larger membership degree ${\mu _{{\ast _{\text{p-ROFN}}}}^{p}}$ decreases, while the precedence of the p-ROFN with the smaller non-membership degree ${\nu _{{\ast _{\text{p-ROFN}}}}^{p}}$ (equivalently, the larger degree $(1-{\nu _{{\ast _{\text{p-ROFN}}}}^{p}})$) increases. This is exactly what we observe in Tables 2, 4 and 6. For instance, for the p-ROFNs in Tables 1 and 2, $A:={A_{\text{p-ROFN}}}=(\sqrt{0.22},0.7)$ and $B:={B_{\text{p-ROFN}}}=(0.3,0.6)$, we clearly see that ${\mu _{{A_{\text{p-ROFN}}}}^{p}}>{\mu _{{B_{\text{p-ROFN}}}}^{p}}$ and $(1-{\nu _{{A_{\text{p-ROFN}}}}^{p}})<(1-{\nu _{{B_{\text{p-ROFN}}}}^{p}})$. Therefore, we expect that as λ increases from 0 to 1, the precedence of ${A_{\text{p-ROFN}}}$ decreases and the precedence of ${B_{\text{p-ROFN}}}$ increases; this is indeed what Table 2 shows. The other results in Tables 3–6 are consistent with this observation.
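For readers who wish to reproduce Table 2, the following sketch (illustrative Python, not part of the original paper) evaluates $S{c_{F}}$ for the pair $A=(\sqrt{0.22},0.7)$, $B=(0.3,0.6)$ over a grid of λ values:

```python
def sc_f(mu, nu, p, lam):
    """Proposed score Sc_F = mu^p + lambda * (1 - mu^p - nu^p)."""
    return mu**p + lam * (1.0 - mu**p - nu**p)

A, B = (0.22 ** 0.5, 0.7), (0.3, 0.6)        # the pair of Tables 1-2
for p in (2, 3):
    for lam in [i / 10 for i in range(11)]:
        sa, sb = sc_f(*A, p, lam), sc_f(*B, p, lam)
        rank = "A>B" if sa > sb else ("A<B" if sa < sb else "A=B")
        print(f"p={p}, lambda={lam:.1f}: Sc_F(A)={sa:.4f}, Sc_F(B)={sb:.4f}, {rank}")
# for p = 2 the two scores essentially coincide at lambda = 0.5 and the ranking
# flips to A<B for larger lambda, in line with Table 2
```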
Table 1
The ranking results of the existing score functions ($A:={A_{\text{p-ROFN}}}=(\sqrt{0.22},0.7)$, $B:={B_{\text{p-ROFN}}}=(0.3,0.6)$).

| p | $Sc_{Y}$ (Yager, 2017) | Ranking | $Sc_{W}$ (Wei et al., 2018) | Ranking |
| --- | --- | --- | --- | --- |
| $p=1$ | $Sc_{Y}(A)=-0.2310$, $Sc_{Y}(B)=-0.3000$ | $A>B$ | $Sc_{W}(A)=0.3845$, $Sc_{W}(B)=0.3500$ | $A>B$ |
| $p=2$ | $Sc_{Y}(A)=-0.2700$, $Sc_{Y}(B)=-0.2700$ | $A=B$ | $Sc_{W}(A)=0.3650$, $Sc_{W}(B)=0.3650$ | $A=B$ |
| $p=3$ | $Sc_{Y}(A)=-0.2398$, $Sc_{Y}(B)=-0.1890$ | $A<B$ | $Sc_{W}(A)=0.3801$, $Sc_{W}(B)=0.4055$ | $A<B$ |

| p | $Sc_{LW}$ (Liu and Wang, 2018) | Ranking | $Sc_{PDG}$ (Peng et al., 2018) | Ranking |
| --- | --- | --- | --- | --- |
| $p=1$ | $Sc_{LW}(A)=-0.2310$, $Sc_{LW}(B)=-0.3000$ | $A>B$ | $Sc_{PDG}(A)=-0.2212$, $Sc_{PDG}(B)=-0.3074$ | $A>B$ |
| $p=2$ | $Sc_{LW}(A)=-0.2700$, $Sc_{LW}(B)=-0.2700$ | $A>B$ (using $Acc_{LW}$) | $Sc_{PDG}(A)=-0.2895$, $Sc_{PDG}(B)=-0.2247$ | $A<B$ |
| $p=3$ | $Sc_{LW}(A)=-0.2398$, $Sc_{LW}(B)=-0.1890$ | $A<B$ | $Sc_{PDG}(A)=-0.2729$, $Sc_{PDG}(B)=-0.3069$ | $A>B$ |
Table 2
The ranking results of the proposed score function $Sc_{F}$ corresponding to the values of λ from 0 to 1 ($A:={A_{\text{p-ROFN}}}=(\sqrt{0.22},0.7)$, $B:={B_{\text{p-ROFN}}}=(0.3,0.6)$; entries marked ∗ correspond to $p=1$ and are not reported).

| λ | p | $Sc_{F}(A)$ | $Sc_{F}(B)$ | Ranking |
| --- | --- | --- | --- | --- |
| $\lambda=0$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.2200 | 0.0900 | $A>B$ |
| | $p=3$ | 0.1032 | 0.0270 | $A>B$ |
| $\lambda=0.1$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.2490 | 0.1450 | $A>B$ |
| | $p=3$ | 0.1586 | 0.1027 | $A>B$ |
| $\lambda=0.2$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.2780 | 0.2000 | $A>B$ |
| | $p=3$ | 0.2140 | 0.1784 | $A>B$ |
| $\lambda=0.3$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.3070 | 0.2550 | $A>B$ |
| | $p=3$ | 0.2693 | 0.2541 | $A>B$ |
| $\lambda=0.4$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.3360 | 0.3100 | $A>B$ |
| | $p=3$ | 0.3247 | 0.3298 | $A<B$ |
| $\lambda=0.5$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.3651 | 0.3650 | $A>B$ |
| | $p=3$ | 0.3801 | 0.4055 | $A<B$ |
| $\lambda=0.6$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.3940 | 0.4200 | $A<B$ |
| | $p=3$ | 0.4355 | 0.4812 | $A<B$ |
| $\lambda=0.7$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.4230 | 0.4750 | $A<B$ |
| | $p=3$ | 0.4909 | 0.5569 | $A<B$ |
| $\lambda=0.8$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.4520 | 0.5300 | $A<B$ |
| | $p=3$ | 0.5462 | 0.6326 | $A<B$ |
| $\lambda=0.9$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.4810 | 0.5850 | $A<B$ |
| | $p=3$ | 0.6016 | 0.7083 | $A<B$ |
| $\lambda=1$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.5100 | 0.6400 | $A<B$ |
| | $p=3$ | 0.6570 | 0.7840 | $A<B$ |
Table 3
The ranking results of the existing score functions ($A:={A_{\text{p-ROFN}}}=(0.6,0.3)$, $B:={B_{\text{p-ROFN}}}=(0.5,0.2)$).

| p | $Sc_{Y}$ (Yager, 2017) | Ranking | $Sc_{W}$ (Wei et al., 2018) | Ranking |
| --- | --- | --- | --- | --- |
| $p=1$ | $Sc_{Y}(A)=0.3000$, $Sc_{Y}(B)=0.3000$ | $A=B$ | $Sc_{W}(A)=0.6500$, $Sc_{W}(B)=0.6500$ | $A=B$ |
| $p=2$ | $Sc_{Y}(A)=0.2700$, $Sc_{Y}(B)=0.2100$ | $A>B$ | $Sc_{W}(A)=0.6350$, $Sc_{W}(B)=0.6050$ | $A>B$ |
| $p=3$ | $Sc_{Y}(A)=0.1890$, $Sc_{Y}(B)=0.1170$ | $A>B$ | $Sc_{W}(A)=0.5945$, $Sc_{W}(B)=0.5585$ | $A>B$ |

| p | $Sc_{LW}$ (Liu and Wang, 2018) | Ranking | $Sc_{PDG}$ (Peng et al., 2018) | Ranking |
| --- | --- | --- | --- | --- |
| $p=1$ | $Sc_{LW}(A)=0.3000$, $Sc_{LW}(B)=0.3000$ | $A>B$ (using $Acc_{LW}$) | $Sc_{PDG}(A)=0.3074$, $Sc_{PDG}(B)=0.3233$ | $A<B$ |
| $p=2$ | $Sc_{LW}(A)=0.2700$, $Sc_{LW}(B)=0.2100$ | $A>B$ | $Sc_{PDG}(A)=0.3069$, $Sc_{PDG}(B)=0.2471$ | $A>B$ |
| $p=3$ | $Sc_{LW}(A)=0.1890$, $Sc_{LW}(B)=0.1170$ | $A>B$ | $Sc_{PDG}(A)=0.2247$, $Sc_{PDG}(B)=0.1423$ | $A>B$ |
Table 4
The ranking results of the proposed score function $Sc_{F}$ corresponding to the values of λ from 0 to 1 ($A:={A_{\text{p-ROFN}}}=(0.6,0.3)$, $B:={B_{\text{p-ROFN}}}=(0.5,0.2)$; entries marked ∗ correspond to $p=1$ and are not reported).

| λ | p | $Sc_{F}(A)$ | $Sc_{F}(B)$ | Ranking |
| --- | --- | --- | --- | --- |
| $\lambda=0$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.3600 | 0.2500 | $A>B$ |
| | $p=3$ | 0.2160 | 0.1250 | $A>B$ |
| $\lambda=0.1$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.4150 | 0.3210 | $A>B$ |
| | $p=3$ | 0.2917 | 0.2117 | $A>B$ |
| $\lambda=0.2$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.4700 | 0.3920 | $A>B$ |
| | $p=3$ | 0.3674 | 0.2984 | $A>B$ |
| $\lambda=0.3$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.5250 | 0.4630 | $A>B$ |
| | $p=3$ | 0.4431 | 0.3851 | $A>B$ |
| $\lambda=0.4$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.5800 | 0.5340 | $A>B$ |
| | $p=3$ | 0.5188 | 0.4718 | $A>B$ |
| $\lambda=0.5$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.6350 | 0.6050 | $A>B$ |
| | $p=3$ | 0.5945 | 0.5585 | $A>B$ |
| $\lambda=0.6$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.6900 | 0.6760 | $A>B$ |
| | $p=3$ | 0.6702 | 0.6452 | $A>B$ |
| $\lambda=0.7$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.7450 | 0.7470 | $A<B$ |
| | $p=3$ | 0.7459 | 0.7319 | $A>B$ |
| $\lambda=0.8$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.8000 | 0.8180 | $A<B$ |
| | $p=3$ | 0.8216 | 0.8186 | $A>B$ |
| $\lambda=0.9$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.8550 | 0.8890 | $A<B$ |
| | $p=3$ | 0.8973 | 0.9053 | $A<B$ |
| $\lambda=1$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.9100 | 0.9600 | $A<B$ |
| | $p=3$ | 0.9730 | 0.9920 | $A<B$ |
Table 5
The ranking results of the existing score functions ($A:={A_{\text{p-ROFN}}}=(0.4,0.1)$, $B:={B_{\text{p-ROFN}}}=(\sqrt{0.1501},0.01)$).

| p | $Sc_{Y}$ (Yager, 2017) | Ranking | $Sc_{W}$ (Wei et al., 2018) | Ranking |
| --- | --- | --- | --- | --- |
| $p=1$ | $Sc_{Y}(A)=0.3000$, $Sc_{Y}(B)=0.3774$ | $A<B$ | $Sc_{W}(A)=0.6500$, $Sc_{W}(B)=0.6887$ | $A<B$ |
| $p=2$ | $Sc_{Y}(A)=0.1500$, $Sc_{Y}(B)=0.1500$ | $A=B$ | $Sc_{W}(A)=0.5750$, $Sc_{W}(B)=0.5750$ | $A=B$ |
| $p=3$ | $Sc_{Y}(A)=0.0630$, $Sc_{Y}(B)=0.0582$ | $A>B$ | $Sc_{W}(A)=0.5315$, $Sc_{W}(B)=0.5291$ | $A>B$ |

| p | $Sc_{LW}$ (Liu and Wang, 2018) | Ranking | $Sc_{PDG}$ (Peng et al., 2018) | Ranking |
| --- | --- | --- | --- | --- |
| $p=1$ | $Sc_{LW}(A)=0.3000$, $Sc_{LW}(B)=0.3774$ | $A<B$ | $Sc_{PDG}(A)=0.3372$, $Sc_{PDG}(B)=0.4336$ | $A<B$ |
| $p=2$ | $Sc_{LW}(A)=0.1500$, $Sc_{LW}(B)=0.1500$ | $A>B$ (using $Acc_{LW}$) | $Sc_{PDG}(A)=0.1811$, $Sc_{PDG}(B)=0.1818$ | $A<B$ |
| $p=3$ | $Sc_{LW}(A)=0.0630$, $Sc_{LW}(B)=0.0582$ | $A>B$ | $Sc_{PDG}(A)=0.0777$, $Sc_{PDG}(B)=0.0718$ | $A>B$ |
Table 6
The ranking results of the proposed score function $Sc_{F}$ corresponding to the values of λ from 0 to 1 ($A:={A_{\text{p-ROFN}}}=(0.4,0.1)$, $B:={B_{\text{p-ROFN}}}=(\sqrt{0.1501},0.01)$; entries marked ∗ correspond to $p=1$ and are not reported).

| λ | p | $Sc_{F}(A)$ | $Sc_{F}(B)$ | Ranking |
| --- | --- | --- | --- | --- |
| $\lambda=0$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.1600 | 0.1501 | $A>B$ |
| | $p=3$ | 0.0640 | 0.0582 | $A>B$ |
| $\lambda=0.1$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.2430 | 0.2351 | $A>B$ |
| | $p=3$ | 0.1575 | 0.1523 | $A>B$ |
| $\lambda=0.2$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.3260 | 0.3201 | $A>B$ |
| | $p=3$ | 0.2510 | 0.2465 | $A>B$ |
| $\lambda=0.3$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.4090 | 0.4050 | $A>B$ |
| | $p=3$ | 0.3445 | 0.3407 | $A>B$ |
| $\lambda=0.4$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.4920 | 0.4900 | $A>B$ |
| | $p=3$ | 0.4380 | 0.4349 | $A>B$ |
| $\lambda=0.5$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.5751 | 0.5750 | $A>B$ |
| | $p=3$ | 0.5315 | 0.5291 | $A>B$ |
| $\lambda=0.6$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.6580 | 0.6600 | $A<B$ |
| | $p=3$ | 0.6250 | 0.6233 | $A>B$ |
| $\lambda=0.7$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.7410 | 0.7450 | $A<B$ |
| | $p=3$ | 0.7185 | 0.7174 | $A>B$ |
| $\lambda=0.8$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.8240 | 0.8299 | $A<B$ |
| | $p=3$ | 0.8120 | 0.8116 | $A>B$ |
| $\lambda=0.9$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.9070 | 0.9149 | $A<B$ |
| | $p=3$ | 0.9055 | 0.9058 | $A<B$ |
| $\lambda=1$ | $p=1$ | ∗ | ∗ | ∗ |
| | $p=2$ | 0.9900 | 0.9999 | $A<B$ |
| | $p=3$ | 0.9990 | 1.0000 | $A<B$ |
The findings from Tables
1–
6 are summarized below:
-
• The first collection of p-ROFNs demonstrates that ${A_{\text{p-ROFN}}}=(\sqrt{0.22},0.7)$ is not a $p(=1)$-ROFN (because $\sqrt{0.22}+0.7>1$, which means that ${A_{\text{p-ROFN}}}=(\sqrt{0.22},0.7)$ is not an IFN), and clearly no score function should return a value in this case. Nevertheless, all the existing score-based comparison techniques of $S{c_{Y}}$ (Yager, 2017), $S{c_{W}}$ (Wei et al., 2018), $S{c_{\mathit{LW}}}$ (Liu and Wang, 2018) and $S{c_{\mathit{PDG}}}$ (Peng et al., 2018) do return values, which is obviously not reasonable. Such a case confirms the superiority of the proposed score function over the existing ones: the proposed score function effectively resolves this deficiency of all the above-mentioned score-based comparison techniques.
-
• In contrast to the existing score-based comparison techniques $S{c_{Y}}$ (Yager, 2017), $S{c_{W}}$ (Wei et al., 2018), $S{c_{\mathit{LW}}}$ (Liu and Wang, 2018) and $S{c_{\mathit{PDG}}}$ (Peng et al., 2018), the proposed score function $S{c_{F}}$ enables the decision maker to gain greater insight and to fine-tune the selection process by choosing an appropriate value of the attitudinal character $\lambda \in [0,1]$.
In summary, the proposed score function $S{c_{F}}$ is more reliable and preferable than the other existing score functions, which in some situations are unable to discriminate reasonably between pairs of p-ROFNs.
4 MCDM Method Based on the Score Function of p-ROFNs
Multiple criteria decision making (MCDM) is an active research area, and there exists a large number of studies (Farhadinia, 2014, 2016a, 2016b; Farhadinia and Herrera-Viedma, 2018; Farhadinia and Xu, 2017) in which the decision maker ranks a list of alternatives in accordance with a given set of criteria.
In this part of the manuscript, we consider an MCDM problem in which the decision is made by means of a ranking procedure for p-ROFNs.
Suppose that
$X=\{{x_{1}},{x_{2}},\dots ,{x_{m}}\}$ describes a set of alternatives, and the set of criteria is in the form of
$C=\{{c_{1}},{c_{2}},\dots ,{c_{n}}\}$. Furthermore, we denote the associated weight vector of criteria by
$w=({w_{1}},{w_{2}},\dots ,{w_{n}})$ such that
$0\leqslant {w_{j}}\leqslant 1$ $(j=1,2,\dots ,n)$ with the property
${\textstyle\sum _{j=1}^{n}}{w_{j}}=1$. Assume that a decision maker group is organized to evaluate the characteristics of each alternative with respect to each criterion with the help of p-ROFN concept. In this regard, the p-rung orthopair fuzzy decision matrix will be
where
$0\leqslant {\mu _{{D_{\text{p-ROFN}\hspace{2.5pt}}^{(ij)}}}^{p}}+{\nu _{{D_{\text{p-ROFN}\hspace{2.5pt}}^{(ij)}}}^{p}}\leqslant 1$ for any
$p\in [1,\infty )$.
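Before presenting the algorithm, the following sketch (illustrative Python, not from the original paper) shows how a p-ROFN decision matrix can be stored and turned into the score matrix ${S_{D}}={[S{c_{F}}({D_{\text{p-ROFN}}^{(ij)}})]_{m\times n}}$ used in the algorithm below; the entries and weights are placeholders, not the data of Example 1.

```python
# Illustrative only: placeholder p-ROFN entries and weights (not the data of Example 1).
p, lam = 2, 0.5

def sc_f(mu, nu):
    """Proposed score Sc_F = mu^p + lambda * (1 - mu^p - nu^p)."""
    return mu**p + lam * (1.0 - mu**p - nu**p)

D = [  # rows: alternatives x_i; columns: criteria c_j; entries: (mu, nu) with mu^p + nu^p <= 1
    [(0.7, 0.3), (0.5, 0.4), (0.6, 0.2)],
    [(0.8, 0.1), (0.4, 0.6), (0.9, 0.3)],
]
w = [0.40, 0.35, 0.25]   # criterion weights summing to 1

S_D = [[sc_f(mu, nu) for (mu, nu) in row] for row in D]
for i, row in enumerate(S_D, start=1):
    print(f"x_{i}:", [round(s, 4) for s in row])
```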
Now, with the help of the VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) technique (Liou et al., 2011) and the Technique for Order of Preference by Similarity to the Ideal Solution (TOPSIS) (Lai et al., 1994), together with the innovative score function of p-ROFNs, we are able to describe an MCDM algorithm as follows:
Determine the best and the worst values with respect to all criteria which are denoted respectively by the p-ROFNs ${f_{j}^{+}}$ and ${f_{j}^{-}}$:
(For the benefit criteria)
Construct the score matrix ${S_{D}}={[Sc({D_{\text{p-ROFN}}^{(ij)}})]_{m\times n}}$, and define the following n-dimensional vectors:
Construct the following normalized nearest-best and farthest-worst solution matrices:
where
${S_{D}^{\mathit{near}}}(ij),{S_{D}^{\mathit{far}}}(ij)\in [0,1]$, and moreover,
$\operatorname{MIN}(x)=\min \{x,\frac{1}{x}\}$.
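A tiny sketch of the transformation MIN (illustrative Python, not part of the original paper); the handling of the boundary case $x=0$ is our assumption, as the text does not specify it.

```python
def MIN(x: float) -> float:
    """MIN(x) = min{x, 1/x}; for x > 0 the result always lies in (0, 1], which is
    what keeps the normalized solution matrices inside [0, 1]."""
    if x <= 0:            # boundary handling assumed here; the paper does not spell it out
        return 0.0
    return min(x, 1.0 / x)

print(MIN(0.25), MIN(4.0))   # 0.25 and 0.25: reciprocal ratios are mapped to the same value
```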
Keeping the above normalized nearest-best and farthest-worst solution matrices together with the weighting vector $w=({w_{1}},{w_{2}},\dots ,{w_{n}})$ in mind, we are able to calculate the group utility and the individual regret for each alternative ${x_{i}}$ $(i=1,2,\dots ,m)$ in both best and worst cases:
Construct the normalized nearest-best group utility and nearest-best individual regret values, respectively, of alternative
${x_{i}}$ $(i=1,2,\dots ,m)$:
where
and construct the normalized farthest-worst group utility and farthest-worst individual regret values, respectively, of alternative
${x_{i}}$ $(i=1,2,\dots ,m)$:
where
in which
$\operatorname{MIN}(x)=\min \{x,\frac{1}{x}\}$.
Construct the nearest-best and farthest-worst score values of alternative
${x_{i}}$ $(i=1,2,\dots ,m)$, respectively, as follows:
where
${\overline{\overline{C}}_{i}},{\underline{\underline{C}}_{i}}\in [0,1]$ for any
$1\leqslant i\leqslant m$, and
α indicates the strategy of
maximum group utility while
$(1-\alpha )$ indicates the strategy of
minimum individual regret. Here, we suppose that
$\alpha =0.5$.
Compute the relative closeness degree of each alternative
${x_{i}}$ $(i=1,2,\dots ,m)$ in the form of
where the smaller value of the relative closeness degree indicates the better preference order of alternative
${x_{i}}$.
Before going into more detail, we summarize the superiorities of the above-mentioned MCDM algorithm over the existing approaches based solely on the VIKOR or TOPSIS technique:
-
* The proposed MCDM algorithm implements the innovative score function $S{c_{F}}$ of p-ROFNs, whose results are more reasonable than those of the existing ones;
-
* By employing the new transformation function MIN, we are able to avoid the division-by-zero problem which occurs in the traditional version of the VIKOR technique;
-
* The proposed MCDM algorithm ranks the alternatives based on the combination of the VIKOR and TOPSIS outputs.
Example 1 (Adopted from Chen et al., 2016).
We investigate here an MCDM problem that deals with supplier selection in supply chain management with p-ROFN information, in which five supplier alternatives ${x_{i}}$ ($i=1,2,\dots ,5$) are assessed with respect to four benefit criteria ${c_{1}}$: Quality, ${c_{2}}$: Service, ${c_{3}}$: Delivery and ${c_{4}}$: Price. Suppose that the weight vector of the criteria is $w=({w_{1}}=0.25,{w_{2}}=0.40,{w_{3}}=0.20,{w_{4}}=0.15)$.
We assume that the decision values are described by p-ROFNs in the form of the decision matrix:
Needless to say, the above decision matrix is not in the form of an intuitionistic fuzzy matrix as considered in Chen et al. (2016), because the entries $({\mu _{{D_{\text{p-ROFN}}^{(21)}}}},{\nu _{{D_{\text{p-ROFN}}^{(21)}}}})$ and $({\mu _{{D_{\text{p-ROFN}}^{(41)}}}},{\nu _{{D_{\text{p-ROFN}}^{(41)}}}})$ are not IFNs.
Step 1. We determine the best and the worst values in correspondence with all benefit criteria as the following:
Step 2. By taking the score functions $S{c_{YLW}}$, $S{c_{\mathit{PDG}}}$ and $S{c_{F}}$, given respectively by (10), (12) and (14), into account, we are able to construct the score matrices
for
$p=2$;
for
$p=3$;
for
$p=2$;
for
$p=3$;
for
$p=2$ $(\lambda =0.5)$;
for
$p=3$ $(\lambda =0.5)$.
In order to save space, we do not state here the calculation of ${S_{D-F}}={[S{c_{F}}({D_{\text{p-ROFN}}^{(ij)}})]_{5\times 4}}$ for $\lambda =0,1$ (as sample values of $\lambda \in [0,1]$); only the corresponding results will be reported in Step 5. In this way, we obtain:
(For
$p=2$)
(For
$p=3$)
(For
$p=2$)
(For
$p=3$)
(For
$p=2$ $(\lambda =0.5)$)
(For
$p=3$ $(\lambda =0.5)$)
Hereafter, we do not give the detailed computations for $p=3$; only those for $p=2$ are presented.
Step 3. We construct the normalized nearest-best and farthest-worst solution matrices as follows:
for
$p=2$,
for
$p=2$,
for
$p=2$ $(\lambda =0.5)$, and
for
$p=2$,
for
$p=2$,
for
$p=2$ $(\lambda =0.5)$.
Step 4. If we keep the above normalized nearest-best and farthest-worst solution matrices together with the weighting vector $w=({w_{1}}=0.25,{w_{2}}=0.40,{w_{3}}=0.20,{w_{4}}=0.15)$ in mind, then we are able to calculate the group utility and the individual regret for each alternative ${x_{i}}$ in both the best and the worst cases:
(Best case: for
$p=2$)
(Worst case: for
$p=2$)
In this case, we omit the calculations of ${S_{i}^{\mathit{worst}}}$ and ${R_{i}^{\mathit{worst}}}$ because they are similar to those given above.
Step 5. We construct the normalized nearest-best group utility and nearest-best individual regret values, respectively, of alternative
${x_{i}}$ as the following: (for
$p=2$)
The calculations of the normalized farthest-worst group utility values
${\underline{\underline{S}}_{\hspace{0.1667em}i}}$ and farthest-worst individual regret values
${\underline{\underline{R}}_{\hspace{0.1667em}i}}$ are omitted due to lack of space.
On the basis of Step 6 and Step 7, we can determine the relative closeness degree of each alternative ${x_{i}}$ for both cases $p=2,3$:
(For
$p=2$)
(For
$p=3$)
where the smaller value of the relative closeness degree indicates the better preference order of alternative
${x_{i}}$. In this regard, the preference orders of the alternatives are given in Table
7.
Table 7
Rankings of alternatives for different score-based MCDM techniques under the p-ROFN environment.

| Score function | p | The final ranking |
| --- | --- | --- |
| $Sc_{YLW}$ (given by Yager, 2017; Wei et al., 2018; Liu and Wang, 2018) | $p=2$ | ${x_{2}}>{x_{4}}>{x_{5}}>{x_{3}}>{x_{1}}$ |
| | $p=3$ | ${x_{2}}>{x_{4}}>{x_{5}}>{x_{3}}>{x_{1}}$ |
| $Sc_{PDG}$ (given by Peng et al., 2018) | $p=2$ | ${x_{2}}>{x_{4}}>{x_{5}}>{x_{3}}>{x_{1}}$ |
| | $p=3$ | ${x_{2}}>{x_{4}}>{x_{5}}>{x_{3}}>{x_{1}}$ |
| Proposed $Sc_{F}$ for $\lambda=0$ | $p=2$ | ${x_{2}}>{x_{4}}>{x_{5}}>{x_{3}}>{x_{1}}$ |
| | $p=3$ | ${x_{2}}>{x_{4}}>{x_{5}}>{x_{3}}>{x_{1}}$ |
| Proposed $Sc_{F}$ for $\lambda=0.5$ | $p=2$ | ${x_{2}}>{x_{4}}>{x_{5}}>{x_{3}}>{x_{1}}$ |
| | $p=3$ | ${x_{2}}>{x_{4}}>{x_{5}}>{x_{3}}>{x_{1}}$ |
| Proposed $Sc_{F}$ for $\lambda=1$ | $p=2$ | ${x_{2}}>{x_{5}}>{x_{3}}>{x_{4}}>{x_{1}}$ |
| | $p=3$ | ${x_{2}}>{x_{5}}>{x_{3}}>{x_{4}}>{x_{1}}$ |
From Table 7, we observe that the ranking orders of the suppliers obtained with the existing score functions of Yager (2017), Wei et al. (2018), Liu and Wang (2018) and Peng et al. (2018) coincide with those of the proposed score function $S{c_{F}}$ for the values $\lambda =0,0.5$, and differ somewhat for $\lambda =1$. However, a decision maker may select diverse values of the parameter λ in accordance with his/her preferences and attitudes in actual decision making cases. Therefore, the application range of the proposed non-algorithmic ranking technique for p-ROFNs is wider than that of the existing ones, and it can flexibly handle more general decision information than the algorithmic ranking techniques. Moreover, by taking the new transformation function MIN into account, we avoid the division-by-zero problem which may occur in the traditional version of the VIKOR technique. These superiorities of the proposed score-based MCDM algorithm over the existing approaches indicate that it is more suitable for actual situations.