Informatica

OWA Operators in Large-Scale Group Decision-Making: An Analysis Based on Comprehensive Minimum Cost Consensus
Volume 36, Issue 3 (2025), pp. 557–588
Diego García-Zamora   Bapi Dutta   Álvaro Labella   Luis Martínez  

https://doi.org/10.15388/25-INFOR599
Pub. online: 5 September 2025      Type: Research Article      Open Access

Received: 1 September 2024
Accepted: 1 August 2025
Published: 5 September 2025

Abstract

Ordered Weighted Averaging (OWA) operators have been widely applied in Group Decision-Making (GDM) to fuse expert opinions. However, their effectiveness depends on the selection of an appropriate weighting vector, which remains a challenge due to limited research on its impact on Consensus Reaching Processes (CRPs). This paper addresses this gap by analysing the influence of different OWA weighting techniques on consensus formation, particularly in large-scale GDM (LSGDM) scenarios. To do so, we propose a Comprehensive Minimum Cost Consensus (CMCC) model that integrates OWA operators with classical consensus measures to enhance the decision-making process. Since existing OWA-based Minimum Cost Consensus (MCC) models struggle with computational complexity, we introduce linearized versions of the OWA-based CMCC model tailored for LSGDM applications. Furthermore, we conduct a detailed comparison of various OWA weight allocation methods, assessing their impact on consensus quality under different levels of expert participation and opinion polarization. Additionally, our linearized formulations significantly reduce the computational cost for OWA-based CMCC models, improving their scalability.

1 Introduction

Group Decision-Making (GDM) methods have demonstrated significant potential in addressing decision problems by incorporating qualitative assessments based on human expertise (Chen et al., 2021; García-Zamora et al., 2024). In recent years, societal demands coupled with the development of new technologies have led to complex decision-making situations involving hundreds of experts. These scenarios, commonly referred to as Large-Scale Group Decision-Making (LSGDM) problems, have garnered considerable attention in the research community (Wang et al., 2024).
In GDM settings, disagreements among decision-makers are inevitable (Butler and Rothstein, 2006). To mitigate these conflicts, Consensus Reaching Processes (CRPs) are employed to encourage experts to adjust their initial opinions, ultimately achieving a collectively agreed-upon solution (Guo et al., 2024). The central task in deriving this collective solution is the fusion of information, which is typically accomplished through aggregation operators. Of course, such aggregation is performed under consensus assumptions, which are usually modelled through a consensus measure.
Among the various aggregation operators available (Beliakov et al., 2016), the Ordered Weighted Averaging (OWA) operator, introduced by Yager, stands out due to its ability to aggregate information based on the order of the input data (Yager, 1988). The OWA operator has been extensively applied to aggregate expert opinions in numerous GDM problems (Sirbiladze, 2021a, 2021b; Xu et al., 2021; Liu et al., 2023). For instance, the model designed by Herrera-Viedma et al. (2002), one of the earliest feedback mechanism-based consensus approaches in the literature, utilizes OWA operators within a framework that accommodates various preference structures, although it does not address how to choose the weights for the OWA operator. OWA operators have also been integrated into the widely used Minimum Cost Consensus (MCC) models proposed by Ben-Arieh and Easton (2007). Concretely, Zhang et al. (2011) generalized MCC models by incorporating different aggregation operators, including the OWA operator, but their consensus model is only applicable to particular OWA weight configurations. To address this weight restriction, Zhang et al. (2013) proposed a Binary Linear Programming (BLP) formulation for MCC under OWA constraints; however, the applicability of this model in LSGDM scenarios was not explored. Xu et al. (2021) employed the OWA-based MCC model to explore how decision rules, encoded through specific weights, and non-cooperative behaviour, represented by the cost of changing opinions, affect the consensus resources captured by the MCC objective. More recently, MCC models featuring quadratic cost functions and OWA operators have been investigated by Zhang et al. (2022), without exploring the impact of different OWA weighting mechanisms or their applicability to LSGDM. Additionally, Qu et al. (2023) examined MCC models in which the cost associated with opinion modifications is uncertain, using a robust optimization approach for a small GDM problem.
Although OWA operators have been extensively utilized for aggregating expert opinions, their impact on consensus formation under different weighting schemes remains insufficiently explored. Without a thorough analysis, it is difficult to determine the most effective mechanisms for generating OWA weights in different situations, particularly in scenarios where expert opinions are polarized. Since MCC models are automatic, they provide the experts’ modified preferences avoiding the potential biases introduced by feedback mechanisms, which makes them an appropriate tool to study the impact of OWA weights on the CRP. However, the state-of-the-art OWA-based MCC models cannot be used for this purpose. On the one hand, the existing OWA-based MCC approaches do not consider the consensus measures that are used in classical CRPs. On the other hand, extant proposals are not suitable for managing LSGDM scenarios because the non-linearity derived from the OWA operator implies complex optimization problems that existing solvers may not be able to address in a reasonable time.
Our study bridges these gaps by introducing an OWA-based Comprehensive MCC (CMCC) model, which integrates OWA-based aggregation with classical consensus measures, making it more suitable for LSGDM contexts. Additionally, we reformulate the optimization framework to enhance computational feasibility, enabling the model to scale efficiently. Using such reformulations, the resulting OWA-CMCC models enable the study of the impact of the different weighting mechanisms for consensus formation under different scenarios that consider different numbers of experts and opinion polarization.
Therefore, this paper has a twofold objective: (i) to introduce OWA-MCC, which accounts for classical consensus measures and is applicable in LSGDM, and (ii) to investigate the role of OWA operators in CRPs. Specifically, we address the following research questions:
  • RQ1: How to integrate classical consensus measures into OWA-MCC models that can be applied in LSGDM?
  • RQ2: How do different weighting techniques affect a CRP?
  • RQ3: How do different OWA weights perform in LSGDM?
In response, we first develop a Comprehensive Minimum Cost Consensus (CMCC) (Labella et al., 2020) model based on OWA operators, which adjusts experts’ opinions to be as close as possible to their original preferences while satisfying a predefined consensus condition, as established by classical consensus measures (García-Zamora et al., 2023). This model ensures an objective examination of OWA operators in consensus processes by eliminating potential biases introduced by feedback mechanisms (Herrera-Viedma et al., 2002). Subsequently, we introduce several alternative versions of the (OWA-CMCC) model, incorporating constraints that offer specific performance advantages. A detailed analysis of the computational cost associated with each of these alternative versions is then conducted. Finally, the (OWA-CMCC) models are applied to assess the effects of different weighting techniques on the consensus process across varying scales and scenarios, defined by the number of experts involved and the polarization of their opinions.
The remainder of this paper is structured as follows. Section 2 presents the necessary background for understanding the proposed approach. Section 3 introduces various formulations of the (OWA-CMCC) model, followed by an analysis of their computational costs. In Section 4, we evaluate the performance of the OWA operator under different consensus scenarios, considering factors such as polarized opinions, the number of experts involved, and the computation of OWA weights. Finally, Section 5 summarizes the key findings and suggests future research directions.

2 Preliminaries

In this section, the main notions regarding GDM, consensus, the OWA operator, and CMCC models are presented.

2.1 Group Decision-Making and Consensus

The complexity of modern decision-making scenarios necessitates the incorporation of multiple perspectives, which is achieved through GDM. GDM involves a panel of experts collaborating to select the most suitable alternative from a set of potential solutions. Typically, a GDM problem is defined by a finite set of alternatives or possible solutions $X=\{{x_{1}},{x_{2}},\dots ,{x_{n}}\}$, with $n\geqslant 2$, and a group of experts $E=\{{e_{1}},{e_{2}},\dots ,{e_{m}}\}$, with $m\geqslant 2$, who evaluate these alternatives.
Fig. 1. GDM resolution scheme.
Traditionally, GDM problems are resolved by aggregating the preferences of the experts regarding the alternatives, and then ranking these alternatives based on the aggregated preferences (see Fig. 1). While this approach is widely used, it overlooks a critical issue: conflicts can arise during the decision-making process when multiple experts are involved. Directly aggregating preferences without addressing these conflicts can lead to situations where some experts may disagree with the final decision, undermining the effectiveness of the GDM process (García-Zamora et al., 2025).
Fig. 2. Feedback vs. Automatic CRP.
To mitigate such conflicts, CRPs are employed. A CRP is an iterative process designed to bring experts’ opinions closer together, fostering a consensual solution before a final decision is made. Typically, a moderator identifies the most significant disagreements among the experts and provides recommendations on how they can adjust their initial opinions to improve consensus (Herrera-Viedma et al., 2002). In some cases, the role of the moderator is replaced by an automatic mechanism that detects conflicts and adjusts experts’ opinions without seeking their approval. The classical CRP framework involves the following steps (see Fig. 2):
  • 1. Gathering Preferences: The assessments from experts regarding the alternatives are collected and modelled using preference structures.
  • 2. Consensus Measurement: The degree of consensus within the group, denoted as μ, is calculated using an appropriate consensus measure.
  • 3. Consensus Control: The calculated consensus level μ is compared to a predefined threshold ${\mu _{0}}$. The CRP terminates either when ${\mu _{0}}$ is achieved or when the maximum number of iterations (${r_{\max }}$) is reached. If the consensus level is insufficient, modifications to the experts’ preferences are required.
  • 4. Recommendation Process: Experts are advised to adjust their assessments to enhance consensus within the group. Depending on the type of CRP, this step can be implemented using one of two approaches:
    • • Feedback Mechanism: The moderator identifies the areas of disagreement and recommends changes to the experts, who can then choose to accept or reject these suggestions. Once feedback is provided, a new discussion round begins, repeating the process.
    • • Automatic Changes: Instead of a moderator, an automatic system detects disagreements and adjusts experts’ assessments without seeking their approval.
CRPs play a crucial role in addressing conflicts in GDM, ensuring that the final decision reflects a higher level of agreement among the experts, and thereby increasing the likelihood of a more accepted and effective outcome.

2.2 OWA Operators in Consensus

The OWA operator aggregates input values by ordering them first and then assigning weights based on this order (Beliakov et al., 2016). Formally, the OWA operator is defined as follows:
Definition 1 (OWA operator, Yager, 1993, 1996).
Let $\omega \in {[0,1]^{n}}$ be a weighting vector such that ${\textstyle\sum _{i=1}^{n}}{\omega _{i}}=1$. The OWA operator ${\Psi _{\omega }}:{[0,1]^{n}}\to [0,1]$ associated with ω is defined by:
\[ {\Psi _{\omega }}(x)={\sum \limits_{k=1}^{n}}{\omega _{k}}{x_{\sigma (k)}},\hspace{1em}\text{for}\hspace{2.5pt}x\in {[0,1]^{n}},\]
where σ is a permutation of the n-tuple $(1,2,\dots ,n)$ such that ${x_{\sigma (1)}}\geqslant {x_{\sigma (2)}}\geqslant \cdots \geqslant {x_{\sigma (n)}}$.
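As an illustration (ours, not part of the paper), Definition 1 translates directly into a few lines of Python: the descending sort implements the permutation σ, and the weighted sum applies the weighting vector.

```python
def owa(x, weights):
    """Ordered Weighted Averaging: sort the inputs in descending order,
    then take the weighted sum with the given weighting vector."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    ordered = sorted(x, reverse=True)  # x_sigma(1) >= x_sigma(2) >= ... >= x_sigma(n)
    return sum(w * v for w, v in zip(weights, ordered))
```

For example, `owa([0.2, 0.8, 0.5], [0.5, 0.3, 0.2])` orders the inputs as (0.8, 0.5, 0.2) before weighting, so permuting the inputs does not change the result.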
To determine the appropriate weights, Yager (1996) proposed using fuzzy linguistic quantifiers, which are functions that express fuzzy terms like “most” or “at least half”. Specifically, for a Regular Increasing Monotonous Quantifier (RIMQ), an increasing function $Q:[0,1]\to [0,1]$ satisfying $Q(0)=0$ and $Q(1)=1$, the OWA weights for aggregating $n\in \mathbb{N}$ elements are calculated as:
\[ {\omega _{k}}=Q\bigg(\frac{k}{n}\bigg)-Q\bigg(\frac{k-1}{n}\bigg),\hspace{1em}\text{for}\hspace{2.5pt}k=1,2,\dots ,n.\]
In CRP literature, this approach to generating OWA weights has been widely applied (Herrera-Viedma et al., 2002; Palomares et al., 2014). One commonly used RIMQ is the Linear Quantifier (LQ) ${Q_{\alpha ,\beta }}:[0,1]\to [0,1]$, defined by:
\[ {Q_{\alpha ,\beta }}(x)=\left\{\begin{array}{l@{\hskip4.0pt}l}0,\hspace{1em}& \text{if}\hspace{2.5pt}0\leqslant x\lt \alpha ,\\ {} \frac{x-\alpha }{\beta -\alpha },\hspace{1em}& \text{if}\hspace{2.5pt}\alpha \leqslant x\leqslant \beta ,\\ {} 1,\hspace{1em}& \text{if}\hspace{2.5pt}x\geqslant \beta ,\end{array}\right.\]
which allows adjustments to the weight assigned to intermediate information by tuning the parameters α and β.
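A sketch of this quantifier-guided weight generation (our illustrative code; the function names are not from the paper). Note that for some parameter choices the first weights vanish, so the largest ordered inputs are ignored:

```python
def linear_quantifier(alpha, beta):
    """Linear RIM quantifier Q_{alpha,beta}: 0 below alpha, a linear ramp
    on [alpha, beta], and 1 above beta."""
    def Q(x):
        if x < alpha:
            return 0.0
        if x <= beta:
            return (x - alpha) / (beta - alpha)
        return 1.0
    return Q

def owa_weights(Q, n):
    """Yager's quantifier-based weights: w_k = Q(k/n) - Q((k-1)/n)."""
    return [Q(k / n) - Q((k - 1) / n) for k in range(1, n + 1)]
```

With `alpha = 0.3`, `beta = 0.8`, and `n = 4`, the first weight is exactly 0, i.e. the largest input is discarded by the aggregation.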
Despite the popularity of the LQ in CRP literature (Herrera-Viedma et al., 2002; Palomares et al., 2014), García-Zamora et al. (2022) identified several limitations when addressing consensus problems, including:
  • Ignoring the most extreme information in the aggregation process, leading to low entropy measures.
  • Producing biased results (orness measure different from 0.5) if $\alpha +\beta \ne 1$.
To overcome these drawbacks, García-Zamora et al. (2022) introduced Extreme Values Reductions (EVRs) as an alternative method for computing OWA weights.
Definition 2 (Extreme Values Reduction, García-Zamora et al., 2022).
Let $\hat{D}:[0,1]\to [0,1]$ be a function satisfying the following conditions:
  • 1. $\hat{D}$ is an automorphism on the interval $[0,1]$.
  • 2. $\hat{D}$ is continuously differentiable (class ${\mathcal{C}^{1}}$).
  • 3. $\hat{D}$ satisfies $\hat{D}(x)=1-\hat{D}(1-x)$ for all $x\in [0,1]$.
  • 4. ${\hat{D}^{\prime }}(0)\lt 1$ and ${\hat{D}^{\prime }}(1)\lt 1$.
  • 5. $\hat{D}$ is convex near 0 and concave near 1.
Then, $\hat{D}$ is called an Extreme Values Reduction (EVR) on the interval $[0,1]$.
As illustrated in Fig. 3, EVRs remap points within the interval $[0,1]$, increasing the distances between intermediate values and reducing the distances between extreme values. Furthermore, the distances among extreme values shrink progressively as they approach 0 or 1. A complete discussion regarding the rationale of the properties of EVRs may be found in García-Zamora et al. (2021).
Fig. 3. Sketch of an EVR.
Examples of EVRs include the sinusoidal family ${\hat{s}_{\alpha }}:[0,1]\to [0,1]$, where $\alpha \in (0,\frac{1}{2\pi }]$, defined as:
\[ {\hat{s}_{\alpha }}(x)=x+\alpha \cdot \sin (2\pi x-\pi ),\hspace{1em}\text{for}\hspace{2.5pt}x\in [0,1],\]
and polynomial functions ${p_{\alpha }}:[0,1]\to [0,1]$, where $\alpha \in (0,1]$, defined as:
\[ {p_{\alpha }}(x)=(1-\alpha )x+3\alpha {x^{2}}-2\alpha {x^{3}},\hspace{1em}\text{for}\hspace{2.5pt}x\in [0,1].\]
The EVR-OWA operator builds on the classical OWA operator by using EVRs to determine the weights for aggregation (García-Zamora et al., 2022).
Definition 3 (EVR-OWA operator, García-Zamora et al., 2022).
Let $\hat{D}$ be an Extreme Values Reduction, and consider $n\in \mathbb{N}$. Then, the weight vector ${\omega ^{\hat{D}}}=({\omega _{1}^{\hat{D}}},{\omega _{2}^{\hat{D}}},\dots ,{\omega _{n}^{\hat{D}}})$ is defined as:
\[ {\omega _{k}^{\hat{D}}}=\hat{D}\bigg(\frac{k}{n}\bigg)-\hat{D}\bigg(\frac{k-1}{n}\bigg),\hspace{1em}\text{for}\hspace{2.5pt}k=1,2,\dots ,n.\]
The OWA operator ${\Psi _{{\omega ^{\hat{D}}}}}$ associated with these weights is called the EVR-OWA operator.
The EVR-OWA operator ensures unbiased aggregations that emphasize intermediate values without neglecting extreme opinions, making it a robust tool for handling consensus-based aggregations in CRPs (García-Zamora et al., 2022).
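The claimed lack of bias can be checked numerically with a small sketch (ours, not from the paper): we generate weights from the polynomial EVR ${p_{\alpha }}$ via Definition 3 and evaluate Yager's orness measure, $\mathrm{orness}(\omega )=\frac{1}{n-1}{\sum _{k=1}^{n}}(n-k){\omega _{k}}$, which equals 0.5 for the symmetric weight vectors that EVRs produce.

```python
def polynomial_evr(alpha):
    """Polynomial EVR: p_alpha(x) = (1 - alpha)x + 3*alpha*x^2 - 2*alpha*x^3."""
    return lambda x: (1 - alpha) * x + 3 * alpha * x**2 - 2 * alpha * x**3

def evr_owa_weights(D, n):
    """Definition 3: w_k = D(k/n) - D((k-1)/n) for k = 1, ..., n."""
    return [D(k / n) - D((k - 1) / n) for k in range(1, n + 1)]

def orness(weights):
    """Yager's orness measure; 0.5 indicates an unbiased aggregation."""
    n = len(weights)
    return sum((n - k) * w for k, w in enumerate(weights, start=1)) / (n - 1)
```

For instance, with $\alpha =1$ and $n=4$ the weights are symmetric, emphasize the two middle positions, and give an orness of exactly 0.5.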

2.3 Minimum Cost Consensus

Ben-Arieh and Easton (2007) introduced the concept of MCC to model CRPs as mathematical optimization problems. In their proposal, they reformulated the consensus process as a minimization problem whose objective function is the cost of modifying the experts’ preferences, together with a constraint guaranteeing that the experts’ final opinions are close enough to the collective opinion.
(MCC)
\[ \begin{aligned}{}\text{min}& \hspace{5.0pt}{\sum \limits_{k=1}^{m}}{c_{k}}|{\overline{o}_{k}}-{o_{k}}|,\\ {} \text{s.t.}& \hspace{5.0pt}|{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\end{aligned}\]
where $({\overline{o}_{1}},\dots ,{\overline{o}_{m}})$ are the adjusted experts’ opinions, $\overline{g}$ is the collective opinion, and ε is the maximum acceptable distance of each expert to the collective opinion.
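To build intuition for the (MCC) model, note that for a fixed collective opinion $\overline{g}$ the optimal adjusted opinion ${\overline{o}_{k}}$ is simply the projection of ${o_{k}}$ onto $[\overline{g}-\varepsilon ,\overline{g}+\varepsilon ]$, so the minimum cost, viewed as a function of $\overline{g}$, is piecewise linear and convex and can be minimized by inspecting its breakpoints ${o_{k}}\pm \varepsilon $. The following is a self-contained sketch (ours, treating $\overline{g}$ as a free scalar in $[0,1]$; not the authors' solver-based implementation):

```python
def mcc_cost(g, opinions, costs, eps):
    """Cost of moving each opinion just inside the band [g - eps, g + eps]."""
    return sum(c * max(0.0, abs(o - g) - eps) for o, c in zip(opinions, costs))

def solve_mcc(opinions, costs, eps):
    """Minimize the convex piecewise-linear cost over candidate values of g:
    the breakpoints o_k +/- eps (kept inside [0, 1]) plus the endpoints."""
    candidates = {0.0, 1.0}
    for o in opinions:
        for s in (-1.0, 1.0):
            b = o + s * eps
            if 0.0 <= b <= 1.0:
                candidates.add(b)
    g = min(candidates, key=lambda t: mcc_cost(t, opinions, costs, eps))
    # each expert's optimal adjusted opinion: project o_k into the band
    o_bar = [min(max(o, g - eps), g + eps) for o in opinions]
    return g, o_bar, mcc_cost(g, opinions, costs, eps)
```

For two experts at 0.1 and 0.9 with unit costs and $\varepsilon =0.2$, the opinions must end up within a band of width $2\varepsilon =0.4$, so a total movement of 0.4 is unavoidable, and the sketch recovers exactly that cost.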
Later, Zhang et al. (2011) analysed the influence of the aggregation operator used to derive the collective opinion on the computation of the consensus level. In particular, their MCC model based on the OWA operator can be described as follows:
(OWA-MCC)
\[ \begin{aligned}{}\text{min}& \hspace{5.0pt}{\sum \limits_{k=1}^{m}}{c_{k}}|{\overline{o}_{k}}-{o_{k}}|,\\ {} \text{s.t.}& \hspace{5.0pt}\left\{\begin{array}{l}\overline{g}={\Psi _{\omega }}({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}}),\hspace{1em}\\ {} |{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{2.5pt}k=1,2,\dots ,m,\hspace{1em}\end{array}\right.\end{aligned}\]
where ${\Psi _{\omega }}:{[0,1]^{m}}\to [0,1]$ is the OWA operator used to obtain the collective opinion from the experts’ opinions.
Even though MCC models provide modified opinions that are close to the collective opinion by minimizing a cost function, they neglect the traditional consensus measures used in the CRP literature (García-Zamora et al., 2023). Consequently, MCC models cannot guarantee that a desired level of agreement is achieved. For this reason, Labella et al. (2020) introduced the notion of CMCC models, described mathematically below:
(CMCC)
\[ \begin{aligned}{}\text{min}& \hspace{5.0pt}{\sum \limits_{k=1}^{m}}{c_{k}}|{\overline{o}_{k}}-{o_{k}}|,\\ {} \text{s.t.}& \hspace{5.0pt}\left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}{w_{k}}{\overline{o}_{k}},\hspace{1em}\\ {} |{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{2.5pt}k=1,2,\dots ,m,\hspace{1em}\\ {} \kappa ({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})\geqslant {\mu _{0}},\hspace{1em}\end{array}\right.\end{aligned}\]
where the function $\kappa :{[0,1]^{m}}\to [0,1]$ measures the consensus level within the group experts and ${\mu _{0}}\in [0,1]$ is the consensus threshold.
Since the consensus level can be computed from two types of consensus measures, Labella et al. (2020) proposed two different ways to derive the consensus in the (CMCC) models:
  • Based on the distance between experts and collective opinions:
    \[ {\kappa _{1}}({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})=1-{\sum \limits_{k=1}^{m}}{w_{k}}|{\overline{o}_{k}}-\overline{g}|;\]
  • Based on the distance among experts:
    \[ {\kappa _{2}}({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})=1-{\sum \limits_{k=1}^{m-1}}{\sum \limits_{l=k+1}^{m}}\frac{{w_{k}}+{w_{l}}}{m-1}|{\overline{o}_{k}}-{\overline{o}_{l}}|,\]
    where $w=({w_{1}},{w_{2}},\dots ,{w_{m}})$ are the experts’ weights.
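Both measures are straightforward to compute; the following is a minimal sketch (our illustration, using a weighted mean for the collective opinion $\overline{g}$, as in the (CMCC) model):

```python
def kappa1(o_bar, w):
    """Consensus as the weighted distance between each expert
    and the collective opinion g (weighted mean of opinions)."""
    g = sum(wk * ok for wk, ok in zip(w, o_bar))
    return 1 - sum(wk * abs(ok - g) for wk, ok in zip(w, o_bar))

def kappa2(o_bar, w):
    """Consensus as the weighted pairwise distance among experts."""
    m = len(o_bar)
    total = 0.0
    for k in range(m - 1):
        for l in range(k + 1, m):
            total += (w[k] + w[l]) / (m - 1) * abs(o_bar[k] - o_bar[l])
    return 1 - total
```

For opinions (0.2, 0.4, 0.6) with equal weights, ${\kappa _{1}}=13/15\approx 0.867$ and ${\kappa _{2}}=11/15\approx 0.733$, illustrating that the pairwise measure is the stricter of the two here.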

3 OWA Operators in CMCC Models

On the one hand, the (CMCC) model allows the consensus to be measured using classical consensus measures, but uses a weighted average operator to compute the collective opinion. On the other hand, the (OWA-MCC) model integrates the OWA operator in the optimization problem but does not consider such classical consensus measures. Consequently, our starting point is extending (CMCC) models to account for OWA aggregations.
(OWA-CMCC1)
\[ \begin{aligned}{}{\text{min}_{\overline{o}}}& \hspace{5.0pt}{\sum \limits_{k=1}^{m}}{c_{k}}|{\overline{o}_{k}}-{o_{k}}|,\\ {} \text{s.t.}& \hspace{5.0pt}\left\{\begin{array}{l}\overline{g}={\Psi _{\omega }}({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}}),\hspace{1em}\\ {} |{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} 1-{\textstyle\textstyle\sum _{k=1}^{m}}{w_{k}}|{\overline{o}_{k}}-\overline{g}|\geqslant {\mu _{0}}.\hspace{1em}\end{array}\right.\end{aligned}\]
(OWA-CMCC2)
\[ \begin{aligned}{}{\text{min}_{\overline{o}}}& \hspace{5.0pt}{\sum \limits_{k=1}^{m}}{c_{k}}|{\overline{o}_{k}}-{o_{k}}|,\\ {} \text{s.t.}& \hspace{5.0pt}\left\{\begin{array}{l}\overline{g}={\Psi _{\omega }}({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}}),\hspace{1em}\\ {} |{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} 1-{\textstyle\textstyle\sum _{k=1}^{m-1}}{\textstyle\textstyle\sum _{l=k+1}^{m}}\frac{{w_{k}}+{w_{l}}}{m-1}|{\overline{o}_{k}}-{\overline{o}_{l}}|\geqslant {\mu _{0}},\hspace{1em}\end{array}\right.\end{aligned}\]
where ${c_{1}},\dots ,{c_{m}}$ are the costs of modifying the experts’ opinions, $\overline{o}=({\overline{o}_{1}},\dots ,{\overline{o}_{m}})$ are the experts’ modified opinions, $o=({o_{1}},\dots ,{o_{m}})$ are the experts’ original opinions, $\varepsilon \in [0,1]$ and ${\mu _{0}}\in [0,1)$ are the consensus parameters, $\omega =({\omega _{1}},{\omega _{2}},\dots ,{\omega _{m}})$ are the OWA weights, and $w=({w_{1}},\dots ,{w_{m}})$ are the experts’ weights.
These two models offer the advantage of incorporating both consensus measures and OWA operators simultaneously in MCC. However, they also present a significant drawback: because the OWA operator involves a non-linear operation that requires sorting the input values, solving these non-linear programming models with standard optimization solvers necessitates additional reformulation to make the problem tractable.

3.1 Binary Linear Programming Based Formulation

Our first approach to reformulating the models (OWA-CMCC1) and (OWA-CMCC2) introduces binary variables into the system, resulting in a BLP problem.
Theorem 1.
The model (OWA-CMCC1) can be transformed into an equivalent BLP problem with the help of the following linear transformations:
\[\begin{aligned}{}& {\overline{o}_{k}}-{o_{k}}={u_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{o_{k}}|={v_{k}},\hspace{1em}k=1,2,\dots ,m,\\ {} & {\overline{o}_{k}}-\overline{g}={y_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-\overline{g}|={z_{k}},\hspace{1em}k=1,2,\dots ,m,\end{aligned}\]
resulting in the optimization problem:
(OWA-CMCC1-BLP)
\[\begin{aligned}{}& \min {\sum \limits_{k=1}^{m}}{c_{k}}{v_{k}},\end{aligned}\]
(1)
\[\begin{aligned}{}& \textit{s.t.}\left\{\begin{array}{l@{\hskip4.0pt}l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}{\omega _{k}}{t_{k}},\hspace{2em}& \text{(a)}\\ {} {t_{k}}\leqslant {\overline{o}_{d}}+M{G_{kd}},\hspace{1em}k,d=1,2,\dots ,m,\hspace{2em}& \text{(b)}\\ {} {t_{k}}\geqslant {\overline{o}_{d}}-M{F_{kd}},\hspace{1em}k,d=1,2,\dots ,m,\hspace{2em}& \text{(c)}\\ {} {\textstyle\textstyle\sum _{d=1}^{m}}{G_{kd}}\leqslant m-k,\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(d)}\\ {} {\textstyle\textstyle\sum _{d=1}^{m}}{F_{kd}}\leqslant k-1,\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(e)}\\ {} {\overline{o}_{k}}-{o_{k}}={u_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(f)}\\ {} {u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(g)}\\ {} -{u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(h)}\\ {} {\overline{o}_{k}}-\overline{g}={y_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(i)}\\ {} {y_{k}}\leqslant {z_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(j)}\\ {} -{y_{k}}\leqslant {z_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(k)}\\ {} {z_{k}}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(l)}\\ {} 1-{\textstyle\textstyle\sum _{k=1}^{m}}{w_{k}}{z_{k}}\geqslant {\mu _{0}},\hspace{2em}& \text{(m)}\\ {} {G_{kd}},{F_{kd}}\in \{0,1\},\hspace{1em}k,d=1,2,\dots ,m,\hspace{2em}& \text{(n)}\end{array}\right.\end{aligned}\]
where ${t_{k}}$ $(k=1,2,\dots ,m)$ is the k-th largest value in $\{{\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}}\}$ and M is a sufficiently large positive number.
Proof.
In the model (OWA-CMCC1-BLP), the constraints (1i)–(1k) transform the absolute values $|{\overline{o}_{k}}-\overline{g}|$, $k=1,\dots ,m$, into linear constraints, while the constraints (1l) and (1m) ensure that $|{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon $, $k=1,\dots ,m$, and the consensus level constraint are satisfied. The constraints (1a)–(1e) and (1n) linearize the non-linear OWA constraint by means of linear constraints with the binary variables ${G_{kd}}$ and ${F_{kd}}$, which encode the reordering step of the OWA operator (Lemma 3, Zhang et al., 2013). Furthermore, the transformation $|{\overline{o}_{k}}-{o_{k}}|={v_{k}}$, $k=1,\dots ,m$, guarantees that the objective functions of (OWA-CMCC1) and (OWA-CMCC1-BLP) coincide, while the constraints (1f)–(1h) ensure that the absolute value property $|{u_{k}}|={v_{k}}$ holds for all k. Therefore, (OWA-CMCC1-BLP) is an equivalent BLP formulation of the non-linear model (OWA-CMCC1).  □
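The reordering trick behind the proof can be sanity-checked without a solver: choosing the binary variables according to the true descending order of the opinions satisfies all the big-M constraints that pin each ${t_{k}}$ to the k-th largest opinion. A small verification sketch (ours; M is any constant larger than the opinion range):

```python
def check_owa_sorting_constraints(o_bar, M=1000.0):
    """Build binaries G, F from the descending order of o_bar and verify
    the big-M constraints that pin t_k to the k-th largest opinion:
        t_k <= o_d + M*G[d],  t_k >= o_d - M*F[d],
        sum_d G[d] <= m - k,  sum_d F[d] <= k - 1.
    """
    m = len(o_bar)
    t = sorted(o_bar, reverse=True)  # t[k-1] is the k-th largest opinion
    for k in range(1, m + 1):
        tk = t[k - 1]
        G = [1 if od < tk else 0 for od in o_bar]  # relax upper bound where o_d < t_k
        F = [1 if od > tk else 0 for od in o_bar]  # relax lower bound where o_d > t_k
        assert sum(G) <= m - k and sum(F) <= k - 1
        assert all(tk <= od + M * g for od, g in zip(o_bar, G))
        assert all(tk >= od - M * f for od, f in zip(o_bar, F))
    return True
```

At most $m-k$ opinions are strictly smaller than the k-th largest and at most $k-1$ are strictly larger, which is exactly what the cardinality constraints on G and F allow; ties are handled because tied opinions need no relaxation at all.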
In the following theorem, we establish the equivalence between the model (OWA-CMCC2) and its BLP reformulation (OWA-CMCC2-BLP).
Theorem 2.
The model (OWA-CMCC2) can be transformed into an equivalent BLP problem with the following transformations:
\[\begin{aligned}{}& {\overline{o}_{k}}-{o_{k}}={u_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{o_{k}}|={v_{k}},\hspace{1em}k=1,2,\dots ,m,\\ {} & {\overline{o}_{k}}-{\overline{o}_{l}}={y_{kl}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{\overline{o}_{l}}|={z_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\end{aligned}\]
resulting in the following model:
(OWA-CMCC2-BLP)
\[ \begin{aligned}{}\min & \hspace{5.0pt}{\sum \limits_{k=1}^{m}}{c_{k}}{v_{k}}\\ {} \textit{s.t.}& \hspace{5.0pt}\left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}{\omega _{k}}{t_{k}},\hspace{1em}\\ {} {t_{k}}\leqslant {\overline{o}_{d}}+M{G_{kd}},\hspace{1em}k,d=1,2,\dots ,m,\hspace{1em}\\ {} {t_{k}}\geqslant {\overline{o}_{d}}-M{F_{kd}},\hspace{1em}k,d=1,2,\dots ,m,\hspace{1em}\\ {} {\textstyle\textstyle\sum _{d=1}^{m}}{G_{kd}}\leqslant m-k,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\textstyle\textstyle\sum _{d=1}^{m}}{F_{kd}}\leqslant k-1,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{k}}-\overline{g}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} \overline{g}-{\overline{o}_{k}}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{k}}-{o_{k}}={u_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} -{u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{{k_{1}}}}-{\overline{o}_{{k_{2}}}}={y_{{k_{1}}{k_{2}}}},\hspace{1em}{k_{1}}=1,2,\dots ,m-1,\hspace{2.5pt}{k_{2}}={k_{1}}+1,\dots ,m,\hspace{1em}\\ {} {y_{{k_{1}}{k_{2}}}}\leqslant {z_{{k_{1}}{k_{2}}}},\hspace{1em}{k_{1}}=1,2,\dots ,m-1,\hspace{2.5pt}{k_{2}}={k_{1}}+1,\dots ,m,\hspace{1em}\\ {} -{y_{{k_{1}}{k_{2}}}}\leqslant {z_{{k_{1}}{k_{2}}}},\hspace{1em}{k_{1}}=1,2,\dots ,m-1,\hspace{2.5pt}{k_{2}}={k_{1}}+1,\dots ,m,\hspace{1em}\\ {} 1-{\textstyle\textstyle\sum _{{k_{1}}=1}^{m-1}}{\textstyle\textstyle\sum _{{k_{2}}={k_{1}}+1}^{m}}\frac{{w_{{k_{1}}}}+{w_{{k_{2}}}}}{m-1}{z_{{k_{1}}{k_{2}}}}\geqslant {\mu _{0}},\hspace{1em}\\ {} {G_{kd}},\hspace{2.5pt}{F_{kd}}\in \{0,1\},\hspace{1em}k,d=1,2,\dots ,m,\hspace{1em}\\ {} {u_{k}},{v_{k}}\geqslant 0,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {y_{{k_{1}}{k_{2}}}},{z_{{k_{1}}{k_{2}}}}\geqslant 0,\hspace{1em}{k_{1}}=1,2,\dots ,m-1,\hspace{2.5pt}{k_{2}}={k_{1}}+1,\dots ,m.\hspace{1em}\end{array}\right.\end{aligned}\]
Proof.
The proof of this result follows the same lines as that of Theorem 1.  □
Computational Issues
In practice, the BLP models (OWA-CMCC1-BLP) and (OWA-CMCC2-BLP) can be used to find the optimal consensus opinion under OWA aggregation with generic weights and costs for the experts. However, BLP problems are NP-hard in theory, and finding an optimal solution may take exponential time even with a state-of-the-art solver such as Gurobi (Gurobi Optimization, LLC, 2022). Here, we show empirically that the time required to find the exact solution can grow exponentially with the number of experts in the worst case. For all experiments reported in this manuscript, we have used JuMP (Julia for Mathematical Programming), a domain-specific modelling language for mathematical optimization embedded in Julia (Dunning et al., 2017; Bezanson et al., 2017). Specifically, the optimization experiments are conducted in Julia 1.7.3 on a desktop with Windows 10 Professional, a 2.5 GHz Intel Core i7-11700 CPU, and 16 GB RAM, invoking the Gurobi 9.0.3 optimizer.
For this experiment, we first fix the number of experts; their opinions are generated from a uniform distribution on $[0,1]$, and the associated costs are drawn randomly from the set $\{1,2,3,4,5\}$ with equal probability. To create instances for studying the computational cost, we generate the OWA weights using the RIMQ ${Q_{\alpha }}(x)={x^{\alpha }}$, $x\in [0,1]$, with $\alpha =1.4$, which yields increasing weights across the ordered positions. In addition, we set the consensus parameters $\varepsilon =0.2$ and ${\mu _{0}}=0.7$. For a fixed number of experts, we randomly generate 100 instances of each GDM scenario with the above setup and solve the respective optimization models to find the modified opinions by invoking the Gurobi solver. The solver run-time in each case is recorded, and we report the worst run-time among the 100 instances. To keep the experiments time-bound, we set a run-time limit for solving each instance. With this setup, the worst-case run-times (in seconds) for the different instances are reported in Table 1.
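The instance generation described above can be sketched as follows (our illustrative Python; the actual experiments were run in Julia/JuMP with Gurobi, and the function name is ours):

```python
import random

def generate_instance(m, alpha=1.4, seed=0):
    """Random GDM instance following the experimental setup: opinions drawn
    uniformly from [0, 1], unit costs drawn from {1, ..., 5} with equal
    probability, and OWA weights from the RIM quantifier Q(x) = x**alpha."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(m)]
    costs = [rng.choice([1, 2, 3, 4, 5]) for _ in range(m)]
    # telescoping weights: w_k = Q(k/m) - Q((k-1)/m), so they sum to Q(1) = 1
    weights = [(k / m) ** alpha - ((k - 1) / m) ** alpha for k in range(1, m + 1)]
    return opinions, costs, weights
```

Each generated weight vector sums to 1 by construction, so it is a valid OWA weighting vector regardless of m and α.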
We found that, when the number of experts in the group is greater than or equal to 10, there are instances that cannot be solved within the given time limit by the solver; these are indicated by an asterisk next to the overrun time in Table 1. This shows that finding the exact solution of CMCC under OWA aggregation with generic costs and OWA weights is computationally very expensive, even for a group of only 10 experts.
Table 1
Time cost (in seconds) of the BLP-based models.
Consensus measure Models Experts
5 10 15 20
${\kappa _{1}}$ (CMCC1) 0.016 0.029 0.032 0.035
(OWA-CMCC1-BLP) 0.031 ${180^{\ast }}$ ${660^{\ast }}$ ${1500^{\ast }}$
${\kappa _{2}}$ (CMCC2) 0.017 0.031 0.034 0.036
(OWA-CMCC2-BLP) 0.037 ${180^{\ast }}$ ${660^{\ast }}$ ${1500^{\ast }}$
When the decision information (costs and weights) attached to a particular decision scenario is fixed, the time cost may still vary with the consensus parameters ε and ${\mu _{0}}$. To understand the effect of these parameters on the time complexity, we consider a GDM scenario with 5 experts and randomly generate 100 instances. Each instance is solved for different values of the parameters, and the worst-case time for each configuration is reported in Table 2. It can be observed that the time cost varies widely with the configuration of the parameters. Moreover, it may depend heavily on the decision information: costs, opinions, and weights.
Table 2
The time costs for different values of ε and ${\mu _{0}}$ in (OWA-CMCC1-BLP).
ε ${\mu _{0}}$
0.70 0.75 0.80 0.85 0.90 0.95
0.30 0.031 0.032 0.019 0.032 0.063 0.087
0.25 0.034 0.033 0.078 0.032 0.063 0.041
0.20 0.031 0.063 0.031 0.032 0.031 0.063
0.15 0.033 0.031 0.032 0.033 0.033 0.032
0.10 0.046 0.033 0.047 0.034 0.047 0.047
0.05 0.032 0.033 0.047 0.032 0.049 0.034

3.2 Linear Programming-Based Formulations

Even though we proposed the BLP formulations of the models (OWA-CMCC1) and (OWA-CMCC2) in the previous section, we have also found that, under generic costs and weights, the computational cost of solving these models grows exponentially with the number of experts, even for moderately sized groups. To offer alternatives with better computational performance, we introduce below fully linearized versions of the (OWA-CMCC) models under certain assumptions on the parameters.

3.2.1 Equal Weighting Vector and Opinion Changing Cost for DMs

First, let us assume that both the weighting vector used in the consensus measure and the vector of costs of moving the experts’ preferences are equal to $(\frac{1}{m},\frac{1}{m},\stackrel{m\hspace{2.5pt}\text{times}}{\cdots },\frac{1}{m})$. In such a case, the linearized versions of the models (OWA-CMCC1) and (OWA-CMCC2) with equal costs and equal weights in the consensus measure are as follows:
Theorem 3.
Let $({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})$ be the optimal solution to the following optimization problem.
(OWA-CMCC1-LP)
\[ \begin{array}{l}\min \displaystyle \frac{1}{m}{\displaystyle \sum \limits_{k=1}^{m}}|{\overline{o}_{k}}-{o_{k}}|,\\ {} \text{s.t.}\left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}{\omega _{k}}{\overline{o}_{\sigma (k)}},\hspace{1em}\\ {} |{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} 1-\frac{1}{m}{\textstyle\textstyle\sum _{k=1}^{m}}|{\overline{o}_{k}}-\overline{g}|\geqslant {\mu _{0}},\hspace{1em}\\ {} {\overline{o}_{\sigma (k)}}-{\overline{o}_{\sigma (k-1)}}\leqslant 0,\hspace{1em}k=2,\dots ,m,\hspace{1em}\end{array}\right.\end{array}\]
where σ is a permutation that orders the original preference values ${o_{1}},{o_{2}},\dots ,{o_{m}}$ decreasingly and ${\omega _{1}},{\omega _{2}},\dots ,{\omega _{m}}$ are the weights used in the OWA aggregation. Then $({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})$ is also an optimal solution to (OWA-CMCC1).
Proof.
Let $\mathcal{R}=\{x\in {[0,1]^{m}}\hspace{2.5pt}:\hspace{2.5pt}|{x_{k}}-{\Psi _{\omega }}(x)|\leqslant \varepsilon ,1-\frac{1}{m}{\textstyle\sum _{k=1}^{m}}|{x_{k}}-{\Psi _{\omega }}(x)|\geqslant {\mu _{0}}\}$ denote the feasible region of the (OWA-CMCC1) model when $w=(\frac{1}{m},\cdots \hspace{0.1667em},\frac{1}{m})$. Furthermore, note that $x\in \mathcal{R}\hspace{0.2778em}\Longleftrightarrow \hspace{0.2778em}{x_{\sigma }}\in \mathcal{R}$ for any permutation σ of the components of x. Since all the costs are equal, the objective function is minimal when we order the values of $\overline{o}=({\overline{o}_{1}},\cdots \hspace{0.1667em},{\overline{o}_{m}})$ in the same way as $o=({o_{1}},\cdots \hspace{0.1667em},{o_{m}})$. Combining both facts, we conclude that the solution of (OWA-CMCC1-LP) also minimizes (OWA-CMCC1).  □
Theorem 4.
Let $({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})$ be the optimal solution to the following optimization problem:
(OWA-CMCC2-LP)
\[ \begin{array}{l}\min \displaystyle \frac{1}{m}{\displaystyle \sum \limits_{k=1}^{m}}|{\overline{o}_{k}}-{o_{k}}|,\\ {} \text{s.t.}\left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}{\omega _{k}}{\overline{o}_{\sigma (k)}},\hspace{1em}\\ {} |{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} 1-\frac{2}{m(m-1)}{\textstyle\textstyle\sum _{k=1}^{m-1}}{\textstyle\textstyle\sum _{l=k+1}^{m}}|{\overline{o}_{k}}-{\overline{o}_{l}}|\geqslant {\mu _{0}},\hspace{1em}\\ {} {\overline{o}_{\sigma (k)}}-{\overline{o}_{\sigma (k-1)}}\leqslant 0,\hspace{1em}k=2,\dots ,m,\hspace{1em}\end{array}\right.\end{array}\]
where σ is a permutation that orders the original preference values ${o_{1}},{o_{2}},\dots ,{o_{m}}$ decreasingly and $\omega =({\omega _{1}},{\omega _{2}},\dots ,{\omega _{m}})$ is the weight vector used in the OWA aggregation. Then $({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})$ is also an optimal solution to (OWA-CMCC2).
Proof.
The proof of this result is analogous to the previous one.  □
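To make the two consensus measures appearing in the constraints of these models concrete, the following sketch computes ${\kappa _{1}}$ (based on the distance of each expert to the collective opinion) and ${\kappa _{2}}$ (based on the mean pairwise distance among experts); sorting the OWA inputs decreasingly is the convention assumed in the models above.

```python
def owa(opinions, weights):
    """OWA aggregation: weight the opinions after sorting them decreasingly."""
    ordered = sorted(opinions, reverse=True)
    return sum(w * o for w, o in zip(weights, ordered))

def kappa1(opinions, g):
    """Consensus level from the mean distance of each expert to the collective g."""
    return 1 - sum(abs(o - g) for o in opinions) / len(opinions)

def kappa2(opinions):
    """Consensus level from the mean pairwise distance among the experts."""
    m = len(opinions)
    total = sum(abs(opinions[k] - opinions[l])
                for k in range(m - 1) for l in range(k + 1, m))
    return 1 - 2 * total / (m * (m - 1))

ops = [0.55, 0.6, 0.62, 0.7, 0.75]
g = owa(ops, [0.2] * 5)            # equal weights: the OWA reduces to the mean
print(round(kappa1(ops, g), 4), round(kappa2(ops), 4))
```

With equal weights the OWA collapses to the arithmetic mean, which is why the equal-weight assumption of this subsection permits the simple LP formulations above.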

3.2.2 Decreasing Weighting Vector for OWA

When the OWA weighting vector is decreasing, we can also formulate (OWA-CMCC1) and (OWA-CMCC2) as fully linear programming problems. The basis of such a formulation stems from the representation of the OWA with decreasing weight vectors. Let $\omega =({\omega _{1}},{\omega _{2}},\dots ,{\omega _{m}})$ be a decreasing weight vector, i.e. ${\omega _{k}}\geqslant {\omega _{k+1}}$, $k=1,\dots ,m-1$. Then, the aggregation of the inputs $o=({o_{1}},{o_{2}},\dots ,{o_{m}})$ by the OWA operator can be represented as follows:
\[ {\Psi _{\omega }}({o_{1}},{o_{2}},\dots ,{o_{m}})={\sum \limits_{k=1}^{m}}({\omega _{k}}-{\omega _{k+1}}){L_{k}}(o),\]
where ${\omega _{m+1}}=0$ and ${L_{k}}(o)={\textstyle\sum _{s=1}^{k}}{o_{\sigma (s)}}$, with ${o_{\sigma (s)}}$ being the s-th largest of $({o_{1}},{o_{2}},\dots ,{o_{m}})$ and σ the corresponding permutation. Additionally, given $o=({o_{1}},{o_{2}},\dots ,{o_{m}})$, the value ${L_{k}}(o)$ can be found by solving the linear programming model (Galand and Spanjaard, 2012):
\[\begin{aligned}{}\text{min}& \hspace{5.0pt}{L_{k}}(o)=k{b_{k}}+{\sum \limits_{d=1}^{m}}{\eta _{kd}},\\ {} \text{s.t.}& \hspace{5.0pt}\left\{\begin{array}{l}{b_{k}}+{\eta _{kd}}\geqslant {o_{d}},\hspace{1em}d=1,\dots ,m,\hspace{1em}\\ {} {\eta _{kd}}\geqslant 0,\hspace{1em}d=1,\dots ,m.\hspace{1em}\end{array}\right.\end{aligned}\]
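The decomposition of the OWA value through the partial sums ${L_{k}}$ can be checked numerically without any solver, since ${L_{k}}(o)$ is simply the sum of the k largest inputs; the following sketch verifies that the cumulative-sum form reproduces the directly computed OWA value for a decreasing weight vector.

```python
def owa_direct(o, w):
    """OWA value computed directly on the decreasingly sorted inputs."""
    ordered = sorted(o, reverse=True)
    return sum(wk * ok for wk, ok in zip(w, ordered))

def owa_via_Lk(o, w):
    """OWA value via sum_k (w_k - w_{k+1}) * L_k(o), with w_{m+1} = 0 and
    L_k(o) the sum of the k largest components of o."""
    ordered = sorted(o, reverse=True)
    w_ext = list(w) + [0.0]
    total, L = 0.0, 0.0
    for k in range(len(o)):
        L += ordered[k]                       # running partial sum L_{k+1}(o)
        total += (w_ext[k] - w_ext[k + 1]) * L
    return total

o = [0.3, 0.9, 0.1, 0.6]
w = [0.4, 0.3, 0.2, 0.1]                      # decreasing weight vector
print(owa_direct(o, w), owa_via_Lk(o, w))     # the two values coincide
```

Note that the decomposition relies on ${\omega _{k}}-{\omega _{k+1}}\geqslant 0$, which is exactly the decreasing-weight assumption of this subsection.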
This reformulation allows us to express the OWA aggregation through linear inequalities with the help of the auxiliary continuous variables ${b_{k}}$ and ${\eta _{kd}}$. Based on the above facts, we can reformulate (OWA-CMCC1) as follows:
\[\begin{aligned}{}\text{min}& {\sum \limits_{k=1}^{m}}{c_{k}}|{o_{k}}-{\overline{o}_{k}}|,\\ {} \text{s.t}\hspace{2.5pt}& \left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}({\omega _{k}}-{\omega _{k+1}})(k{b_{k}}+{\textstyle\textstyle\sum _{d=1}^{m}}{\eta _{kd}}),\hspace{1em}\\ {} |{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{1em}k=1,\dots ,m,\hspace{1em}\\ {} 1-{\textstyle\textstyle\sum _{k=1}^{m}}{w_{k}}|{\overline{o}_{k}}-\overline{g}|\geqslant {\mu _{0}},\hspace{1em}\\ {} {b_{k}}+{\eta _{kd}}\geqslant {\overline{o}_{d}},\hspace{1em}k,d=1,\dots ,m,\hspace{1em}\\ {} {\eta _{kd}}\geqslant 0,\hspace{1em}k,d=1,\dots ,m.\hspace{1em}\end{array}\right.\end{aligned}\]
To handle LSGDM scenarios efficiently under a decreasing weight vector, the equivalent linear version of the previous model is formulated in the following theorem.
Theorem 5.
The (OWA-CMCC1) model under OWA aggregation with decreasing weight vector can be transformed into an equivalent linear programming problem with the help of the following linear transformations:
\[\begin{aligned}{}& {\overline{o}_{k}}-{o_{k}}={u_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{o_{k}}|={v_{k}},\hspace{1em}k=1,2,\dots ,m,\\ {} & {\overline{o}_{k}}-\overline{g}={y_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-\overline{g}|={z_{k}},\hspace{1em}k=1,2,\dots ,m,\end{aligned}\]
which lead to the linear programming problem:
(OWA-CMCC1-DW)
\[\begin{aligned}{}& \min {\sum \limits_{k=1}^{m}}{c_{k}}{v_{k}}\end{aligned}\]
(2)
\[\begin{aligned}{}& \text{s.t.}\left\{\begin{array}{l@{\hskip4.0pt}l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}({\omega _{k}}-{\omega _{k+1}})(k{b_{k}}+{\textstyle\textstyle\sum _{d=1}^{m}}{\eta _{kd}}),\hspace{2em}& \text{(a)}\\ {} {\overline{o}_{k}}-{o_{k}}={u_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(b)}\\ {} {u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(c)}\\ {} -{u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(d)}\\ {} {\overline{o}_{k}}-\overline{g}={y_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(e)}\\ {} {y_{k}}\leqslant {z_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(f)}\\ {} -{y_{k}}\leqslant {z_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(g)}\\ {} {z_{k}}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(h)}\\ {} 1-{\textstyle\textstyle\sum _{k=1}^{m}}{w_{k}}{z_{k}}\geqslant {\mu _{0}},\hspace{2em}& \text{(i)}\\ {} {b_{k}}+{\eta _{kd}}\geqslant {\overline{o}_{d}},\hspace{1em}k,d=1,\dots ,m,\hspace{2em}& \text{(j)}\\ {} {\eta _{kd}}\geqslant 0,\hspace{1em}k,d=1,\dots ,m.\hspace{2em}& \text{(k)}\end{array}\right.\end{aligned}\]
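To make the role of the transformations concrete, the following sketch checks the consensus constraints of the linearized model for a candidate vector of modified opinions, with the auxiliary variables resolved explicitly: at the optimum, ${z_{k}}=|{\overline{o}_{k}}-\overline{g}|$ and the ${b_{k}}$/${\eta _{kd}}$ block reproduces the partial sums ${L_{k}}$. It is a feasibility checker, not the optimization itself.

```python
def feasible_cmcc1(o_bar, w, eps, mu0):
    """Check the consensus constraints of (OWA-CMCC1-DW) for candidate opinions
    o_bar, with the auxiliary variables of the linearization resolved explicitly."""
    m = len(o_bar)
    ordered = sorted(o_bar, reverse=True)
    w_ext = list(w) + [0.0]                   # omega_{m+1} = 0
    # constraint (a): collective opinion via the L_k decomposition of the OWA
    g_bar = sum((w_ext[k] - w_ext[k + 1]) * sum(ordered[:k + 1]) for k in range(m))
    z = [abs(ok - g_bar) for ok in o_bar]     # constraints (e)-(g) at the optimum
    within_eps = all(zk <= eps for zk in z)   # constraint (h)
    consensus = 1 - sum(wk * zk for wk, zk in zip(w, z)) >= mu0   # constraint (i)
    return within_eps and consensus

w = [0.4, 0.3, 0.2, 0.1]                      # a decreasing OWA weight vector
print(feasible_cmcc1([0.5, 0.55, 0.6, 0.65], w, eps=0.2, mu0=0.8))   # True
print(feasible_cmcc1([0.1, 0.2, 0.8, 0.9], w, eps=0.2, mu0=0.8))     # False
```

The second call fails the ε-threshold: the most extreme opinions lie farther than 0.2 from the OWA collective opinion.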
Theorem 6.
The (OWA-CMCC2) model under OWA aggregation with decreasing weight vector can be converted into the equivalent linear programming problem using the transformation:
\[\begin{aligned}{}& {\overline{o}_{k}}-{o_{k}}={u_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{o_{k}}|={v_{k}},\hspace{1em}k=1,2,\dots ,m,\\ {} & {\overline{o}_{k}}-{\overline{o}_{l}}={y_{kl}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{\overline{o}_{l}}|={z_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\end{aligned}\]
and it can be formulated as the following linear programming model.
(OWA-CMCC2-DW)
\[ \begin{aligned}{}\min & \hspace{5.0pt}{\sum \limits_{k=1}^{m}}{c_{k}}{v_{k}}\\ {} \text{s.t.}& \hspace{5.0pt}\left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}({\omega _{k}}-{\omega _{k+1}})(k{b_{k}}+{\textstyle\textstyle\sum _{d=1}^{m}}{\eta _{kd}}),\hspace{1em}\\ {} {\overline{o}_{k}}-\overline{g}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} \overline{g}-{\overline{o}_{k}}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{k}}-{o_{k}}={u_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} -{u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{k}}-{\overline{o}_{l}}={y_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\hspace{1em}\\ {} {y_{kl}}\leqslant {z_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\hspace{1em}\\ {} -{y_{kl}}\leqslant {z_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\hspace{1em}\\ {} 1-{\textstyle\textstyle\sum _{k=1}^{m-1}}{\textstyle\textstyle\sum _{l=k+1}^{m}}\frac{{w_{k}}+{w_{l}}}{m-1}{z_{kl}}\geqslant {\mu _{0}},\hspace{1em}\\ {} {b_{k}}+{\eta _{kd}}\geqslant {\overline{o}_{d}},\hspace{1em}k,d=1,\dots ,m,\hspace{1em}\\ {} {\eta _{kd}}\geqslant 0,\hspace{1em}k,d=1,\dots ,m,\hspace{1em}\\ {} {u_{k}},{v_{k}}\geqslant 0,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {z_{kl}}\geqslant 0,\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m.\hspace{1em}\end{array}\right.\end{aligned}\]

3.2.3 Computational Issues

As mentioned earlier, solving the generic (OWA-CMCC) model is not computationally tractable in LSGDM settings due to its inherent NP-hardness. Therefore, this subsection analyses the computational cost of the alternative versions proposed above. For this purpose, we simulate GDM problems with a fixed number of experts, drawing their opinions from a uniform distribution on $[0,1]$, along with randomly generated OWA weight vectors and consensus parameters.
For a fixed number of experts, we create 1000 instances of a GDM problem and solve each instance by invoking the Gurobi solver. Table 3 reports the average run-time (in seconds) for finding an optimal opinion by solving the models (OWA-CMCC1-LP) and (OWA-CMCC2-LP). The simplified linear programming formulation (OWA-CMCC1-LP) allows finding the optimal solution of a GDM scenario with 1000 experts in less than a second, while finding the consensus solution for just 10 experts with the generic BLP formulation may take hours.
Table 3
Average time cost (in seconds) of the linearized models.
Consensus measure Models Experts
10 20 50 100 200 500 1000
${\kappa _{1}}$ (CMCC1) 0.0014 0.0017 0.0027 0.0044 0.0083 0.0240 0.0627
(OWA-CMCC1-LP) 0.0015 0.0018 0.0029 0.0049 0.0093 0.0278 0.0746
${\kappa _{2}}$ (CMCC2) 0.0016 0.0032 0.0149 0.0613 0.3000 3.1000 13.6070
(OWA-CMCC2-LP) 0.0019 0.0042 0.0258 0.1746 1.8615 88.0428 5179.2480
Another important observation is that the CMCC model employing the consensus measure based on distances among experts always requires much more computational time than the CMCC model whose consensus measure is based on distances between individual experts and the collective opinion. This is expected, since the pairwise measure introduces $O({m^{2}})$ absolute-value constraints, against the $O(m)$ constraints of the individual-to-collective measure. The effect is evident in Table 3, where the gap in average computational time between the models (OWA-CMCC1-LP) and (OWA-CMCC2-LP) widens rapidly with the group size. Moreover, for large-scale scenarios ($\geqslant 500$ experts), (CMCC2) can be computationally very expensive: in a GDM scenario with 1000 experts, some instances may take more than 6 hours to solve to optimality.
We have also demonstrated the computational gain of (OWA-CMCC-DW) with respect to (OWA-CMCC-BLP) through some small-scale experiments. For each GDM instance, we generated a decreasing weight vector using ${Q_{\alpha }}(x)={x^{\alpha }}\hspace{2.5pt}\forall \hspace{2.5pt}x\in [0,1]$ ($\alpha \gt 1$) and found the optimal consensual opinions with (OWA-CMCC1-BLP) and (OWA-CMCC1-DW). Table 4 provides the cost and time for each instance; when the solver cannot find the solution within the computational time budget, the optimal cost is not reported. The table shows that, in the case of decreasing weights, the model (OWA-CMCC1-DW) can always find the solution.
Table 4
Cost and time of the BLP- and LP-based models for decreasing OWA weights.
Models Measure Experts
5 7 10 15 20
(OWA-CMCC1-BLP) cost 46.091 41.053 59.677 – –
time 0.043 0.306 79.654 200∗ 400∗
(OWA-CMCC1-DW) cost 46.091 41.053 59.677 67.447 157.974
time 0.003 0.003 0.004 0.006 0.007
Further, we investigate the time cost required to solve (OWA-CMCC1-DW) in the large-scale case. The same protocol is followed to generate GDM scenarios with a fixed number of decision-makers, and the average time cost is reported in Table 5. We observe that, since it handles generic costs and experts’ weights, the linear programming model (OWA-CMCC1-DW) requires more computational time than (OWA-CMCC1-LP), which assumes equal costs and weights for the decision-makers. The time cost grows significantly: as Table 5 shows, for 100 decision-makers it requires on average more than 50 times the corresponding (OWA-CMCC1-LP) time cost.
Table 5
Average time cost (in seconds) of the LP-based model for decreasing OWA weights.
Models Experts
10 20 50 100 200 500 1000
(OWA-CMCC1-DW) 0.0025 0.0058 0.0502 0.2499 2.4461 117.9347 4971.5116
Finally, we analyse the impact of the consensus parameters ε and ${\mu _{0}}$ on the computational time of the linear model (OWA-CMCC1-LP). With a setup similar to the earlier experiment on the consensus parameters, we record the worst-case time cost for different configurations of the parameters, keeping the number of decision-makers fixed at 50 (see Table 6). The time cost varies between 0.002 and 0.025 seconds, with most configurations requiring about 0.003 seconds.
Table 6
The time-costs with different values of ε and ${\mu _{0}}$ in (OWA-CMCC1-LP).
ε ${\mu _{0}}$
0.70 0.75 0.80 0.85 0.90 0.95
0.30 0.025 0.003 0.003 0.023 0.003 0.003
0.25 0.003 0.003 0.003 0.004 0.003 0.003
0.20 0.002 0.003 0.003 0.003 0.003 0.023
0.15 0.003 0.024 0.003 0.023 0.003 0.003
0.10 0.022 0.003 0.002 0.024 0.003 0.003
0.05 0.023 0.003 0.003 0.022 0.003 0.003

4 The Impact of OWA Operators in Consensus

In this section, we analyse the impact of using the OWA operator in CRPs across different consensus scenarios via CMCC models, which achieve consensus with minimal modifications or costs, under the assumption that changing opinions consumes resources in the negotiation process. By diverse consensus scenarios, we mean that the distribution of experts’ opinions on a particular GDM problem may vary: it is not always uniform and could be polarized, which may alter the final costs and the collective decision. With this in mind, we assess the impact from two angles: the effect of different methods for computing the OWA weights, and the repercussions of the number of experts considered under the different consensus scenarios. Before delving into the simulation, we briefly introduce the consensus scenarios and the OWA weight generation mechanisms.
The diverse consensus scenarios are generated by the varying distribution of the experts’ initial opinions on the GDM problem. Specifically, we generate experts’ opinions according to three perspectives, namely, uniformly distributed, symmetrically polarized and asymmetrically polarized. These three perspectives are modelled through the following distributions:
  • Uniform: ${f_{u}}:[0,1]\to \mathbb{R}$ defined as ${f_{u}}(x)=1\hspace{2.5pt}\forall \hspace{2.5pt}x\in [0,1]$ (see Fig. 4a),
  • U-quadratic: ${f_{s}}:[0,1]\to \mathbb{R}$ defined as ${f_{s}}(x)=12{(x-\frac{1}{2})^{2}},x\in [0,1]$ (see Fig. 4b),
  • Exponential: ${f_{a}}:{\mathbb{R}^{+}}\to \mathbb{R}$ defined as ${f_{a}}(x)=\frac{3}{2}{e^{-\frac{3}{2}x}},x\in {\mathbb{R}^{+}}$ (see Fig. 4c).
Sampling experts’ opinions from these distributions allows us to generate three different consensus scenarios.
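A minimal way to sample opinions from these three distributions is inverse-CDF sampling, sketched below; clipping the exponential draws to $[0,1]$ is our assumption, since the text only states that opinions lie in the unit interval.

```python
import math, random

def sample_uniform():
    """Opinion from the uniform distribution on [0, 1]."""
    return random.random()

def sample_u_quadratic():
    """Opinion from the U-quadratic density f(x) = 12 (x - 1/2)^2 on [0, 1],
    whose CDF is F(x) = 4 (x - 1/2)^3 + 1/2; inverted in closed form."""
    t = (random.random() - 0.5) / 4.0
    return 0.5 + math.copysign(abs(t) ** (1.0 / 3.0), t)

def sample_exponential(rate=1.5):
    """Opinion from f(x) = rate * exp(-rate * x) via F^{-1}(u) = -ln(1-u)/rate,
    clipped to [0, 1] (our assumption)."""
    return min(-math.log(1.0 - random.random()) / rate, 1.0)

random.seed(0)
print([round(sample_u_quadratic(), 3) for _ in range(5)])
```

The U-quadratic sampler concentrates mass near 0 and 1 (symmetric polarization), whereas the exponential sampler skews the opinions toward 0 (asymmetric polarization).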
Fig. 4
Probability distributions for generating random opinions.
For the weight impact assessment, we consider two broad classes of weighting mechanisms in OWA aggregation: EVR-based and LQ-based weight allocation. Specifically, we analyse the impact of eight different RIMQs: the EVRs ${s_{0.08}},{s_{0.15}},{p_{0.5}},$ and ${p_{1}}$, which are convenient for consensual solutions and generate non-zero symmetric weights whose distribution varies with the parameter; and the LQs ${Q_{0.5,0.8}}$, which has low entropy and gives non-symmetrically distributed weights; ${Q_{0.1,0.9}}$, which provides symmetric weights; ${Q_{0,0}}$, which corresponds to the minimum operator; and ${Q_{1,1}}$, which generates the maximum operator. These quantifiers cover a broad spectrum of configurations for generating OWA weighting vectors.
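The LQ-based weights can be sketched as follows, assuming Yager's definition of ${Q_{a,b}}$ (zero below a, linear on $[a,b]$, one above b) and the construction ${w_{k}}=Q(k/m)-Q((k-1)/m)$. The degenerate quantifiers ${Q_{0,0}}$ and ${Q_{1,1}}$ place all weight on a single ordered position, collapsing the OWA to an extreme operator; which extreme it is depends on the sorting convention paired with the weights.

```python
def lq(a, b):
    """Yager's linguistic quantifier Q_{a,b}: 0 below a, linear on [a, b], 1 above b."""
    def Q(x):
        if a == b:                    # degenerate step quantifier (Q_{0,0}, Q_{1,1})
            return 1.0 if x >= a and x > 0 else 0.0
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)
    return Q

def lq_weights(m, a, b):
    """OWA weights induced by Q_{a,b}, assuming w_k = Q(k/m) - Q((k-1)/m)."""
    Q = lq(a, b)
    return [Q(k / m) - Q((k - 1) / m) for k in range(1, m + 1)]

print(lq_weights(10, 0.1, 0.9))   # symmetric weights ignoring the extreme positions
print(lq_weights(10, 0, 0))       # all weight concentrated on one ordered position
```

As the text notes, ${Q_{0.1,0.9}}$ yields symmetric weights that discard only the most extreme ordered positions, which explains its closeness to the arithmetic mean in the experiments below.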
With these setups in mind, we design Monte Carlo simulation experiments to analyse the impact of OWA on CRPs under the three consensus scenarios, each incorporating different weighting mechanisms and group sizes. Specifically, for a GDM problem with group size $m\in \mathbb{N}$ and a consensus scenario characterized by a probability distribution f, we generate multiple sets of m opinions, where each expert’s opinion is sampled from f. Afterwards, given a RIMQ $Q:[0,1]\to [0,1]$ from the eight above-mentioned RIMQs, the consensus opinions for each simulated GDM problem are computed via the (OWA-CMCC1-LP) and (CMCC1) models. We use the (CMCC1) model to understand how OWA impacts the CRP compared to the arithmetic mean, in terms of both the minimum cost of reaching consensus and the difference between the consensus opinions obtained from the two models. To ensure that only the impact of OWA is analysed, we fix the consensus parameters to $\varepsilon =0.3$ and ${\mu _{0}}=0.8$, with all costs set to 1 in all models. Note that, in general, ε and ${\mu _{0}}$ affect the cost of consensus, and their complex interaction has been analysed in depth by García-Zamora et al. (2022).
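The simulation protocol can be sketched as follows. Since the LP solve itself (done with JuMP/Gurobi in the paper) is outside this snippet, we use a naive stand-in that shrinks all opinions toward their arithmetic mean until the ${\kappa _{1}}$-based constraints hold: it returns a feasible, not minimum-cost, adjustment, so real experiments should call the LP models instead.

```python
import random

def shrink_to_consensus(opinions, eps=0.3, mu0=0.8):
    """Naive stand-in for the LP solve: contract all opinions toward their mean
    until the epsilon threshold and the kappa_1-based consensus level hold."""
    m = len(opinions)
    g = sum(opinions) / m
    for i in range(21):                      # contraction factor lam in {0, 0.05, ..., 1}
        lam = i / 20
        o_bar = [o + lam * (g - o) for o in opinions]
        devs = [abs(ok - g) for ok in o_bar]
        if max(devs) <= eps and 1 - sum(devs) / m >= mu0:
            cost = sum(abs(a - b) for a, b in zip(o_bar, opinions))
            return o_bar, cost
    return [g] * m, sum(abs(g - o) for o in opinions)   # lam = 1 is always feasible

random.seed(1)
costs = []
for _ in range(100):                         # 100 simulated GDM instances of 20 experts
    ops = [random.random() for _ in range(20)]
    costs.append(shrink_to_consensus(ops)[1])
print(round(sum(costs) / len(costs), 3))     # average cost of reaching consensus
```

Replacing the stand-in with calls to (CMCC1) and (OWA-CMCC1-LP), and the uniform sampler with the polarized ones, reproduces the structure of the experiments reported below.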
Each of the following subsections presents a detailed analysis of the impact of using different RIMQs, specifically EVRs (García-Zamora et al., 2022) and LQs (Yager, 1996), in the decision-making process. We also examine the performance of OWA operators with varying numbers of experts to assess their suitability for LSGDM contexts. Furthermore, we demonstrate that in certain LSGDM scenarios, the use of EVR-OWA or LQ methods yields results that are statistically similar to those produced by the arithmetic mean when forming group opinions.

4.1 Uniformly Distributed Opinions

In this subsection, we explore the performance of OWA operators in consensus processes where expert opinions are uniformly distributed. Our goal is to compare the efficiency and outcomes of the (OWA-CMCC1-LP) model against the classical (CMCC1) model, which uses the arithmetic mean for aggregation. By using the uniform distribution, we aim to simulate scenarios in which expert opinions are spread evenly across the available decision space. This allows us to observe how different RIMQs, specifically EVRs and LQs, influence the consensus-reaching process when there is no inherent polarization among the experts.
The results of the simulations are shown in Tables 7 and 8, which contain the values of the objective function and the distance between the modified preferences obtained with (CMCC1) and (OWA-CMCC1-LP) for different numbers of experts and RIMQs (denoted as "Distance between prefs."). These metrics allow us to assess the efficiency of both models in reaching consensus, as well as the differences between their solutions.
Table 7
Comparing (CMCC1) and (OWA-CMCC1-LP) under EVRs using uniformly distributed opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${s_{0.08}}$ Cost (CMCC1) 0.1 0.46 0.67 1.28 2.41 4.82 10.13 24.18 34.23 51.36
Cost (OWA-CMCC1-LP) 0.1 0.47 0.68 1.28 2.41 4.82 10.13 24.18 34.23 51.36
Distance between prefs. 0.01 0.0 0.01 0.02 0.01 0.01 0.02 0.01 0.01 0.02
${s_{0.15}}$ Cost (CMCC1) 0.21 0.38 0.79 0.99 2.23 5.24 9.32 25.16 36.06 49.93
Cost (OWA-CMCC1-LP) 0.22 0.38 0.8 1.01 2.22 5.23 9.32 25.16 36.06 49.92
Distance between prefs. 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.02 0.02 0.02
${p_{0.5}}$ Cost (CMCC1) 0.25 0.45 0.65 1.05 2.24 4.82 10.31 24.26 37.08 48.98
Cost (OWA-CMCC1-LP) 0.26 0.45 0.65 1.05 2.24 4.82 10.31 24.26 37.08 48.98
Distance between prefs. 0.01 0.01 0.01 0.01 0.01 0.01 0.02 0.01 0.02 0.02
${p_{1}}$ Cost (CMCC1) 0.11 0.35 0.77 0.95 2.11 5.63 8.65 24.16 36.49 49.37
Cost (OWA-CMCC1-LP) 0.12 0.36 0.77 0.95 2.11 5.63 8.65 24.15 36.49 49.37
Distance between prefs. 0.0 0.0 0.01 0.01 0.0 0.02 0.01 0.01 0.02 0.02
Table 8
Comparing (CMCC1) and (OWA-CMCC1-LP) under LQs using uniformly distributed opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${Q_{0,0}}$ Cost (CMCC1) 0.32 0.43 0.68 0.94 2.55 5.21 9.25 24.92 36.76 51.33
Cost (OWA-CMCC1-LP) 0.59 1.07 1.7 2.26 6.14 12.37 23.89 60.98 86.94 123.58
Distance between prefs. 0.06 0.07 0.08 0.08 0.08 0.09 0.08 0.09 0.09 0.09
${Q_{0.5,0.8}}$ Cost (CMCC1) 0.18 0.41 0.8 0.99 2.65 4.74 8.84 24.95 35.46 50.12
Cost (OWA-CMCC1-LP) 0.21 0.59 1.06 1.27 3.32 6.33 11.56 32.21 45.0 63.7
Distance between prefs. 0.02 0.04 0.06 0.05 0.07 0.06 0.06 0.06 0.07 0.06
${Q_{0.1,0.9}}$ Cost (CMCC1) 0.28 0.56 0.57 0.92 2.56 4.58 9.75 26.3 34.13 51.66
Cost (OWA-CMCC1-LP) 0.28 0.56 0.57 0.92 2.56 4.58 9.75 26.3 34.13 51.66
Distance between prefs. 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.02 0.02 0.02
${Q_{1,1}}$ Cost (CMCC1) 0.26 0.49 0.59 1.08 2.14 4.38 9.57 23.85 34.44 50.73
Cost (OWA-CMCC1-LP) 0.56 1.1 1.54 2.38 5.65 11.66 23.87 60.36 85.77 123.22
Distance between prefs. 0.07 0.08 0.08 0.09 0.08 0.08 0.09 0.09 0.09 0.09
Table 7 shows that when using EVR-OWA operators (such as ${s_{0.08}},{s_{0.15}},{p_{0.5}}$ and ${p_{1}}$), the results from the (OWA-CMCC1-LP) model closely match those of the (CMCC1) model. The maximum distance between modified preferences is minimal (less than 0.02), and the cost function values remain similar as the number of experts increases. This indicates that EVR-OWA operators, which prioritize intermediate values in the aggregation, perform similarly to the arithmetic mean in uniformly distributed settings.
On the other hand, Table 8 demonstrates that the performance of LQs depends significantly on the choice of quantifier. For instance, due to the resemblance between the arithmetic mean and ${Q_{0.1,0.9}}$, (CMCC1) and (OWA-CMCC1-LP) present similar results in terms of cost and distance between modified opinions. However, more extreme LQs like ${Q_{0,0}}$ (minimum operator), or ${Q_{1,1}}$ (maximum operator) lead to markedly different outcomes, especially as the number of experts increases. In any case, these LQs are less suited for achieving consensus in scenarios with evenly distributed opinions, as they disproportionately emphasize the most extreme values. When using ${Q_{0.5,0.8}}$, the data shows that the aggregation of information is slightly different in each model, and the cost difference increases with the number of experts considered.
Fig. 5
Absolute consensus cost differences between (OWA-CMCC1-LP) and (CMCC1) using uniformly distributed opinions.
To summarize, in the case of uniformly distributed opinions, the use of EVR-OWA operators yields similar costs for the agreed solutions, even in large-scale scenarios (see Fig. 5), and small distances between preferences. However, for RIMQs that differ significantly from the identity function, which is the quantifier that generates the arithmetic mean, the absolute difference between the obtained costs increases with the number of experts, and the distances between the adjusted preferences are larger.
Table 9
Confidence intervals for the normalized distance between the outputs of (OWA-CMCC1-LP) and (CMCC1) with RIMQs using uniformly distributed opinions.
RIMQ Confidence interval Experts
5 20 100 500 1000
${s_{0.08}}$ LB 0.00457 0.00619 0.00726 0.01533 0.01736
UB 0.01266 0.01434 0.0129 0.01957 0.01975
${s_{0.15}}$ LB 0.00578 0.00673 0.0077 0.01563 0.01756
UB 0.01217 0.01429 0.01315 0.01994 0.01991
${p_{0.5}}$ LB 0.0036 0.00466 0.00689 0.01517 0.01725
UB 0.01169 0.01222 0.01239 0.01943 0.01965
${p_{1}}$ LB 0.00456 0.00568 0.00719 0.01531 0.01746
UB 0.01224 0.01371 0.01256 0.01954 0.01983
${Q_{0,0}}$ LB 0.04895 0.07215 0.0806 0.08714 0.08801
UB 0.06291 0.08395 0.08543 0.08994 0.08959
${Q_{0.5,0.8}}$ LB 0.02063 0.04047 0.05301 0.06179 0.06356
UB 0.03231 0.05262 0.05906 0.06589 0.06576
${Q_{0.1,0.9}}$ LB 0.00372 0.00435 0.00732 0.0156 0.01733
UB 0.01156 0.0145 0.01251 0.01986 0.01975
${Q_{1,1}}$ LB 0.04906 0.07185 0.08062 0.08715 0.08801
UB 0.06315 0.08325 0.08548 0.08994 0.08959
To validate our conclusions about the behaviour of the different OWA weights in LSGDM against the arithmetic mean, we compute some statistical measures by randomly generating LSGDM scenarios. In particular, the opinions of the experts are generated randomly following the specified distribution; then, the modified consensus opinions are obtained via the models (CMCC1) and (OWA-CMCC1-LP), and the difference between them is measured using the mean absolute distance. For each LSGDM scenario, we repeat this process 50 times and report the lower bound (LB) and upper bound (UB) of the 95% confidence interval for the mean absolute distance in Table 9. It is evident from Table 9 that, when EVR-OWA is used to form the group opinion, the distance between the consensus opinions obtained by (CMCC1) and (OWA-CMCC1-LP) remains below 0.02, even for a very large group. Further, changing the EVR used to generate the OWA weights does not make any significant difference when there are many participants in the group. Thus, in the case of LSGDM, using the arithmetic mean or EVR-OWA to form the group opinion produces almost identical consensus opinions under the CMCC model.
Analogous to the earlier setup, we simulate LSGDM scenarios to collect the statistical measures (Table 9) when LQs are used to generate the OWA weights that form the group opinions. Specifically, we quantify how much the resulting consensus deviates from the arithmetic-mean-based consensus model, as our earlier results suggest that, in certain cases, it deviates more than with EVR-OWA. Table 9 shows that extreme LQs such as ${Q_{0,0}}$ and ${Q_{1,1}}$ increase the distance between the generated consensus opinions; in fact, it is almost five times larger than with EVR-OWA. However, for LQs such as ${Q_{0.1,0.9}}$, which take into account most of the values in the OWA aggregation, the obtained consensus opinions are almost identical to those obtained with the arithmetic mean, and their distance remains below 0.02 even for a very large group. Therefore, LQs whose orness and entropy are similar to those of the arithmetic mean generate almost the same consensus opinions as the arithmetic mean under the CMCC model.
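The reported intervals can be reproduced with a sketch like the following, which computes the mean absolute distance between two preference vectors and a normal-approximation 95% confidence interval over repeated runs; the per-run distances below are hypothetical placeholders, not the paper's data.

```python
import statistics

def mean_abs_distance(prefs_a, prefs_b):
    """Mean absolute distance between two vectors of consensus opinions."""
    return sum(abs(a - b) for a, b in zip(prefs_a, prefs_b)) / len(prefs_a)

def ci95(samples):
    """Normal-approximation 95% confidence interval for the sample mean."""
    mean = statistics.mean(samples)
    half = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    return mean - half, mean + half          # (LB, UB)

# Hypothetical per-run distances between (CMCC1) and (OWA-CMCC1-LP) outputs:
dists = [mean_abs_distance([0.4, 0.5, 0.6], [0.4 + d, 0.5, 0.6 - d])
         for d in (0.01, 0.02, 0.015, 0.012, 0.018, 0.011, 0.016, 0.014)]
lb, ub = ci95(dists)
print(round(lb, 4), round(ub, 4))
```

With 50 repetitions, as in the paper, the normal approximation is reasonable; a t-quantile could replace 1.96 for smaller samples.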

4.2 Symmetrically Polarized Opinions

This subsection examines the performance of OWA operators in consensus scenarios where expert opinions are symmetrically polarized using the U-quadratic distribution. In this scenario, expert opinions are concentrated around the extremes, with fewer experts holding moderate views.
The experimental results, shown in Tables 10 and 11, indicate that the consensus cost increases significantly compared to the uniformly distributed scenario, especially in LSGDM settings, for both EVR-OWA and LQs. This aligns with the intuition that more polarization in the initial opinions requires more resources in the negotiation process to achieve consensus. Under EVR-OWA, the difference between the consensus opinions obtained from (CMCC1) and (OWA-CMCC1-LP), while larger than in the uniform scenario, remains small.
Table 10
Comparing (CMCC1) and (OWA-CMCC1-LP) under EVRs using symmetrically polarized opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${s_{0.08}}$ Cost (CMCC1) 0.63 1.15 1.93 2.74 7.78 16.16 34.37 85.03 119.15 170.49
Cost (OWA-CMCC1-LP) 0.63 1.16 1.95 2.76 7.75 16.16 34.37 85.03 119.14 170.49
Distance between prefs. 0.01 0.02 0.03 0.03 0.05 0.06 0.08 0.07 0.08 0.08
${s_{0.15}}$ Cost (CMCC1) 0.47 1.28 2.12 3.23 8.12 15.8 33.51 84.94 121.47 173.39
Cost (OWA-CMCC1-LP) 0.47 1.3 2.13 3.21 8.08 15.78 33.5 84.92 121.47 173.38
Distance between prefs. 0.01 0.02 0.03 0.05 0.04 0.05 0.07 0.08 0.08 0.09
${p_{0.5}}$ Cost (CMCC1) 0.49 1.41 1.94 2.87 7.84 16.06 34.06 84.9 118.5 170.22
Cost (OWA-CMCC1-LP) 0.5 1.42 1.93 2.87 7.83 16.04 34.05 84.9 118.49 170.21
Distance between prefs. 0.01 0.01 0.03 0.03 0.05 0.06 0.06 0.07 0.07 0.07
${p_{1}}$ Cost (CMCC1) 0.5 1.22 1.85 2.72 7.88 15.86 33.49 84.21 119.35 172.19
Cost (OWA-CMCC1-LP) 0.51 1.24 1.88 2.72 7.84 15.83 33.46 84.21 119.35 172.19
Distance between prefs. 0.01 0.02 0.02 0.05 0.04 0.05 0.07 0.07 0.08 0.08
Table 11
Comparing (CMCC1) and (OWA-CMCC1-LP) under LQs using symmetrically polarized opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${Q_{0,0}}$ Cost (CMCC1) 0.5 1.16 1.81 2.72 7.54 16.57 33.64 85.02 120.03 172.75
Cost (OWA-CMCC1-LP) 0.73 1.74 2.67 3.9 10.67 22.28 44.6 112.32 157.65 225.86
Distance between prefs. 0.05 0.09 0.07 0.1 0.11 0.12 0.14 0.13 0.14 0.14
${Q_{0.5,0.8}}$ Cost (CMCC1) 0.56 1.12 2.1 2.87 7.79 16.53 33.82 85.48 119.64 173.04
Cost (OWA-CMCC1-LP) 0.66 1.25 2.35 3.12 8.26 17.04 34.51 85.92 120.02 173.96
Distance between prefs. 0.02 0.06 0.09 0.08 0.1 0.12 0.1 0.11 0.11 0.11
${Q_{0.1,0.9}}$ Cost (CMCC1) 0.6 1.15 2.29 2.91 8.13 16.28 33.94 84.74 121.55 171.28
Cost (OWA-CMCC1-LP) 0.6 1.16 2.29 2.91 8.12 16.28 33.94 84.74 121.55 171.28
Distance between prefs. 0.02 0.01 0.04 0.03 0.06 0.09 0.07 0.08 0.09 0.08
${Q_{1,1}}$ Cost (CMCC1) 0.5 1.13 2.04 3.05 7.81 16.13 34.03 84.45 120.58 169.64
Cost (OWA-CMCC1-LP) 0.77 1.59 2.93 4.22 10.68 21.95 45.09 111.76 157.57 224.21
Distance between prefs. 0.06 0.12 0.1 0.13 0.12 0.12 0.13 0.13 0.14 0.13
Regarding LQs, Table 11 shows that ${Q_{0.1,0.9}}$ behaves similarly to EVR-OWA operators under symmetrically polarized opinions. In the other cases, the absolute consensus cost difference between (CMCC1) and (OWA-CMCC1-LP) grows as the quantifier deviates further from the identity function (Fig. 6), and the distance between the preferences obtained from (CMCC1) and (OWA-CMCC1-LP) is higher than in the case of uniformly distributed opinions.
infor599_g006.jpg
Fig. 6
Absolute consensus cost differences between (OWA-CMCC1-LP) and (CMCC1) using symmetrically polarized opinions.
As the initial experiments suggest, for symmetrically polarized opinions the consensus opinions generated via (CMCC1) and (OWA-CMCC1-LP) under EVR-OWA are farther apart than in the case of uniform opinions. To examine this in depth, we further simulate GDM scenarios and compute the corresponding confidence intervals for the mean absolute distance between the preferences generated by the two models (see Table 14), which reaffirm that the distances between the consensus opinions adjusted by EVR-OWA and by the arithmetic mean are greater than in the uniform case: almost 4.5 times larger, although still below 0.095 even with 1000 experts.
We further conduct a similar simulation for the case of LQs and report the confidence intervals for the mean absolute distance between the consensus opinions generated by the two models in Table 14. We found that the adjusted opinions obtained with the extreme LQs ${Q_{0,0}}$ and ${Q_{1,1}}$ lie at similar distances from the arithmetic mean, distances that are also greater than those obtained with EVRs. However, the LQ ${Q_{0.1,0.9}}$, whose orness and entropy are close to those of the arithmetic mean, behaves similarly to the EVRs. Therefore, in the case of symmetrically polarized opinions, the use of EVRs and similar LQs produces almost the same adjusted opinions. Additionally, the adjusted opinions obtained from (CMCC1) and (OWA-CMCC1-LP) with these RIMQs are no further than 0.1 apart, on average.
The analysis of symmetrically polarized opinions reveals that EVR-OWA operators maintain a close alignment with the arithmetic mean. LQ-based operators with an orness measure close to 0.5 also perform well in this context. However, more extreme LQs introduce greater discrepancies, indicating that the OWA aggregation may be far from the arithmetic mean in such cases. These differences become larger in LSGDM scenarios.
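The orness and entropy measures underlying this analysis can be computed directly from a weighting vector. The following sketch (illustrative, with hypothetical helper names) uses Yager's orness, $\frac{1}{n-1}\sum_{i=1}^{n}(n-i)\,w_i$, and the dispersion (entropy) $-\sum_{i=1}^{n} w_i \ln w_i$:

```python
import math

def orness(w):
    """Yager's orness: 1 for the maximum operator, 0 for the minimum,
    and 0.5 for the arithmetic mean and any symmetric weighting vector."""
    n = len(w)
    return sum((n - i) * wi for i, wi in enumerate(w, start=1)) / (n - 1)

def entropy(w):
    """Dispersion -sum_i w_i ln w_i; maximal (ln n) for uniform weights,
    zero when all weight sits on a single position."""
    return -sum(wi * math.log(wi) for wi in w if wi > 0)
```

Uniform weights (the arithmetic mean) attain orness 0.5 and maximal entropy $\ln n$, while the maximum operator has orness 1 and entropy 0, which is why weighting vectors close to uniform reproduce the arithmetic-mean consensus so closely.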

4.3 Asymmetrically Polarized Opinions

In this scenario, experts’ opinions are polarized toward one extreme of the numerical scale. We conduct similar simulation experiments, and the results are reported in Tables 12 and 13. They show that achieving consensus via the (CMCC1) and (OWA-CMCC1-LP) models requires more resources (higher consensus cost) in the negotiation process than in the uniform and symmetrically polarized cases, for both the EVR and LQ weighting mechanisms. We further observe that, with EVR-OWA, the (OWA-CMCC1-LP) model requires consensus resources similar to those of (CMCC1), even in large-scale cases and irrespective of the quantifier, and the distance between the generated consensus opinions is smaller than for symmetrically polarized opinions.
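The exact generating distributions used in our experiments are described earlier (Fig. 4). Purely as an illustrative stand-in, opinions skewed toward one extreme of $[0,1]$ can be drawn from a Beta distribution whose mass concentrates near one endpoint (the function name and parameters below are hypothetical, not those of the paper):

```python
import random

def asymmetric_opinions(rng, n, alpha=8.0, beta=2.0):
    """Sample n opinions in [0, 1] skewed toward the upper extreme.

    Beta(8, 2) has mean 0.8, so most generated opinions lie near 1,
    mimicking an asymmetrically polarized group.
    """
    return [rng.betavariate(alpha, beta) for _ in range(n)]
```

Swapping `alpha` and `beta` would skew the opinions toward the lower extreme instead.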
Table 12
Comparing (CMCC1) and (OWA-CMCC1-LP) under EVRs using asymmetrically polarized opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${s_{0.08}}$ Cost (CMCC1) 1.04 2.48 3.69 5.78 14.53 26.65 51.19 135.75 183.89 261.64
Cost (OWA-CMCC1-LP) 1.04 2.48 3.69 5.77 14.52 26.64 51.19 135.75 183.89 261.64
Distance between prefs. 0.01 0.04 0.02 0.04 0.03 0.03 0.03 0.05 0.05 0.05
${s_{0.15}}$ Cost (CMCC1) 0.53 3.16 4.79 4.86 11.9 28.06 50.31 129.09 185.31 256.71
Cost (OWA-CMCC1-LP) 0.55 3.17 4.79 4.86 11.89 28.06 50.3 129.09 185.31 256.71
Distance between prefs. 0.01 0.09 0.03 0.02 0.03 0.04 0.04 0.04 0.05 0.05
${p_{0.5}}$ Cost (CMCC1) 0.94 2.74 3.72 4.26 13.03 25.95 50.91 131.88 184.14 261.01
Cost (OWA-CMCC1-LP) 0.94 2.74 3.72 4.26 13.03 25.95 50.91 131.88 184.14 261.01
Distance between prefs. 0.0 0.08 0.03 0.04 0.03 0.03 0.03 0.04 0.05 0.05
${p_{1}}$ Cost (CMCC1) 1.52 2.56 3.08 4.89 12.67 27.5 52.46 129.0 179.08 258.13
Cost (OWA-CMCC1-LP) 1.52 2.57 3.08 4.89 12.67 27.5 52.46 129.0 179.08 258.13
Distance between prefs. 0.0 0.02 0.02 0.04 0.03 0.03 0.03 0.03 0.03 0.04
The difference between the consensus resources required by (OWA-CMCC1-LP) and (CMCC1) grows significantly for the extreme LQs (${Q_{0,0}}$ and ${Q_{1,1}}$) (see Fig. 7), and this difference is larger than for symmetrically polarized opinions. Furthermore, ${Q_{0.1,0.9}}$ behaves quite similarly to the arithmetic mean and produces a smaller distance between the consensus preferences generated by the two models. Overall, the distance between the preferences generated by these models is smaller than in the case of symmetrically polarized opinions.
Table 13
Comparing (CMCC1) and (OWA-CMCC1-LP) under LQs using asymmetrically polarized opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${Q_{0,0}}$ Cost (CMCC1) 1.01 1.99 4.75 5.03 11.99 28.23 51.27 134.1 186.8 267.93
Cost (OWA-CMCC1-LP) 1.34 2.62 5.78 6.31 15.43 34.82 64.78 167.47 233.77 334.63
Distance between prefs. 0.07 0.09 0.11 0.1 0.11 0.11 0.11 0.11 0.11 0.11
${Q_{0.5,0.8}}$ Cost (CMCC1) 1.1 1.58 3.93 5.15 13.51 27.75 54.58 128.84 182.83 262.53
Cost (OWA-CMCC1-LP) 1.2 1.76 4.2 5.46 14.26 29.07 56.32 133.99 189.32 270.69
Distance between prefs. 0.04 0.06 0.07 0.07 0.08 0.08 0.08 0.08 0.08 0.08
${Q_{0.1,0.9}}$ Cost (CMCC1) 0.83 1.79 3.63 4.56 12.75 24.34 50.9 126.01 192.61 253.49
Cost (OWA-CMCC1-LP) 0.83 1.79 3.63 4.56 12.75 24.34 50.9 126.01 192.61 253.49
Distance between prefs. 0.01 0.04 0.02 0.03 0.04 0.05 0.04 0.04 0.04 0.04
${Q_{1,1}}$ Cost (CMCC1) 0.51 1.78 3.67 5.6 12.29 22.97 54.66 135.14 179.7 261.25
Cost (OWA-CMCC1-LP) 0.78 2.37 4.65 6.93 15.73 29.83 67.86 168.59 226.31 328.08
Distance between prefs. 0.07 0.1 0.09 0.11 0.11 0.11 0.11 0.11 0.11 0.11
infor599_g007.jpg
Fig. 7
Absolute consensus cost differences between (OWA-CMCC1-LP) and (CMCC1) with asymmetrically polarized opinions.
To reinforce our primary observation that EVR-OWA behaves almost identically to the arithmetic mean even when the initial opinions are asymmetrically distributed, we conducted an analogous simulation study and report the corresponding confidence intervals in Table 15 (see Appendix). We found that, regardless of the quantifier used to generate the EVR-OWA weights, the normalized distances between the opinions generated by (CMCC1) and (OWA-CMCC1-LP) remain below 0.06, which indicates that they produce almost identical opinions. They deviate slightly more than in the case of uniform opinions, but less than for symmetrically distributed opinions.
The initial observation suggests that the LQ-based weighting mechanism for OWA produces consensus opinions that deviate more from the arithmetic mean than EVR-OWA when experts’ initial opinions are asymmetrically distributed. The confidence intervals reported in Table 15 (see Appendix), obtained via simulation, reaffirm this initial hypothesis. We found that when the extreme quantifiers (${Q_{0,0}}$ and ${Q_{1,1}}$) were used to generate the OWA weights, the distance between the obtained adjusted opinions almost doubled compared to the corresponding EVR-OWA-based cases. However, LQs with orness and entropy close to those of the arithmetic mean behave similarly to EVRs.
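The interval construction itself is not restated in the paper; a sketch of one standard way to compute such mean-distance confidence intervals (a normal approximation, matching the Mean/Std/LB/UB layout of Table 15) is:

```python
import math

def mean_ci(xs, z=1.96):
    """Normal-approximation confidence interval for the mean
    (approximately 95% coverage for z = 1.96)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)                      # half-width of the interval
    return mean - half, mean + half
```

Each row of Tables 14 and 15 can then be read as the lower bound (LB) and upper bound (UB) of such an interval computed over the simulated distance samples.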

5 Conclusions

This study provides both theoretical and practical advancements in GDM and its CRPs. Theoretically, we have extended the MCC framework by incorporating OWA operators with classical consensus measures, resulting in the OWA-based CMCC model. This integration allows for a more flexible and interpretable aggregation process, accommodating diverse expert opinions while ensuring an optimal trade-off between consensus quality and opinion modification cost. Practically, we have introduced computationally efficient formulations of the OWA-based CMCC model, making it scalable for LSGDM applications, where traditional methods struggle with NP-hard optimization complexity (RQ1). Additionally, our analysis of different weight allocation methods revealed that OWA-CMCC models, when the OWA operator is tuned with entropy and orness values close to those of the arithmetic mean, produce results comparable to the classic CMCC model, which uses the arithmetic mean for aggregation, even in LSGDM. However, as the entropy and orness values deviate further from those of the arithmetic mean, the consensus outcomes differ significantly (RQ2, RQ3).
The findings suggest that the use of OWA operators for calculating collective opinions in CRPs can be contentious if the weights are not properly tuned. In this sense, EVR-OWA operators, which offer unbiased aggregations without significant information loss, provide consensus results similar to those produced by the arithmetic mean. In contrast, using more extreme OWA weights, such as those associated with the minimum or maximum operators, undermines the consensus process by overemphasizing the most extreme opinions, making the aggregation unfit for achieving meaningful consensus.
Regarding the limitations of our proposal, we would like to highlight the computational complexity associated with solving the optimization problem, especially when dealing with a very large number of experts (millions). While our reformulated versions improve efficiency compared to traditional OWA-based MCC models, further scalability enhancements, such as heuristic approaches or parallel computing techniques, could be explored in future research. Additionally, we aim to investigate other aggregation operators that have been applied in GDM but have not yet been adequately tested for their impact in these specific decision scenarios.

A Appendix

Table 14
Confidence Intervals for the normalized distance between the outputs of models (OWA-CMCC1-LP) and (CMCC1) with RIMQs using symmetrically distributed opinions.
RIMQ Confidence interval Experts
5 20 100 500 1000
${s_{0.08}}$ LB 0.00699 0.02321 0.0418 0.07826 0.08312
UB 0.0135 0.03918 0.05442 0.09289 0.09246
${s_{0.15}}$ LB 0.00985 0.02538 0.04292 0.07941 0.08484
UB 0.01509 0.0496 0.05491 0.09456 0.09436
${p_{0.5}}$ LB 0.0052 0.02164 0.04118 0.0774 0.08308
UB 0.01267 0.03716 0.0541 0.0891 0.09284
${p_{1}}$ LB 0.00724 0.02347 0.04153 0.0798 0.08478
UB 0.01366 0.04018 0.05404 0.09137 0.09381
${Q_{0,0}}$ LB 0.04749 0.08615 0.11216 0.13223 0.13616
UB 0.05948 0.10231 0.1216 0.13709 0.13857
${Q_{0.5,0.8}}$ LB 0.02322 0.06444 0.09513 0.10697 0.11081
UB 0.032 0.08674 0.10672 0.1141 0.11599
${Q_{0.1,0.9}}$ LB 0.00557 0.02207 0.04229 0.07677 0.08342
UB 0.01262 0.03787 0.05506 0.08763 0.09256
${Q_{1,1}}$ LB 0.04832 0.08642 0.11178 0.13226 0.13619
UB 0.06079 0.10828 0.12097 0.13717 0.1386
Table 15
Confidence Intervals for the normalized distance between the outputs of models (OWA-CMCC1-LP) and (CMCC1) with RIMQs using asymmetrically distributed opinions.
RIMQ Confidence measure Experts
5 20 100 500 1000
${s_{0.08}}$ Mean 0.01057 0.02883 0.03168 0.04716 0.05275
Std 0.0149 0.03437 0.01491 0.00863 0.00523
LB CI 0.00634 0.01907 0.02745 0.04471 0.05126
UB CI 0.01481 0.0386 0.03592 0.04962 0.05424
${s_{0.15}}$ Mean 0.01154 0.02712 0.03205 0.04011 0.05212
Std 0.01218 0.02542 0.01385 0.00893 0.00524
LB CI 0.00808 0.0199 0.02811 0.03758 0.05063
UB CI 0.015 0.03435 0.03598 0.04265 0.05361
${p_{0.5}}$ Mean 0.009 0.02857 0.02948 0.03889 0.0477
Std 0.01496 0.03133 0.01499 0.00878 0.00513
LB CI 0.00475 0.01966 0.02522 0.03639 0.04624
UB CI 0.01325 0.03747 0.03374 0.04138 0.04916
${p_{1}}$ Mean 0.00949 0.03093 0.02895 0.03476 0.04489
Std 0.01404 0.03349 0.01397 0.00744 0.00666
LB CI 0.0055 0.02141 0.02498 0.03265 0.043
UB CI 0.01348 0.04044 0.03292 0.03688 0.04679

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions, which have led to an improved version of the current manuscript.

Footnotes

1 The time limit sets a bound on the maximum total time that the solver may spend finding a solution. It is implemented by setting the Gurobi solver parameter TimeLimit to a given time in seconds.

References

 
Beliakov, G., Bustince, H., Calvo, T. (2016). A Practical Guide to Averaging Functions. Springer.
 
Ben-Arieh, D., Easton, T. (2007). Multi-criteria group consensus under linear cost opinion elasticity. Decision Support Systems, 43(3), 713–721.
 
Bezanson, J., Edelman, A., Karpinski, S., Shah, V. (2017). Julia: a fresh approach to numerical computing. SIAM Review, 59(1), 65–98.
 
Butler, C.T.L., Rothstein, A. (2006). On Conflict and Consensus: A Handbook on Formal Consensus Decision Making. Takoma Park.
 
Chen, Z.S., Yang, L.L., Chin, K.S., Yang, Y., Pedrycz, W., Chang, J.P., Martínez, L., Skibniewski, M.J. (2021). Sustainable building material selection: an integrated multi-criteria large group decision making framework. Applied Soft Computing, 113, 107903.
 
Dunning, I., Huchette, J., Lubin, M. (2017). JuMP: a modeling language for mathematical optimization. SIAM Review, 59(2), 295–320.
 
Galand, L., Spanjaard, O. (2012). Exact algorithms for OWA-optimization in multiobjective spanning tree problems. Computers & Operations Research, 39(7), 1540–1554.
 
García-Zamora, D., Labella, Á., Rodríguez, R.M., Martínez, L. (2021). Nonlinear preferences in group decision-making. Extreme values amplifications and extreme values reductions. International Journal of Intelligent Systems, 36(11), 6581–6612.
 
García-Zamora, D., Dutta, B., Massanet, S., Riera, J.V., Martínez, L. (2022). Relationship between the distance consensus and the consensus degree in Comprehensive Minimum Cost Consensus models: a polytope-based analysis. European Journal of Operational Research, 306(2), 764–776.
 
García-Zamora, D., Dutta, B., Figueira, J.R., Martínez, L. (2024). The deck of cards method to build interpretable fuzzy sets in decision-making. European Journal of Operational Research, 319(1), 246–262.
 
García-Zamora, D., Dutta, B., Jin, L., Chen, Z.-S., Martínez, L. (2025). A data-driven large-scale group decision-making framework for managing ratings and text reviews. Expert Systems with Applications, 263, 125726.
 
García-Zamora, D., Labella, Á., Rodríguez, R.M., Martínez, L. (2022). Symmetric weights for OWA operators prioritizing intermediate values. The EVR-OWA operator. Information Sciences, 584, 583–602.
 
García-Zamora, D., Dutta, B., Labella, A., Martínez, L. (2023). A Fuzzy-set based formulation for minimum cost consensus models. Computers & Industrial Engineering, 181, 109295. https://www.sciencedirect.com/science/article/pii/S0360835223003194.
 
Guo, L., Zhan, J., Kou, G. (2024). Consensus reaching process using personalized modification rules in large-scale group decision-making. Information Fusion, 103, 102138.
 
Gurobi Optimization, LLC (2022). Gurobi Optimizer Reference Manual. https://www.gurobi.com.
 
Herrera-Viedma, E., Herrera, F., Chiclana, F. (2002). A consensus model for multiperson decision making with different preference structures. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 32, 394–402.
 
Labella, Á., Liu, H., Rodríguez, R.M., Martínez, L. (2020). A cost consensus metric for Consensus Reaching Processes based on a comprehensive minimum cost model. European Journal of Operational Research, 281, 316–331.
 
Liu, Y., Li, Y., Liang, H., Dong, Y. (2023). Strategic experts’ weight manipulation in 2-rank consensus reaching in group decision making. Expert Systems with Applications, 216, 119432.
 
Palomares, I., Martínez, L., Herrera, F. (2014). A consensus model to detect and manage noncooperative behaviors in large-scale group decision making. IEEE Transactions on Fuzzy Systems, 22(3), 516–530.
 
Qu, S., Wei, J., Wang, Q., Li, Y., Jin, X., Chaib, L. (2023). Robust minimum cost consensus models with various individual preference scenarios under unit adjustment cost uncertainty. Information Fusion, 89, 510–526.
 
Sirbiladze, G. (2021a). New view of fuzzy aggregations. Part I: general information structure for decision-making models. Journal of Fuzzy Extension and Applications, 2(2), 130–143.
 
Sirbiladze, G. (2021b). New view of fuzzy aggregations. Part III: extensions of the FPOWA operator in the problem of political management. Journal of Fuzzy Extension and Applications, 2(4), 321–333.
 
Wang, Y.-M., Song, H.-H., Dutta, B., García-Zamora, D., Martínez, L. (2024). Consensus reaching in LSGDM: Overlapping community detection and bounded confidence-driven feedback mechanism. Information Sciences, 679, 121104.
 
Xu, W., Chen, X., Dong, Y., Chiclana, F. (2021). Impact of decision rules and non-cooperative behaviors on minimum consensus cost in group decision making. Group Decision and Negotiation, 30, 1239–1260.
 
Yager, R. (1988). On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Transactions on Systems, Man, and Cybernetics, 18(1), 183–190.
 
Yager, R. (1993). Families of OWA operators. Fuzzy Sets and Systems, 59(2), 125–148.
 
Yager, R. (1996). Quantifier guided aggregation using OWA operators. International Journal of Intelligent Systems, 11(1), 49–73.
 
Zhang, B., Dong, Y., Xu, Y. (2013). Maximum expert consensus models with linear cost function and aggregation operators. Computers & Industrial Engineering, 66(1), 147–157.
 
Zhang, G., Dong, Y., Xu, Y., Li, H. (2011). Minimum-cost consensus models under aggregation operators. IEEE Transactions on Systems, Man and Cybernetics-Part A: Systems and Humans, 41(6), 1253–1261.
 
Zhang, R., Huang, J., Xu, Y., Herrera-Viedma, E. (2022). Consensus models with aggregation operators for minimum quadratic cost in group decision making. Applied Intelligence, 53, 1–21.

Biographies

García-Zamora Diego
dgzamora@ujaen.es

D. García-Zamora obtained a bachelor’s degree in mathematics from the University of Granada in 2016. In addition, he received a master’s degree in physics and mathematics in 2017 and a master’s degree in education in 2018, both from the University of Granada. He received his PhD in information and communication technologies at the University of Jaén in 2023. Nowadays, he is an assistant professor at the University of Jaén and focuses his research on fuzzy sets, decision-making, and optimization. He obtained the contract Formacion de Profesorado Universitario, granted by the Spanish Ministry of Science, Innovation, and Universities. He has published works in high-quality journals such as Fuzzy Sets and Systems, European Journal of Operational Research, and IEEE Transactions on Fuzzy Systems. Furthermore, he has participated in several international conferences, obtaining the Best Student Paper Award at the 16th International Conference on Intelligent Systems and Knowledge Engineering (ISKE 2021) and at the 15th International FLINS Conference on Machine Learning, Multi-agent and Cyber physical Systems (FLINS 2022), and the Best Paper Award at the 18th International Conference on Intelligent Systems and Knowledge Engineering (ISKE2023).

Dutta Bapi
bdutta@ujaen.es

B. Dutta is a Ramón y Cajal researcher at the University of Jaén. His research focuses on decision-making, soft computing, simulation, optimization, and machine learning, with particular emphasis on developing advanced computational models and methodologies to address complex real-world problems.

Labella Álvaro
alabella@ujaen.es

A. Labella holds a PhD in computer science (2021) from the University of Jaén, Spain, where he currently works as an associate professor and researcher in the SINBAD2 group. His research focuses on decision-making, computing with words, and software application development. He has authored 34 journal articles, 4 book chapters, and 46 conference contributions. He is involved in two research networks (AIDA and the Spanish Thematic Network on Recommender Systems) and has participated in 3 funded research projects. He has completed research stays in Greece, the UK, and Northern Ireland, the latter supported by a José Castillejo postdoctoral fellowship. He has co-developed 3 decision support systems with registered intellectual property and has organized several Special Issues and conference sessions. His work has earned him multiple awards, including the Ada Lovelace Award (2019) and Best Paper Awards at international conferences held in Istanbul, Dalian, and Fuzhou.

Martínez Luis
martin@ujaen.es

L. Martínez (senior member, IEEE) is a full professor with the Department of Computer Science, University of Jaén, Spain. He is also a visiting professor with the University of Technology Sydney, the University of Portsmouth (Isambard Kingdom Brunel Fellowship Scheme), and Wuhan University of Technology (Chutian Scholar). He has been the leading researcher in 16 research and development projects, published more than 190 papers in journals indexed by SCI, and made more than 200 contributions to international/national conferences related to his areas. His research interests include multi-criteria decision-making, fuzzy logic-based systems, computing with words, and recommender systems. He has been an IFSA Fellow since 2021 and is a senior member of the European Society for Fuzzy Logic and Technology. He received the IEEE Transactions on Fuzzy Systems Outstanding Paper Award in 2008 and 2012 (bestowed in 2011 and 2015, respectively). He was classified as a Highly Cited Researcher 2017–2021 in computer sciences. He is the co-editor-in-chief of the International Journal of Computational Intelligence Systems and an associate editor of Information Sciences, Knowledge-Based Systems, and Information Fusion.



Copyright
© 2025 Vilnius University
Open access article under the CC BY license.

Keywords
Minimum Cost Consensus model, OWA operators, Large-Scale Group Decision-Making, consensus reaching

Funding
Bapi Dutta acknowledges the support of the Spanish Ministry of Science, Innovation and Universities, and the Spanish State Research Agency through the Ramón y Cajal Research grant (RYC2023-045020-I), Spain.

infor599_g001.jpg
Fig. 1
GDM resolution scheme.
infor599_g002.jpg
Fig. 2
Feedback vs. Automatic CRP.
infor599_g003.jpg
Fig. 3
Sketch of an EVR.
infor599_g004.jpg
Fig. 4
Probability distributions for generating random opinions.
infor599_g005.jpg
Fig. 5
Absolute consensus cost differences between (OWA-CMCC1-LP) and (CMCC1) using uniformly distributed opinions.
Table 1
Time cost BLP-based models.
Consensus measure Models Experts
5 10 15 20
${\kappa _{1}}$ (CMCC1) 0.016 0.029 0.032 0.035
(OWA-CMCC1-BLP) 0.031 ${180^{\ast }}$ ${660^{\ast }}$ ${1500^{\ast }}$
${\kappa _{2}}$ (CMCC2) 0.017 0.031 0.034 0.036
(OWA-CMCC2-BLP) 0.037 ${180^{\ast }}$ ${660^{\ast }}$ ${1500^{\ast }}$
Table 2
The time costs for different values of ε and ${\mu _{0}}$ in (OWA-CMCC1-BLP).
ε ${\mu _{0}}$
0.70 0.75 0.80 0.85 0.90 0.95
0.30 0.031 0.032 0.019 0.032 0.063 0.087
0.25 0.034 0.033 0.078 0.032 0.063 0.041
0.20 0.031 0.063 0.031 0.032 0.031 0.063
0.15 0.033 0.031 0.032 0.033 0.033 0.032
0.10 0.046 0.033 0.047 0.034 0.047 0.047
0.05 0.032 0.033 0.047 0.032 0.049 0.034
Table 3
Time cost in linearized models.
Consensus measure Models Experts
10 20 50 100 200 500 1000
${\kappa _{1}}$ (CMCC1) 0.0014 0.0017 0.0027 0.0044 0.0083 0.0240 0.0627
(OWA-CMCC1-LP) 0.0015 0.0018 0.0029 0.0049 0.0093 0.0278 0.0746
${\kappa _{2}}$ (CMCC2) 0.0016 0.0032 0.0149 0.0613 0.3000 3.1000 13.6070
(OWA-CMCC2-LP) 0.0019 0.0042 0.0258 0.1746 1.8615 88.0428 5179.2480
Table 4
Time cost between BLP and LP-based models for decreasing OWA weight.
Models Measure Experts
5 7 10 15 20
(OWA-CMCC1-BLP) cost 46.091 41.053 59.677 – –
time 0.043 0.306 79.654 200∗ 400∗
(OWA-CMCC1-DW) cost 46.091 41.053 59.677 67.447 157.974
time 0.003 0.003 0.004 0.006 0.007
Table 5
Time cost LP-based models for decreasing OWA weight.
Models Experts
10 20 50 100 200 500 1000
(OWA-CMCC1-DW) 0.0025 0.0058 0.0502 0.2499 2.4461 117.9347 4971.5116
Table 6
The time-costs with different values of ε and ${\mu _{0}}$ in (OWA-CMCC1-LP).
ε ${\mu _{0}}$
0.70 0.75 0.80 0.85 0.90 0.95
0.30 0.025 0.003 0.003 0.023 0.003 0.003
0.25 0.003 0.003 0.003 0.004 0.003 0.003
0.20 0.002 0.003 0.003 0.003 0.003 0.023
0.15 0.003 0.024 0.003 0.023 0.003 0.003
0.10 0.022 0.003 0.002 0.024 0.003 0.003
0.05 0.023 0.003 0.003 0.022 0.003 0.003
Table 7
Comparing (CMCC1) and (OWA-CMCC1-LP) under EVRs using uniformly distributed opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${s_{0.08}}$ Cost (CMCC1) 0.1 0.46 0.67 1.28 2.41 4.82 10.13 24.18 34.23 51.36
Cost (OWA-CMCC1-LP) 0.1 0.47 0.68 1.28 2.41 4.82 10.13 24.18 34.23 51.36
Distance between prefs. 0.01 0.0 0.01 0.02 0.01 0.01 0.02 0.01 0.01 0.02
${s_{0.15}}$ Cost (CMCC1) 0.21 0.38 0.79 0.99 2.23 5.24 9.32 25.16 36.06 49.93
Cost (OWA-CMCC1-LP) 0.22 0.38 0.8 1.01 2.22 5.23 9.32 25.16 36.06 49.92
Distance between prefs. 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.02 0.02 0.02
${p_{0.5}}$ Cost (CMCC1) 0.25 0.45 0.65 1.05 2.24 4.82 10.31 24.26 37.08 48.98
Cost (OWA-CMCC1-LP) 0.26 0.45 0.65 1.05 2.24 4.82 10.31 24.26 37.08 48.98
Distance between prefs. 0.01 0.01 0.01 0.01 0.01 0.01 0.02 0.01 0.02 0.02
${p_{1}}$ Cost (CMCC1) 0.11 0.35 0.77 0.95 2.11 5.63 8.65 24.16 36.49 49.37
Cost (OWA-CMCC1-LP) 0.12 0.36 0.77 0.95 2.11 5.63 8.65 24.15 36.49 49.37
Distance between prefs. 0.0 0.0 0.01 0.01 0.0 0.02 0.01 0.01 0.02 0.02
Table 8
Comparing (CMCC1) and (OWA-CMCC1-LP) under LQs using uniformly distributed opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${Q_{0,0}}$ Cost (CMCC1) 0.32 0.43 0.68 0.94 2.55 5.21 9.25 24.92 36.76 51.33
Cost (OWA-CMCC1-LP) 0.59 1.07 1.7 2.26 6.14 12.37 23.89 60.98 86.94 123.58
Distance between prefs. 0.06 0.07 0.08 0.08 0.08 0.09 0.08 0.09 0.09 0.09
${Q_{0.5,0.8}}$ Cost (CMCC1) 0.18 0.41 0.8 0.99 2.65 4.74 8.84 24.95 35.46 50.12
Cost (OWA-CMCC1-LP) 0.21 0.59 1.06 1.27 3.32 6.33 11.56 32.21 45.0 63.7
Distance between prefs. 0.02 0.04 0.06 0.05 0.07 0.06 0.06 0.06 0.07 0.06
${Q_{0.1,0.9}}$ Cost (CMCC1) 0.28 0.56 0.57 0.92 2.56 4.58 9.75 26.3 34.13 51.66
Cost (OWA-CMCC1-LP) 0.28 0.56 0.57 0.92 2.56 4.58 9.75 26.3 34.13 51.66
Distance between prefs. 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.02 0.02 0.02
${Q_{1,1}}$ Cost (CMCC1) 0.26 0.49 0.59 1.08 2.14 4.38 9.57 23.85 34.44 50.73
Cost (OWA-CMCC1-LP) 0.56 1.1 1.54 2.38 5.65 11.66 23.87 60.36 85.77 123.22
Distance between prefs. 0.07 0.08 0.08 0.09 0.08 0.08 0.09 0.09 0.09 0.09
Table 9
Confidence Intervals for the normalized distance between the outputs of (OWA-CMCC1-LP) and (CMCC1) with RIMQs using uniformly distributed opinions.
RIMQ Confidence interval Experts
5 20 100 500 1000
${s_{0.08}}$ LB 0.00457 0.00619 0.00726 0.01533 0.01736
UB 0.01266 0.01434 0.0129 0.01957 0.01975
${s_{0.15}}$ LB 0.00578 0.00673 0.0077 0.01563 0.01756
UB 0.01217 0.01429 0.01315 0.01994 0.01991
${p_{0.5}}$ LB 0.0036 0.00466 0.00689 0.01517 0.01725
UB 0.01169 0.01222 0.01239 0.01943 0.01965
${p_{1}}$ LB 0.00456 0.00568 0.00719 0.01531 0.01746
UB 0.01224 0.01371 0.01256 0.01954 0.01983
${Q_{0,0}}$ LB 0.04895 0.07215 0.0806 0.08714 0.08801
UB 0.06291 0.08395 0.08543 0.08994 0.08959
${Q_{0.5,0.8}}$ LB 0.02063 0.04047 0.05301 0.06179 0.06356
UB 0.03231 0.05262 0.05906 0.06589 0.06576
${Q_{0.1,0.9}}$ LB 0.00372 0.00435 0.00732 0.0156 0.01733
UB 0.01156 0.0145 0.01251 0.01986 0.01975
${Q_{1,1}}$ LB 0.04906 0.07185 0.08062 0.08715 0.08801
UB 0.06315 0.08325 0.08548 0.08994 0.08959
Table 10
Comparing (CMCC1) and (OWA-CMCC1-LP) under EVRs using symmetrically polarized opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${s_{0.08}}$ Cost (CMCC1) 0.63 1.15 1.93 2.74 7.78 16.16 34.37 85.03 119.15 170.49
Cost (OWA-CMCC1-LP) 0.63 1.16 1.95 2.76 7.75 16.16 34.37 85.03 119.14 170.49
Distance between prefs. 0.01 0.02 0.03 0.03 0.05 0.06 0.08 0.07 0.08 0.08
${s_{0.15}}$ Cost (CMCC1) 0.47 1.28 2.12 3.23 8.12 15.8 33.51 84.94 121.47 173.39
Cost (OWA-CMCC1-LP) 0.47 1.3 2.13 3.21 8.08 15.78 33.5 84.92 121.47 173.38
Distance between prefs. 0.01 0.02 0.03 0.05 0.04 0.05 0.07 0.08 0.08 0.09
${p_{0.5}}$ Cost (CMCC1) 0.49 1.41 1.94 2.87 7.84 16.06 34.06 84.9 118.5 170.22
Cost (OWA-CMCC1-LP) 0.5 1.42 1.93 2.87 7.83 16.04 34.05 84.9 118.49 170.21
Distance between prefs. 0.01 0.01 0.03 0.03 0.05 0.06 0.06 0.07 0.07 0.07
${p_{1}}$ Cost (CMCC1) 0.5 1.22 1.85 2.72 7.88 15.86 33.49 84.21 119.35 172.19
Cost (OWA-CMCC1-LP) 0.51 1.24 1.88 2.72 7.84 15.83 33.46 84.21 119.35 172.19
Distance between prefs. 0.01 0.02 0.02 0.05 0.04 0.05 0.07 0.07 0.08 0.08
Table 11
Comparing (CMCC1) and (OWA-CMCC1-LP) under LQs using symmetrically polarized opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${Q_{0,0}}$ Cost (CMCC1) 0.5 1.16 1.81 2.72 7.54 16.57 33.64 85.02 120.03 172.75
Cost (OWA-CMCC1-LP) 0.73 1.74 2.67 3.9 10.67 22.28 44.6 112.32 157.65 225.86
Distance between prefs. 0.05 0.09 0.07 0.1 0.11 0.12 0.14 0.13 0.14 0.14
${Q_{0.5,0.8}}$ Cost (CMCC1) 0.56 1.12 2.1 2.87 7.79 16.53 33.82 85.48 119.64 173.04
Cost (OWA-CMCC1-LP) 0.66 1.25 2.35 3.12 8.26 17.04 34.51 85.92 120.02 173.96
Distance between prefs. 0.02 0.06 0.09 0.08 0.1 0.12 0.1 0.11 0.11 0.11
${Q_{0.1,0.9}}$ Cost (CMCC1) 0.6 1.15 2.29 2.91 8.13 16.28 33.94 84.74 121.55 171.28
Cost (OWA-CMCC1-LP) 0.6 1.16 2.29 2.91 8.12 16.28 33.94 84.74 121.55 171.28
Distance between prefs. 0.02 0.01 0.04 0.03 0.06 0.09 0.07 0.08 0.09 0.08
${Q_{1,1}}$ Cost (CMCC1) 0.5 1.13 2.04 3.05 7.81 16.13 34.03 84.45 120.58 169.64
Cost (OWA-CMCC1-LP) 0.77 1.59 2.93 4.22 10.68 21.95 45.09 111.76 157.57 224.21
Distance between prefs. 0.06 0.12 0.1 0.13 0.12 0.12 0.13 0.13 0.14 0.13
Table 12
Comparing (CMCC1) and (OWA-CMCC1-LP) under EVRs using asymmetrically polarized opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${s_{0.08}}$ Cost (CMCC1) 1.04 2.48 3.69 5.78 14.53 26.65 51.19 135.75 183.89 261.64
Cost (OWA-CMCC1-LP) 1.04 2.48 3.69 5.77 14.52 26.64 51.19 135.75 183.89 261.64
Distance between prefs. 0.01 0.04 0.02 0.04 0.03 0.03 0.03 0.05 0.05 0.05
${s_{0.15}}$ Cost (CMCC1) 0.53 3.16 4.79 4.86 11.9 28.06 50.31 129.09 185.31 256.71
Cost (OWA-CMCC1-LP) 0.55 3.17 4.79 4.86 11.89 28.06 50.3 129.09 185.31 256.71
Distance between prefs. 0.01 0.09 0.03 0.02 0.03 0.04 0.04 0.04 0.05 0.05
${p_{0.5}}$ Cost (CMCC1) 0.94 2.74 3.72 4.26 13.03 25.95 50.91 131.88 184.14 261.01
Cost (OWA-CMCC1-LP) 0.94 2.74 3.72 4.26 13.03 25.95 50.91 131.88 184.14 261.01
Distance between prefs. 0.0 0.08 0.03 0.04 0.03 0.03 0.03 0.04 0.05 0.05
${p_{1}}$ Cost (CMCC1) 1.52 2.56 3.08 4.89 12.67 27.5 52.46 129.0 179.08 258.13
Cost (OWA-CMCC1-LP) 1.52 2.57 3.08 4.89 12.67 27.5 52.46 129.0 179.08 258.13
Distance between prefs. 0.0 0.02 0.02 0.04 0.03 0.03 0.03 0.03 0.03 0.04
Table 13
Comparing (CMCC1) and (OWA-CMCC1-LP) under LQs using asymmetrically polarized opinions in terms of consensus reaching cost and the distance between the obtained consensus opinions.
RIMQ Measured variables Experts
5 10 15 20 50 100 200 500 700 1000
${Q_{0,0}}$ Cost (CMCC1) 1.01 1.99 4.75 5.03 11.99 28.23 51.27 134.1 186.8 267.93
Cost (OWA-CMCC1-LP) 1.34 2.62 5.78 6.31 15.43 34.82 64.78 167.47 233.77 334.63
Distance between prefs. 0.07 0.09 0.11 0.1 0.11 0.11 0.11 0.11 0.11 0.11
${Q_{0.5,0.8}}$ Cost (CMCC1) 1.1 1.58 3.93 5.15 13.51 27.75 54.58 128.84 182.83 262.53
Cost (OWA-CMCC1-LP) 1.2 1.76 4.2 5.46 14.26 29.07 56.32 133.99 189.32 270.69
Distance between prefs. 0.04 0.06 0.07 0.07 0.08 0.08 0.08 0.08 0.08 0.08
${Q_{0.1,0.9}}$ Cost (CMCC1) 0.83 1.79 3.63 4.56 12.75 24.34 50.9 126.01 192.61 253.49
Cost (OWA-CMCC1-LP) 0.83 1.79 3.63 4.56 12.75 24.34 50.9 126.01 192.61 253.49
Distance between prefs. 0.01 0.04 0.02 0.03 0.04 0.05 0.04 0.04 0.04 0.04
${Q_{1,1}}$ Cost (CMCC1) 0.51 1.78 3.67 5.6 12.29 22.97 54.66 135.14 179.7 261.25
Cost (OWA-CMCC1-LP) 0.78 2.37 4.65 6.93 15.73 29.83 67.86 168.59 226.31 328.08
Distance between prefs. 0.07 0.1 0.09 0.11 0.11 0.11 0.11 0.11 0.11 0.11
Table 14
Confidence Intervals for the normalized distance between the outputs of models (OWA-CMCC1-LP) and (CMCC1) with RIMQs using symmetrically polarized opinions.
RIMQ Confidence interval Experts
5 20 100 500 1000
${s_{0.08}}$ LB 0.00699 0.02321 0.0418 0.07826 0.08312
UB 0.0135 0.03918 0.05442 0.09289 0.09246
${s_{0.15}}$ LB 0.00985 0.02538 0.04292 0.07941 0.08484
UB 0.01509 0.0496 0.05491 0.09456 0.09436
${p_{0.5}}$ LB 0.0052 0.02164 0.04118 0.0774 0.08308
UB 0.01267 0.03716 0.0541 0.0891 0.09284
${p_{1}}$ LB 0.00724 0.02347 0.04153 0.0798 0.08478
UB 0.01366 0.04018 0.05404 0.09137 0.09381
${Q_{0,0}}$ LB 0.04749 0.08615 0.11216 0.13223 0.13616
UB 0.05948 0.10231 0.1216 0.13709 0.13857
${Q_{0.5,0.8}}$ LB 0.02322 0.06444 0.09513 0.10697 0.11081
UB 0.032 0.08674 0.10672 0.1141 0.11599
${Q_{0.1,0.9}}$ LB 0.00557 0.02207 0.04229 0.07677 0.08342
UB 0.01262 0.03787 0.05506 0.08763 0.09256
${Q_{1,1}}$ LB 0.04832 0.08642 0.11178 0.13226 0.13619
UB 0.06079 0.10828 0.12097 0.13717 0.1386
Table 15
Confidence Intervals for the normalized distance between the outputs of models (OWA-CMCC1-LP) and (CMCC1) with RIMQs using asymmetrically polarized opinions.
RIMQ Confidence measure Experts
5 20 100 500 1000
${s_{0.08}}$ Mean 0.01057 0.02883 0.03168 0.04716 0.05275
Std 0.0149 0.03437 0.01491 0.00863 0.00523
LB CI 0.00634 0.01907 0.02745 0.04471 0.05126
UB CI 0.01481 0.0386 0.03592 0.04962 0.05424
${s_{0.15}}$ Mean 0.01154 0.02712 0.03205 0.04011 0.05212
Std 0.01218 0.02542 0.01385 0.00893 0.00524
LB CI 0.00808 0.0199 0.02811 0.03758 0.05063
UB CI 0.015 0.03435 0.03598 0.04265 0.05361
${p_{0.5}}$ Mean 0.009 0.02857 0.02948 0.03889 0.0477
Std 0.01496 0.03133 0.01499 0.00878 0.00513
LB CI 0.00475 0.01966 0.02522 0.03639 0.04624
UB CI 0.01325 0.03747 0.03374 0.04138 0.04916
${p_{1}}$ Mean 0.00949 0.03093 0.02895 0.03476 0.04489
Std 0.01404 0.03349 0.01397 0.00744 0.00666
LB CI 0.0055 0.02141 0.02498 0.03265 0.043
UB CI 0.01348 0.04044 0.03292 0.03688 0.04679
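The normal-approximation confidence intervals reported in the tables above can be reproduced from the per-run normalized distances with a short routine like the following. This is a minimal sketch: the sample values are illustrative, not taken from the experiments, and a 95% normal approximation (z = 1.96) is assumed.

```python
import math

def mean_std_ci(samples, z=1.96):
    """Return (mean, sample std, CI lower bound, CI upper bound)
    using a normal approximation for the mean of the samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # unbiased variance
    std = math.sqrt(var)
    half = z * std / math.sqrt(n)  # half-width of the confidence interval
    return mean, std, mean - half, mean + half

# Illustrative normalized distances between the outputs of two models
distances = [0.010, 0.012, 0.009, 0.011, 0.013, 0.008, 0.010, 0.012]
mean, std, lb, ub = mean_std_ci(distances)
print(f"Mean={mean:.5f} Std={std:.5f} CI=[{lb:.5f}, {ub:.5f}]")
```

The LB/UB rows of Tables 9, 14 and 15 correspond to the last two returned values, computed over the simulation runs of each configuration.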
Theorem 1.
The model (OWA-CMCC1) can be transformed into an equivalent BLP problem with the help of the following linear transformations:
\[\begin{aligned}{}& {\overline{o}_{k}}-{o_{k}}={u_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{o_{k}}|={v_{k}},\hspace{1em}k=1,2,\dots ,m,\\ {} & {\overline{o}_{k}}-\overline{g}={y_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-\overline{g}|={z_{k}},\hspace{1em}k=1,2,\dots ,m,\end{aligned}\]
resulting in the optimization problem:
(OWA-CMCC1-BLP)
\[\begin{aligned}{}& \min {\sum \limits_{k=1}^{m}}{c_{k}}{v_{k}},\end{aligned}\]
(1)
\[\begin{aligned}{}& \textit{s.t.}\left\{\begin{array}{l@{\hskip4.0pt}l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}{\omega _{k}}{t_{k}},\hspace{2em}& \text{(a)}\\ {} {t_{k}}\leqslant {\overline{o}_{d}}+M{G_{kd}},\hspace{1em}k,d=1,2,\dots ,m,\hspace{2em}& \text{(b)}\\ {} {t_{k}}\geqslant {\overline{o}_{d}}-M{F_{kd}},\hspace{1em}k,d=1,2,\dots ,m,\hspace{2em}& \text{(c)}\\ {} {\textstyle\textstyle\sum _{d=1}^{m}}{G_{kd}}\leqslant m-k,\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(d)}\\ {} {\textstyle\textstyle\sum _{d=1}^{m}}{F_{kd}}\leqslant k-1,\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(e)}\\ {} {\overline{o}_{k}}-{o_{k}}={u_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(f)}\\ {} {u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(g)}\\ {} -{u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(h)}\\ {} {\overline{o}_{k}}-\overline{g}={y_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(i)}\\ {} {y_{k}}\leqslant {z_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(j)}\\ {} -{y_{k}}\leqslant {z_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(k)}\\ {} {z_{k}}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(l)}\\ {} 1-{\textstyle\textstyle\sum _{k=1}^{m}}{w_{k}}{z_{k}}\geqslant {\mu _{0}},\hspace{2em}& \text{(m)}\\ {} {G_{kd}},{F_{kd}}\in \{0,1\},\hspace{1em}k,d=1,2,\dots ,m,\hspace{2em}& \text{(n)}\end{array}\right.\end{aligned}\]
where ${t_{d}}$ $(d=1,2,\dots ,m)$ is the d-th largest value in $\{{\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}}\}$ and M is a sufficiently large positive constant.
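As a concrete reference for what the model enforces, the OWA aggregation in constraint (a) and the consensus measure bounded in constraint (m) admit a direct, non-optimized evaluation. The sketch below uses illustrative opinions and weight vectors; in the model these quantities are decision variables rather than fixed data.

```python
def owa(values, weights):
    """Ordered Weighted Average: the weights are applied to the values
    sorted in decreasing order (t_1 >= t_2 >= ... >= t_m), not to
    fixed argument positions."""
    ordered = sorted(values, reverse=True)
    return sum(w * t for w, t in zip(weights, ordered))

def consensus_level(opinions, collective, importance):
    """Consensus measure of constraint (m): 1 - sum_k w_k * |o_k - g|,
    assuming the importance weights w_k sum to 1."""
    return 1 - sum(w * abs(o - collective)
                   for w, o in zip(importance, opinions))

opinions = [0.2, 0.9, 0.5, 0.7]        # experts' adjusted opinions (illustrative)
omega = [0.1, 0.2, 0.3, 0.4]           # OWA weights, summing to 1
w = [0.25, 0.25, 0.25, 0.25]           # experts' importance weights

g = owa(opinions, omega)               # collective opinion, constraint (a)
mu = consensus_level(opinions, g, w)   # must satisfy mu >= mu_0
print(g, mu)
```

Swapping the OWA weight vector (e.g. concentrating weight on the largest or smallest ordered positions) moves the collective opinion and, through it, the consensus level, which is the effect analysed across the tables above.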
Theorem 2.
The model (OWA-CMCC2) can be transformed into an equivalent BLP problem with the following transformations:
\[\begin{aligned}{}& {\overline{o}_{k}}-{o_{k}}={u_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{o_{k}}|={v_{k}},\hspace{1em}k=1,2,\dots ,m,\\ {} & {\overline{o}_{k}}-{\overline{o}_{l}}={y_{kl}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{\overline{o}_{l}}|={z_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\end{aligned}\]
resulting in the following model:
(OWA-CMCC2-BLP)
\[ \begin{aligned}{}\min & \hspace{5.0pt}{\sum \limits_{k=1}^{m}}{c_{k}}{v_{k}}\\ {} \textit{s.t.}& \hspace{5.0pt}\left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}{\omega _{k}}{t_{k}},\hspace{1em}\\ {} {t_{k}}\leqslant {\overline{o}_{d}}+M{G_{kd}},\hspace{1em}k,d=1,2,\dots ,m,\hspace{1em}\\ {} {t_{k}}\geqslant {\overline{o}_{d}}-M{F_{kd}},\hspace{1em}k,d=1,2,\dots ,m,\hspace{1em}\\ {} {\textstyle\textstyle\sum _{d=1}^{m}}{G_{kd}}\leqslant m-k,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\textstyle\textstyle\sum _{d=1}^{m}}{F_{kd}}\leqslant k-1,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{k}}-\overline{g}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} \overline{g}-{\overline{o}_{k}}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{k}}-{o_{k}}={u_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} -{u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{{k_{1}}}}-{\overline{o}_{{k_{2}}}}={y_{{k_{1}}{k_{2}}}},\hspace{1em}{k_{1}}=1,2,\dots ,m-1,\hspace{2.5pt}{k_{2}}={k_{1}}+1,\dots ,m,\hspace{1em}\\ {} {y_{{k_{1}}{k_{2}}}}\leqslant {z_{{k_{1}}{k_{2}}}},\hspace{1em}{k_{1}}=1,2,\dots ,m-1,\hspace{2.5pt}{k_{2}}={k_{1}}+1,\dots ,m,\hspace{1em}\\ {} -{y_{{k_{1}}{k_{2}}}}\leqslant {z_{{k_{1}}{k_{2}}}},\hspace{1em}{k_{1}}=1,2,\dots ,m-1,\hspace{2.5pt}{k_{2}}={k_{1}}+1,\dots ,m,\hspace{1em}\\ {} 1-{\textstyle\textstyle\sum _{{k_{1}}=1}^{m-1}}{\textstyle\textstyle\sum _{{k_{2}}={k_{1}}+1}^{m}}\frac{{w_{{k_{1}}}}+{w_{{k_{2}}}}}{m-1}{z_{{k_{1}}{k_{2}}}}\geqslant {\mu _{0}},\hspace{1em}\\ {} {G_{kd}},\hspace{2.5pt}{F_{kd}}\in \{0,1\},\hspace{1em}k,d=1,2,\dots ,m,\hspace{1em}\\ {} {u_{k}},{v_{k}}\geqslant 0,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {y_{{k_{1}}{k_{2}}}},{z_{{k_{1}}{k_{2}}}}\geqslant 0,\hspace{1em}{k_{1}}=1,2,\dots ,m-1,\hspace{2.5pt}{k_{2}}={k_{1}}+1,\dots ,m.\hspace{1em}\end{array}\right.\end{aligned}\]
Theorem 3.
Let $({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})$ be the optimal solution to the following optimization problem.
(OWA-CMCC1-LP)
\[ \begin{array}{l}\min \displaystyle \frac{1}{m}{\displaystyle \sum \limits_{k=1}^{m}}|{\overline{o}_{k}}-{o_{k}}|,\\ {} \text{s.t.}\left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}{\omega _{k}}{\overline{o}_{\sigma (k)}},\hspace{1em}\\ {} |{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} 1-\frac{1}{m}{\textstyle\textstyle\sum _{k=1}^{m}}|{\overline{o}_{k}}-\overline{g}|\geqslant {\mu _{0}},\hspace{1em}\\ {} {\overline{o}_{\sigma (k)}}-{\overline{o}_{\sigma (k-1)}}\leqslant 0,\hspace{1em}k=2,\dots ,m,\hspace{1em}\end{array}\right.\end{array}\]
where σ is a permutation that sorts the original preference values ${o_{1}},{o_{2}},\dots ,{o_{m}}$ in decreasing order and ${\omega _{1}},{\omega _{2}},\dots ,{\omega _{m}}$ are the weights used in the OWA aggregation. Then $({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})$ is also an optimal solution to (OWA-CMCC1).
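The computational advantage of this formulation comes from fixing the ordering in advance: once the adjusted opinions are constrained (last constraint above) to respect the permutation σ induced by the original opinions, the OWA collapses to an ordinary weighted sum and no binary variables are needed. A minimal sketch of this fixed-order evaluation, with illustrative data:

```python
def owa_fixed_order(adjusted, sigma, omega):
    """OWA evaluated under a fixed permutation sigma: sigma decreasingly
    orders the ORIGINAL opinions, and the adjusted opinions are assumed
    to respect the same order, so the OWA reduces to a plain linear
    combination of the adjusted opinions."""
    return sum(w * adjusted[s] for w, s in zip(omega, sigma))

original = [0.3, 0.8, 0.5]
# Permutation sorting the original opinions in decreasing order
sigma = sorted(range(len(original)), key=lambda k: -original[k])

# Adjusted opinions that keep the same decreasing order as the originals
adjusted = [0.35, 0.75, 0.5]
omega = [0.5, 0.3, 0.2]  # OWA weights (illustrative)

g = owa_fixed_order(adjusted, sigma, omega)
print(g)
```

Because the ordering constraint holds, this fixed-order value coincides with the true OWA of the adjusted opinions, which is exactly why the linear program returns the same optimum as the original model.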
Theorem 4.
Let $({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})$ be the optimal solution to the following optimization problem:
(OWA-CMCC2-LP)
\[ \begin{array}{l}\min \displaystyle \frac{1}{m}{\displaystyle \sum \limits_{k=1}^{m}}|{\overline{o}_{k}}-{o_{k}}|,\\ {} \text{s.t.}\left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}{\omega _{k}}{\overline{o}_{\sigma (k)}},\hspace{1em}\\ {} |{\overline{o}_{k}}-\overline{g}|\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} 1-\frac{2}{m(m-1)}{\textstyle\textstyle\sum _{k=1}^{m-1}}{\textstyle\textstyle\sum _{l=k+1}^{m}}|{\overline{o}_{k}}-{\overline{o}_{l}}|\geqslant {\mu _{0}},\hspace{1em}\\ {} {\overline{o}_{\sigma (k)}}-{\overline{o}_{\sigma (k-1)}}\leqslant 0,\hspace{1em}k=2,\dots ,m,\hspace{1em}\end{array}\right.\end{array}\]
where σ is a permutation that sorts the original preference values ${o_{1}},{o_{2}},\dots ,{o_{m}}$ in decreasing order and $\omega =({\omega _{1}},{\omega _{2}},\dots ,{\omega _{m}})$ is the weight vector used in the OWA aggregation. Then $({\overline{o}_{1}},{\overline{o}_{2}},\dots ,{\overline{o}_{m}})$ is also an optimal solution to (OWA-CMCC2).
Theorem 5.
The (OWA-CMCC1) model under OWA aggregation with decreasing weight vector can be transformed into an equivalent linear programming problem with the help of the following linear transformations:
\[\begin{aligned}{}& {\overline{o}_{k}}-{o_{k}}={u_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{o_{k}}|={v_{k}},\hspace{1em}k=1,2,\dots ,m,\\ {} & {\overline{o}_{k}}-\overline{g}={y_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-\overline{g}|={z_{k}},\hspace{1em}k=1,2,\dots ,m,\end{aligned}\]
which lead to the linear programming problem:
(OWA-CMCC1-DW)
\[\begin{aligned}{}& \min {\sum \limits_{k=1}^{m}}{c_{k}}{v_{k}}\end{aligned}\]
(2)
\[\begin{aligned}{}& \text{s.t.}\left\{\begin{array}{l@{\hskip4.0pt}l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}({\omega _{k}}-{\omega _{k+1}})(k{b_{k}}+{\textstyle\textstyle\sum _{d=1}^{m}}{\eta _{kd}}),\hspace{2em}& \text{(a)}\\ {} {\overline{o}_{k}}-{o_{k}}={u_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(b)}\\ {} {u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(c)}\\ {} -{u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(d)}\\ {} {\overline{o}_{k}}-\overline{g}={y_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(e)}\\ {} {y_{k}}\leqslant {z_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(f)}\\ {} -{y_{k}}\leqslant {z_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(g)}\\ {} {z_{k}}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{2em}& \text{(h)}\\ {} 1-{\textstyle\textstyle\sum _{k=1}^{m}}{w_{k}}{z_{k}}\geqslant {\mu _{0}},\hspace{2em}& \text{(i)}\\ {} {b_{k}}+{\eta _{kd}}\geqslant {\overline{o}_{d}},\hspace{1em}k,d=1,\dots ,m,\hspace{2em}& \text{(j)}\\ {} {\eta _{kd}}\geqslant 0,\hspace{1em}k,d=1,\dots ,m.\hspace{2em}& \text{(k)}\end{array}\right.\end{aligned}\]
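Constraint (a) rests on rewriting the OWA with a decreasing weight vector as ${\sum _{k}}({\omega _{k}}-{\omega _{k+1}}){S_{k}}$, taking ${\omega _{m+1}}:=0$, where ${S_{k}}$ is the sum of the k largest adjusted opinions; each ${S_{k}}$ is then represented linearly through $k{b_{k}}+{\sum _{d}}{\eta _{kd}}$ with ${\eta _{kd}}\geqslant {\overline{o}_{d}}-{b_{k}}$, ${\eta _{kd}}\geqslant 0$, whose minimum is attained at ${b_{k}}$ equal to the k-th largest value. The following sketch numerically checks this rearrangement on illustrative data (it assumes a decreasing weight vector, as the theorem requires):

```python
def owa_decreasing(values, omega):
    """Direct OWA evaluation with a decreasing weight vector."""
    ordered = sorted(values, reverse=True)
    return sum(w * t for w, t in zip(omega, ordered))

def owa_via_k_largest(values, omega):
    """Same OWA written as sum_k (omega_k - omega_{k+1}) * S_k, with
    omega_{m+1} := 0 and S_k the sum of the k largest values.
    S_k equals min_b (k*b + sum_d max(values[d] - b, 0)), and the
    minimum is attained at b equal to the k-th largest value."""
    m = len(values)
    ordered = sorted(values, reverse=True)
    total = 0.0
    for k in range(1, m + 1):
        b = ordered[k - 1]  # k-th largest value, a minimizer of the inner LP
        s_k = k * b + sum(max(v - b, 0.0) for v in values)
        w_next = omega[k] if k < m else 0.0  # omega_{m+1} = 0
        total += (omega[k - 1] - w_next) * s_k
    return total

values = [0.2, 0.9, 0.5, 0.7]
omega = [0.4, 0.3, 0.2, 0.1]  # decreasing weights summing to 1
print(owa_decreasing(values, omega), owa_via_k_largest(values, omega))
```

Decreasingness of ω matters because it makes every coefficient ${\omega _{k}}-{\omega _{k+1}}$ non-negative, so the inner minimizations can be relaxed into the linear constraints (j)–(k) without changing the optimum.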
Theorem 6.
The (OWA-CMCC2) model under OWA aggregation with a decreasing weight vector can be converted into an equivalent linear programming problem using the transformations:
\[\begin{aligned}{}& {\overline{o}_{k}}-{o_{k}}={u_{k}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{o_{k}}|={v_{k}},\hspace{1em}k=1,2,\dots ,m,\\ {} & {\overline{o}_{k}}-{\overline{o}_{l}}={y_{kl}}\hspace{1em}\textit{and}\hspace{1em}|{\overline{o}_{k}}-{\overline{o}_{l}}|={z_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\end{aligned}\]
and it can be formulated as the following linear programming model.
(OWA-CMCC2-DW)
\[ \begin{aligned}{}\min & \hspace{5.0pt}{\sum \limits_{k=1}^{m}}{c_{k}}{v_{k}}\\ {} \text{s.t.}& \hspace{5.0pt}\left\{\begin{array}{l}\overline{g}={\textstyle\textstyle\sum _{k=1}^{m}}({\omega _{k}}-{\omega _{k+1}})(k{b_{k}}+{\textstyle\textstyle\sum _{d=1}^{m}}{\eta _{kd}}),\hspace{1em}\\ {} {\overline{o}_{k}}-\overline{g}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} \overline{g}-{\overline{o}_{k}}\leqslant \varepsilon ,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{k}}-{o_{k}}={u_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} -{u_{k}}\leqslant {v_{k}},\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {\overline{o}_{k}}-{\overline{o}_{l}}={y_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\hspace{1em}\\ {} {y_{kl}}\leqslant {z_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\hspace{1em}\\ {} -{y_{kl}}\leqslant {z_{kl}},\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m,\hspace{1em}\\ {} 1-{\textstyle\textstyle\sum _{k=1}^{m-1}}{\textstyle\textstyle\sum _{l=k+1}^{m}}\frac{{w_{k}}+{w_{l}}}{m-1}{z_{kl}}\geqslant {\mu _{0}},\hspace{1em}\\ {} {b_{k}}+{\eta _{kd}}\geqslant {\overline{o}_{d}},\hspace{1em}k,d=1,\dots ,m,\hspace{1em}\\ {} {\eta _{kd}}\geqslant 0,\hspace{1em}k,d=1,\dots ,m,\hspace{1em}\\ {} {u_{k}},{v_{k}}\geqslant 0,\hspace{1em}k=1,2,\dots ,m,\hspace{1em}\\ {} {z_{kl}}\geqslant 0,\hspace{1em}k=1,2,\dots ,m-1,\hspace{2.5pt}l=k+1,\dots ,m.\hspace{1em}\end{array}\right.\end{aligned}\]

INFORMATICA

  • Online ISSN: 1822-8844
  • Print ISSN: 0868-4952
  • Copyright © 2023 Vilnius University