An Improved Rank Order Centroid Method (IROC) for Criteria Weight Estimation: An Application in the Engine/Vehicle Selection Problem

Abstract. This paper focuses on criteria weight approximation in Multiple Criteria Decision Making (MCDM). An approximate weighting method produces weights that serve as surrogates for the exact values that cannot be elicited directly from the Decision Maker (DM). A well-known model in this field is Rank Order Centroid (ROC). The paper shows that the ROC method has a drawback that can be resolved, and develops a revised version of the ROC method called Improved ROC (IROC). The behaviour of the IROC method is investigated using a set of simulation experiments. The IROC method can be employed in situations of time pressure, imprecise information, etc. The paper also proposes a methodology, including the application of the IROC method in a group decision making mode, to estimate the weights of criteria arranged in a tree-shaped structure. The proposed methodology is useful for academics, managers, and decision makers who deal with MCDM problems. A case study is examined to show the applicability of the proposed methodology in a real-world situation: the engine/vehicle selection problem, one of the fundamental challenges of the road transport sector of any country.


Introduction
This paper concerns the determination of numerical weights for different criteria indicating their relative importance in Multiple Criteria Decision Making (MCDM). The methods suggested in the literature can be classified, very roughly, into three approaches: subjective, objective, and integrated (Ahn, 2011; Hatefi, 2019). Subjective methods assign weights to the criteria solely according to preferential judgments by the Decision Maker (DM), for example Direct Rating (DR) (Doyle et al., 1997), Step-Wise Weight Assessment Ratio Analysis (SWARA) (Kersuliene et al., 2010), and the belief-based Best Worst Method (BWM) (Liang et al., 2021). In objective weighting methods, by contrast, the DM may not be willing or able to give any preference information on the criteria; examples include entropy (Hwang and Yoon, 1981), Correlation Coefficient and Standard Deviation (CCSD) (Wang and Lou, 2010), and Simultaneous Evaluation of Criteria and Alternatives (SECA) (Keshavarz Ghorabaee et al., 2018). Integrated methods determine the weights of the criteria using both subjective and objective information, for instance Simple Product Aggregation (SPA) (Hwang and Yoon, 1981), Factor Relationship (FARE) (Ginevičius, 2011), and Block-Wise Rating the Attribute Weights (BRAW) (Hatefi, 2021).
In the current paper, we put emphasis on Barron and Barrett (1996)'s observation that various subjective methods for eliciting exact weights from the DM may suffer on several counts, because the results are highly dependent on the elicitation method and there is no agreement as to which weighting method generates more valid weights. On the other hand, multi-criteria group decision making situations have received extensive attention in recent years (Diao et al., 2022). In such situations, reaching a consensus on the weights of several criteria is difficult, particularly when precise weights are required from the DMs (Sureeyatanapas et al., 2018; Danielson and Ekenberg, 2016; Ahn and Park, 2008a). Furthermore, the larger the number of criteria, the lower the accuracy of their subjective evaluation (Ginevičius, 2011), and it is much easier for the DM to prioritize the criteria than to give specific numerical values (Alfares and Duffuaa, 2016). To relieve such issues, a family of integrated methods called approximate (or surrogate) weighting methods has been developed. The methods in this family are denoted by the typology notation I/+SW, i.e. Integrated & Surrogate Weighting (Hatefi, 2022). Approximate weighting methods assume that the exact values of the weights are not known and that only a ranking of the criteria (i.e. ordinal information about criteria importance) is given by the DM. An approximate weighting method begins with a simple sort in which the DM arranges the criteria in order of his/her preference. Secondly, an ordinal number called the rank order is assigned to each ranked criterion, starting with 1 for the highest ranked criterion. Finally, the criteria weights are estimated using a predetermined function or procedure. Clearly, approximate weighting methods convert ranks of the criteria into quantitative weights. For the ranked criteria, the weights should belong to the criteria weight space of equation (1), in which ω_j is the weight of criterion C_j (j = 1, 2, …, n) with rank order j:

ω_1 ≥ ω_2 ≥ … ≥ ω_n ≥ 0,  ω_1 + ω_2 + … + ω_n = 1.    (1)
There are several approximate weighting methods in the related literature, such as Equal Weights (EW) (Dawes and Corrigan, 1974), Rank Sum (RS), Rank Exponent (RE) and Rank Reciprocal (RR) (Stillwell et al., 1981), Rank Order Centroid (ROC) (Barron, 1992), Geometric Weights (GW) (Lootsma, 1999), Rank Order Distribution (ROD) (Roberts and Goodwin, 2002), Variable-Slope Linear (VSL) (Alfares and Duffuaa, 2008), Least Square Ordered Weighted Averaging (LSOWA) (Ahn and Park, 2008b), Maximum Entropy Ordered Weighted Averaging (MEOWA) (Ahn, 2011), Sum-Reciprocal (SR) (Danielson and Ekenberg, 2014), Generalized Rank Sum (GRS) (Wang and Zionts, 2015), Minimizing Squared Deviations from extreme points (MSD) (Ahn, 2017), Rank Order Total (ROT) (Liu et al., 2020), and Generalized Rank Order Centroid (GROC) (Hatefi and Balilehvand, 2023). Among these, the ROC method is the most famous in the state of the art. The ROC method assumes that the weights are uniformly distributed on the simplex of the weight space. Many researchers affirm the superiority of the ROC method over the other relevant methods. Srivastava et al. (1995) stated that the ROC weights suggest alternatives that are highly correlated with actual choices made. Additionally, according to Katsikopoulos and Fasolo (2006), previous research has indicated that the ROC weights produce the same preferences over alternatives as a full MCDM model in nearly 85% of cases and that, when the ROC method does not produce the same choice, the average loss is fairly small. According to Ahn (2017), the ROC method is still known to outperform the other approximate weighting methods. He stated that a common result from former studies is that the ROC method not only has an appealing theoretical rationale, but also appears to outperform the other approximate weighting methods. Sureeyatanapas et al. (2018) expressed that, according to several studies focusing on decision behaviour, the ROC method is likely, owing to its characteristics, to be largely consistent with the DM's behaviour. They also argued that many comparative studies in the respected literature have found that the ROC method outperforms the other methods in most experimental scenarios and measures. These conclusions have been confirmed by Morais et al. (2015). Let us review some simulation studies on the performance of the ROC method. Barron and Barrett (1996) compared the quality of four methods (EW, RS, RR, and ROC) using the simulation approach; they deduced that the ROC method outperforms the others in most scenarios. The superiority of the ROC method over the EW, RS, and RR methods was also confirmed by Ahn and Park (2008a) under different simulation conditions. Sarabando and Dias (2009) performed a series of simulations to compare the quality of the ROC method and some decision rules; the results corroborated that the ROC method is the best rule to use, particularly as the number of criteria increases. Ahn (2011) performed a simulation process to compare the performance of the EW, RS, RR, ROC, and MEOWA methods; in summary, the results showed MEOWA = ROC > RR > RS > EW.
Definitely, in situations such as time pressure, lack of sufficient knowledge, imprecise or incomplete information from the DM, and the DM's limited attention, an approximate weighting method can be used as a surrogate for subjective methods. In fact, an approximate weighting method generates weights that are substitutes for the exact values that cannot be drawn out directly from the DM. Hence, researchers in this area seek to devise new methods that generate approximate weights as close as possible to the real exact values, which is why several methods have been investigated and suggested. Even a slight difference between the weights generated by two methods can matter; in this regard, Bottomley and Doyle (2001) showed that whilst several weighting methods may appear to be minor variants of one another, these nuances may have substantial consequences for inference and decision making. This result was confirmed by Zizovic et al. (2020). As a matter of fact, although there are several methods in the literature, a new method may generate a more appropriate weight vector that differs only slightly from the others, and this slight difference may even change decision making results. Thus, in line with the abovementioned research, the major motivation of the current paper is to improve the ROC method and to reinforce its theoretical foundations. In Section 2, the paper explains how the ROC method can be improved into a new version called Improved ROC (IROC). In Section 3, the procedure of a methodology employing the IROC method is offered. In the proposed methodology, we assume a group of subject matter experts (i.e. the DMs) who are faced with the problem of weighting a variety of criteria. To handle the multiplicity of the criteria, a Criteria Breakdown Structure (CBS) is provided. The CBS is a tree-shaped (1st level, 2nd level, 3rd level, etc.) description of all the criteria to be weighted. The IROC method is used in the weight assignment of each level of the CBS. In Section 4, the paper applies the proposed methodology to a real-life case study taken from the transportation industry. Finally, some conclusions are provided in Section 5.

The Proposed Idea
Each feasible point in the weight space is a candidate solution for assigning weights to the ranked criteria. Among these solutions, the defining vertices of the convex polyhedron of the weight space can be considered as Vertex Methods (VMs), namely (1, 0, …, 0), (1/2, 1/2, 0, …, 0), …, and (1/n, 1/n, …, 1/n). The coordinates of the weight space centroid (i.e. the ROC weights) are calculated by ordinary averaging of the corresponding coordinates of the VMs. As a matter of fact, as shown in equation (2), the ROC method is a convex linear combination of the VMs in which all the coefficients equal 1/n.

ROC weights:

ω_j = (1/n) Σ_{k=j}^{n} (1/k),  j = 1, …, n.    (2)

We interpret the coefficients in equation (2) as expressing the DM's preferences on the VMs. But can we usually claim, for example, that the DM's preference on (1, 0, …, 0) equals that on (1/n, 1/n, …, 1/n)? The preferential judgments of the DM on different points of the weight space may well differ; correspondingly, setting equal coefficients for the VMs is not a logically justified assumption. Taking this issue into consideration, the new idea is to use an appropriate coefficient for each VM. In short, the notion is to replace the equal coefficients 1/n with different coefficients denoted by ϕ_jn, which simply means applying weighted averaging instead of ordinary averaging in the ROC formula. We call this improved version of the ROC method the "IROC method". Equation (3) gives the IROC weights:

IROC weights:

ω_j = Σ_{k=j}^{n} ϕ_kn (1/k),  j = 1, …, n.    (3)
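As a minimal sketch (ours, not the authors' code), equations (2) and (3) can be computed by averaging the vertex coordinates; the IROC coefficient vector is an input, and any example values passed to it below are hypothetical:

```python
def vertex_methods(n):
    """The n defining vertices of the ranked weight space: vertex k is
    (1/k, ..., 1/k, 0, ..., 0) with k equal coordinates."""
    return [[1.0 / k] * k + [0.0] * (n - k) for k in range(1, n + 1)]

def roc_weights(n):
    """Equation (2): equal-coefficient (ordinary) average of the VMs."""
    vms = vertex_methods(n)
    return [sum(v[j] for v in vms) / n for j in range(n)]

def iroc_weights(coeffs):
    """Equation (3): convex combination of the VMs with coefficients
    phi_kn, assumed nonnegative and summing to one."""
    vms = vertex_methods(len(coeffs))
    return [sum(c * v[j] for c, v in zip(coeffs, vms))
            for j in range(len(coeffs))]
```

For n = 3, `roc_weights(3)` gives approximately (0.6111, 0.2778, 0.1111), the values quoted later in the text, and `iroc_weights([1/3, 1/3, 1/3])` reproduces them, confirming that IROC generalizes ROC.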

Determining the IROC Coefficients
Two approaches can be used to obtain the coefficients ϕ_jn (j = 1, 2, …, n) in equation (3). One idea is to extract the coefficients from the DM's perspective, i.e. a subjective approach. Another notion, the focus of this paper, is to determine default coefficients, i.e. an objective approach. This approach is useful in cases where the DM has no idea of, or does not consent to propose, his/her preferences on the VMs, where there is insufficient time, etc. In order to estimate the coefficients, a set of systematic simulation experiments was performed with regard to the Multi-Attribute Decision Making (MADM) problem. The MADM problem refers to selecting the most appropriate candidate among m predetermined alternatives, or prioritizing them, in the presence of n usually conflicting criteria (Hwang and Yoon, 1981; Hatefi, 2021). Generally, a MADM problem is represented by a matrix [a_ij]_{m×n} in which a_ij (i = 1, …, m; j = 1, …, n) is the performance score of the ith alternative with respect to the jth criterion. In the simulation, we use the Multi-Attribute Additive Value (MAV) function as the evaluation index to calculate the aggregated value of the ith alternative. This function, equation (4), is widely used as the underlying analysis model to calculate the overall value of the alternatives (Danielson and Ekenberg, 2016):

V_i = Σ_{j=1}^{n} ω_j a_ij,  i = 1, …, m.    (4)

In the MAV function, it is assumed that the weights ω_j sum to one and that 0 ≤ a_ij ≤ 1 (Keeney and Raiffa, 1993). If we use the weights produced by a given method in the MAV function to select the best alternative or to prioritize the alternatives, we call the result the decision made by that method.
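The MAV aggregation of equation (4) is a one-liner; the ranking helper is our own illustrative addition, not part of the paper's formulation:

```python
def mav(scores, weights):
    """Equation (4): additive value V_i = sum_j w_j * a_ij for one
    alternative, assuming the weights sum to one and 0 <= a_ij <= 1."""
    return sum(w * a for w, a in zip(weights, scores))

def rank_alternatives(matrix, weights):
    """Order alternative indices by descending MAV (illustrative helper)."""
    return sorted(range(len(matrix)), key=lambda i: -mav(matrix[i], weights))
```

For instance, with weights (0.6111, 0.2778, 0.1111) and scores (1.0, 0.5, 0.0), the MAV is 0.6111 + 0.1389 + 0 = 0.75.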
The systematic simulation study was first proposed by Barron and Barrett (1996) and is a broadly accepted framework for assessing the performance of any approximate weighting method. Many investigations have employed such a simulation study, e.g. Hatefi (2019), Ahn (2017), and Ahn and Park (2008a). According to the basic notion of this approach, there exists a set of true weights, the reference weights in the DM's mind, which are not accessible in pure form by any elicitation method. The decision made by the true weights is called the true decision. The idea is to generate both the weights of the method under examination (here the IROC method) and the true weights from an underlying random distribution, and to assess how well the decision made by the method matches the true decision in terms of a given efficacy measure. To this end, Hit Ratio (HR) and Rank order Correlation (RC) have been widely used as efficacy measures. The HR evaluates how frequently a method selects the same best alternative as the true weights. Equation (5) presents the HR for a given method, in which π is the total number of simulation runs and γ is the number of runs in which the method selects the same best alternative as the true weights do:

HR = γ / π.    (5)

The HR ranges from 0 to 1, where 1 means the best alternatives of the two rank orders coincide throughout all simulation runs. The RC indicates the similarity of the overall rank structures of the alternatives produced by the true weights and by the method. It is calculated by Kendall's formula as in equation (6) (Winkler and Hays, 1985), in which m is the number of alternatives and θ is the number of pairwise preference violations between the rank structures of the alternatives by the method and by the true weights:

RC = 1 − 4θ / (m(m − 1)).    (6)

The RC ranges from −1 to 1, where 1 stands for perfect correspondence between the two rank orders.
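Both efficacy measures in equations (5) and (6) are straightforward to compute; a sketch:

```python
def hit_ratio(gamma, pi):
    """Equation (5): fraction of runs in which the method picks the same
    best alternative as the true weights."""
    return gamma / pi

def rank_correlation(theta, m):
    """Equation (6): Kendall rank correlation between two rankings of m
    alternatives, where theta counts pairwise preference violations."""
    return 1.0 - 4.0 * theta / (m * (m - 1))
```

With no violations the RC is 1; with the maximum of m(m − 1)/2 violations it reaches −1, matching the stated range.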
The simulation was designed with four levels of the number of alternatives (m = 3, 5, 7, 10) and twenty-four levels of the number of criteria (n = 2, 3, …, 25). For each combination of the number of alternatives and the number of criteria (96 combinations), the following procedure was repeated N = 15000 times.
Step 2: Generate a normalized random MADM matrix: Firstly, random performance scores a_ij are generated from independent uniform distributions on the interval (0, 1); these scores constitute an m × n MADM matrix. Secondly, the performance scores in each column are normalized by equation (7), in which a_j^max and a_j^min are the maximum and minimum scores in column C_j:

a_ij' = (a_ij − a_j^min) / (a_j^max − a_j^min).    (7)

Step 3: Generate the criteria true weights: Firstly, n − 1 random numbers are generated from independent uniform distributions on (0, 1); assuming the uniform distribution represents the DM's uncertainty about the weights. Secondly, the generated numbers are sorted in ascending order and named u_1, u_2, …, u_{n−1}, and the differences between adjacent numbers in the sequence 0, u_1, u_2, …, u_{n−1}, 1 are computed. Thirdly, these differences are ordered by size in descending order. The outcome is a vector of true weights uniformly distributed on the weight space.
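Step 3 is the standard construction for sampling uniformly on the simplex; a sketch of the true-weight generation (our code, not the authors'):

```python
import random

def true_weights(n, rng=random):
    """Step 3: sample a true weight vector uniformly on the weight space
    of equation (1). The gaps between the sorted points
    0, u_1, ..., u_{n-1}, 1 sum to one; sorting the gaps in descending
    order enforces w_1 >= w_2 >= ... >= w_n."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    points = [0.0] + cuts + [1.0]
    gaps = [points[i + 1] - points[i] for i in range(n)]
    return sorted(gaps, reverse=True)
```

Any vector it returns is nonnegative, non-increasing, and sums to one, i.e. it lies in the weight space of equation (1).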
Step 4: Compute the criteria weights by the method.
Step 5: Determine the corresponding ranks of the alternatives: For the true weights and for the method, the MAV of each alternative is calculated using the MADM matrix generated in Step 2 and the weights obtained in Steps 3 and 4. Next, the alternative with the biggest MAV is placed at the first rank, the one with the second biggest at the next rank, and so on.
Step 6: Compare the ranks of the alternatives by the method and by the true weights: If the method selects the same alternative at the first rank as the true weights, then set γ = γ + 1. Moreover, compare the overall rank structures of the alternatives constructed by the method and by the true weights, and for each violation between these structures set θ = θ + 1.
Step 8: Use equations (5) and (6) to calculate the overall values of the HR and RC for the method. This is the stop point of the run.
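Putting Steps 2 to 6 together, one cell of the design might look like the following sketch; the run count, seeding, and helper names are ours, and only the ROC method is scored, for brevity:

```python
import random

def simulate_hr(m, n, runs=2000, seed=1):
    """Sketch of one (m, n) cell of the simulation: estimate the Hit
    Ratio of the ROC method against randomly drawn true weights."""
    rng = random.Random(seed)

    def roc(n):
        # ROC weights: w_j = (1/n) * sum_{k=j}^{n} 1/k (equation (2))
        return [sum(1.0 / k for k in range(j, n + 1)) / n
                for j in range(1, n + 1)]

    def true_w(n):
        # Step 3: uniform sampling on the ranked weight simplex
        cuts = sorted(rng.random() for _ in range(n - 1))
        pts = [0.0] + cuts + [1.0]
        return sorted((pts[i + 1] - pts[i] for i in range(n)), reverse=True)

    def normalize(col):
        # Column-wise normalization of equation (7)
        lo, hi = min(col), max(col)
        return [(a - lo) / (hi - lo) if hi > lo else 0.0 for a in col]

    def best(weights, scores):
        # Step 5: index of the alternative with the largest MAV
        return max(range(len(scores)),
                   key=lambda i: sum(w * a for w, a in zip(weights, scores[i])))

    w_roc = roc(n)
    hits = 0
    for _ in range(runs):
        # Step 2: random m x n matrix with normalized columns
        cols = [normalize([rng.random() for _ in range(m)]) for _ in range(n)]
        scores = [[cols[j][i] for j in range(n)] for i in range(m)]
        # Steps 3, 5, 6: compare the best alternative under both weightings
        hits += best(true_w(n), scores) == best(w_roc, scores)
    return hits / runs
```

Even a modest number of runs reproduces the order of magnitude of the HR values reported below (roughly 0.7 to 0.9 for small m).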
The simulation experiment was implemented in Visual Basic for Applications (VBA) in Excel on a personal computer. The 15000 simulation runs were made in five rounds, and the averages of the HR and RC over the five rounds were taken. Calculation of Pearson's correlation coefficients between the HR and RC data for the 96 combinations showed that the performance values for the two efficacy measures were highly correlated, with an overall average of 0.9815. Hence, we employ only the HR to derive the coefficients; we chose the HR because it is easier to understand and simpler to handle.
To calculate the coefficients of the VMs, the following procedure was used: First, for each combination of the number of alternatives and the number of criteria, the HR values are normalized to add up to 1. For instance, in the combination m = 3, n = 4, four HR values were obtained: 0.76933, 0.76353, 0.75947, and 0.70593 for the vertices (1, 0, 0, 0), (1/2, 1/2, 0, 0), (1/3, 1/3, 1/3, 0), and (1/4, 1/4, 1/4, 1/4), respectively; their normalized values are 0.25659, 0.25466, 0.25330, and 0.23545. Second, for a given n there are four sets of normalized HR values, corresponding to the four levels of the number of alternatives (3, 5, 7, and 10). The study showed that the trends of the four sets are fairly similar for any number of criteria. Figure 1 depicts these trends for some numbers of criteria.
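The normalization step can be checked directly against the reported m = 3, n = 4 values (0.25330 prints as 0.2533):

```python
# HR values at m = 3, n = 4 for the four vertices, as reported above
hr_values = [0.76933, 0.76353, 0.75947, 0.70593]

# Normalize them so the resulting coefficients add up to 1
total = sum(hr_values)
phi = [round(h / total, 5) for h in hr_values]
print(phi)  # [0.25659, 0.25466, 0.2533, 0.23545]
```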

Comparison
In this section, we report a set of simulation experiments conducted to compare the behaviour of the ROC method with that of the IROC method. All characteristics of this simulation scheme were as in the previous experiments (described in Section 2.2), except that: (I) the two methods, ROC and IROC, were tested simultaneously; and (II) four levels of the number of alternatives (m = 3, 5, 7, 10) and five levels of the number of criteria (n = 3, 5, 7, 10, 15) were considered. Table 2 reports the efficacy measure data obtained from the experiments. Throughout the simulation results, the IROC method appears to be a better performer than the ROC method, as expected. With respect to the HR, the data indicate that the IROC method outperforms the ROC method in 17 out of 20 (= 85%) cases. Among these 17 cases, in 14 the numerical data for the two methods differ only in the third decimal place, and in 3 cases (3 × 5, 5 × 15, and 10 × 10) the differences reach the second decimal place. The same holds for the mean HR values (0.85924 for the ROC method versus 0.86026 for the IROC method). As regards the RC measure, like the HR, the IROC method is superior to the ROC method in 17 out of 20 combinations. Interestingly, in 6 of these 17 combinations the differences between the IROC and ROC data are about 0.01, which is considerable in turn. In addition, the mean row of the table shows that the IROC method yields a mean RC of 0.46488, while the ROC method yields 0.46230. In the table, the columns entitled improvement (%) give the IROC performance value minus the ROC performance value, divided by the ROC performance value. From this point of view, the numbers indicate an improvement of the IROC method over the ROC method of up to about 1%.
Although Table 2 clearly suggests the superiority of the IROC method over the ROC method, two hypothesis tests, equations (8) and (9), are constructed: the former compares the ROC HR population mean with the IROC HR population mean, and the latter compares the ROC RC population mean with the IROC RC population mean:

H0: μ_HR(ROC) ≥ μ_HR(IROC)  versus  H1: μ_HR(ROC) < μ_HR(IROC).    (8)

H0: μ_RC(ROC) ≥ μ_RC(IROC)  versus  H1: μ_RC(ROC) < μ_RC(IROC).    (9)
The data in Table 2, the ROC HR/RC minus the IROC HR/RC, are paired: there are two samples in which each observation in one sample is paired with one observation in the other. Hence, we first employ Shapiro-Wilk tests (Shapiro and Wilk, 1965), equations (10) and (11), to check whether the HR/RC differences are normally distributed. In equation (10), the test statistic equals 0.9563 and the Shapiro-Wilk critical value at 99% confidence is 0.868; because 0.9563 > 0.868, we conclude that the HR differences are normally distributed. In equation (11), checking the normality of the RC differences, the Shapiro-Wilk statistic is 0.9506, which is also greater than 0.868; thus, at 99% confidence, the RC differences are normally distributed.
Since both the HR differences and the RC differences are normally distributed, the one-sided paired t-test is applied for the tests in equations (8) and (9). For equation (8), the t statistic is −4.0986 and the critical region at the 99% confidence level is T < −2.539. Because −4.0986 < −2.539, we reject the null hypothesis in equation (8) and deduce that there is a statistically significant difference between the two populations: the HR values of the ROC method are significantly less than those of the IROC method. For equation (9), the statistic equals −5.1884; because −5.1884 < −2.539, we reject the null hypothesis and conclude that the IROC RC averages are significantly greater than the ROC RC averages.
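The paired t statistic used here is the standard one on the pairwise differences; a sketch with illustrative data of our own (not the paper's):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic on the differences d_i = x_i - y_i:
    t = mean(d) / (stdev(d) / sqrt(len(d))).
    A large negative t supports mean(x) < mean(y)."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))
```

With 20 paired observations (19 degrees of freedom), the one-sided 99% critical value is −2.539, matching the rejection region used above.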
The ROC and IROC weights for n = 2 to 15 are displayed in Table 3. The difference percentage between each pair of corresponding weights is calculated as the ROC weight subtracted from the IROC weight, divided by the ROC weight. Figure 3 shows how the difference percentage varies as the rank of the criteria increases. For illustration, in the case with 3 criteria, the ROC weights are 0.6111, 0.2778, and 0.1111, the IROC weights are 0.6086, 0.2845, and 0.1069, and the difference percentages are (0.6086 − 0.6111)/0.6111 = −0.40%, +2.42%, and −3.83%, respectively. The curves in the figure disclose a decrease from the ROC weight to the IROC weight for the criterion at the first rank, an increase for some criteria at the middle ranks, and a decrease for the criteria at the tail-end ranks.
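The difference percentages can be recomputed from the four-decimal weights quoted above; doing so gives −0.41%, +2.41%, and −3.78%, close to the reported figures, with the small gaps presumably coming from the paper using unrounded weights:

```python
# ROC and IROC weights for n = 3, as quoted from Table 3 of the text
roc_w  = [0.6111, 0.2778, 0.1111]
iroc_w = [0.6086, 0.2845, 0.1069]

# Difference percentage per rank: (IROC - ROC) / ROC, in percent
diff_pct = [100.0 * (i - r) / r for i, r in zip(iroc_w, roc_w)]
print([round(d, 2) for d in diff_pct])  # [-0.41, 2.41, -3.78]
```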

The Proposed Methodology
The procedure of the proposed methodology is briefly as follows:
Step (A): Determine a panel of related subject matter experts who adequately understand the problem and whose knowledge and skills are sufficient to make proper judgments. The number of experts is denoted by E, and the experts are indexed k = 1, …, E.
Step (B): Draw up a Criteria Breakdown Structure (CBS). This structure is made using a Delphi method or superior documents/approvals. Figure 4 represents a schematic CBS diagram. Assume that there are P parent boxes (v = 1, …, P) of criteria in the CBS. A parent box refers to a criterion that is divided into some sub-criteria. Assign the numbers 1, 2, …, P to the parent boxes.
Step (C): Consider the 1st box of the CBS (i.e. v = 1).
Step (D): Assume that n criteria (j = 1, …, n) branch off from parent box v of the CBS. Ask each expert to rank these criteria relative to one another. Let r_kj denote the rank proposed by the kth expert for the jth criterion.
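The consensus check of Step (E) below uses Kendall's coefficient of concordance; a sketch of the textbook formula (equations (12) to (14)), assuming full rankings without ties:

```python
def kendalls_w(ranks):
    """Kendall's coefficient of concordance for E experts ranking n
    criteria. ranks[k][j] is the rank (1..n, no ties) given by expert k
    to criterion j."""
    E, n = len(ranks), len(ranks[0])
    totals = [sum(ranks[k][j] for k in range(E)) for j in range(n)]  # eq. (12)
    mean_total = sum(totals) / n                                     # eq. (13)
    spread = sum((t - mean_total) ** 2 for t in totals)
    return 12.0 * spread / (E ** 2 * (n ** 3 - n))                   # eq. (14)
```

Identical rankings from all experts give W = 1 (complete agreement), while conflicting rankings drive W towards 0.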
Step (E): Measure the degree of consensus among the panelists using Kendall's coefficient of concordance (Kendall and Gibbons, 1990), which ranges from 0 (no agreement) to 1 (complete agreement). To calculate the coefficient, the total rank of each criterion is first computed by equation (12). After that, the mean value of the total ranks is computed by equation (13). Finally, Kendall's coefficient is defined by equation (14) (Singh et al., 2018):

R_j = Σ_{k=1}^{E} r_kj.    (12)

R̄ = (1/n) Σ_{j=1}^{n} R_j.    (13)

W = 12 Σ_{j=1}^{n} (R_j − R̄)² / (E²(n³ − n)).    (14)

The Case Study: Engine/Vehicle Selection
The Energy Information Administration (EIA) outlook report 2020 shows that public transport accounts for about 25% of all energy consumption in the world. Today, countries are faced with several technologies for their public transport vehicles. These technologies include, among others (Sperling, 1995; Morita, 2003; Tzeng et al., 2005; Patil et al., 2010; Mousaei and Hatefi, 2015; Erdogan et al., 2019; Rani and Mishra, 2020; Andersson et al., 2020; Cui et al., 2022; Abbasi and Hadji-Hosseinlou, 2022):
• Diesel engines/vehicles, such as conventional diesel, ultra-low-sulfur diesel, and bio-diesel (e.g. vegetable oil biodiesel and animal fat biodiesel).
• Gas engines/vehicles, such as Compressed Natural Gas (CNG), Liquefied Propane Gas (LPG), Liquefied Natural Gas (LNG), Dimethyl Ether (DME), Gas-To-Liquid (GTL), and hydrogen fuel cell.
• Electric engines/vehicles, e.g. the exchangeable-battery electric engine/vehicle.
From the above list, some kinds, e.g. the conventional diesel engine/vehicle, are based on burning fossil fuels (Bhan et al., 2022), which generates carbon dioxide and other air pollutants such as unburned hydrocarbons and oxides of nitrogen, resulting in global warming and unwelcome climate change. On the contrary, the modern technologies, e.g. the exchangeable-battery electric engine/vehicle, have cleaner engines that do not use fossil resources. Governments always need to choose the proper engine/vehicle technology to invest in and develop in their public transport networks. This challenge is often modelled as a MADM problem. In this regard, governments often have to answer the following two preliminary questions: (A) Which criteria have to be involved in the engine/vehicle selection problem? (B) What is the weight of each criterion?
The proposed methodology (explained in Section 3) is employed to answer the above questions. There are a number of studies related to the current case, most of which have determined lists of the related criteria. Let us review some examples. Poh and Ang (1999) used the Analytic Hierarchy Process (AHP) to evaluate transportation fuels in Singapore. Winebrake and Creswick (2003) employed the AHP method to analyse the outlook of hydrogen-based engines for transportation systems. Tzeng et al. (2005) is a seminal work in the field of the current case study; they used the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to determine the best alternative-fuel buses compatible with urban area circumstances. Patil et al. (2010) developed a framework to model the interactions between different aspects of a transportation system, and identified the strategies that affect decision making about engines/fuels with regard to public transport. For fuel selection in public transport, a fuzzy decision making framework was developed by Vahdani et al. (2011). Scott et al. (2012) carried out a review of academic investigations dealing with issues arising within bioenergy using MCDM techniques. Asilata and Keswani (2015) addressed a systematic analysis for the selection of fuel using the AHP method. Shah et al. (2017) presented an overview of available liquid and gaseous fuels commonly used as transportation fuel in Bangladesh, and illustrated the potential of bio-CNG conversion from biogas. Oztaysi et al. (2017) concentrated on the alternative fuel selection problem of a company in the USA; they developed a multi-expert MCDM technique using Interval-Valued Intuitionistic Fuzzy Sets (IVIFS) with linguistic data. Erdogan and Sayin (2018) performed a study to choose the best fuel for the compression ignition engine; they employed the SWARA method to determine the criteria weights and used Multi-Objective Optimization on the basis of Ratio Analysis (MULTIMOORA) to rank the selected fuels. Erdogan et al. (2019) used the hybrid models SWARA-MOORA and ANP-MOORA to select the optimum fuel for the compression ignition engine/vehicle. Karasan and Kahraman (2020) made use of the Interval-Valued Neutrosophic (IVN) ELECTRE I method to select a renewable energy alternative for a municipality. Rani and Mishra (2020) proposed a novel decision making model, based on the operators of q-Rung Ortho-Pair Fuzzy Sets (q-ROFSs), a weighted aggregated sum product model, a score function, and a similarity measure, to deal with the alternative-fuel technology selection problem, wherein the weights of the decision experts and the criteria were completely unknown. Andersson et al. (2020) evaluated which criteria influence the fuel choice between ethanol and gasoline for owners of Flex-Fuel Vehicles (FFVs) in Sweden; major results showed that price, perceptions about quality, age, and environmental attitudes influence the willingness to choose ethanol.

The Criteria List
This section reports the findings of the criteria identification for the engine/vehicle selection problem. A complete criteria list was compiled based upon both the published literature and expert judgments. We did our best to extract all the criteria reported in the relevant literature, among others Poh and Ang (1999), Winebrake and Creswick (2003), Tzeng et al. (2005), Patil et al. (2010), Vahdani et al. (2011), Scott et al. (2012), Farkas (2014), Mousaei and Hatefi (2015), Asilata and Keswani (2015), Shah et al. (2017), Oztaysi et al. (2017), Hatefi (2018), Erdogan and Sayin (2018), Erdogan et al. (2019), Karasan and Kahraman (2020), and Rani and Mishra (2020). After that, a Delphi evaluation with 9 participants who were experts in the field of various engines/vehicles was performed to reach consensus on the criteria. In each round of the process, the respondents answered questions to refine the criteria, i.e. to screen, add, combine, or decompose them. The Delphi process was conducted by email; attendance meetings were not held because of the Covid-19 conditions. Table 5 presents the final list of criteria obtained from the above process, including main criteria (1st level) and sub-criteria (2nd level).

The Criteria Weights
A sample country is considered for analysis using the proposed methodology. Regarding all the criteria displayed in Table 5, a decision making group including 9 experts was formed.

Table 5. The final list of the criteria: main criteria (1st level) and their sub-criteria (2nd level).

- Energy efficiency: the efficiency of the fuel/energy used in the engine.
- Traffic flow speed: the average speed of the vehicle for definite traffic.
- Vehicle capabilities: the capability of the vehicle, such as speed and slope climbing.

Environmental features:
- Air pollution: the amount of release of pollutants into the air.
- Soil pollution: the amount of release of pollutants into the soil.
- Water pollution: the amount of release of pollutants into the water, such as organic pollutants, inorganic pollutants, pathogens, suspended solids, nutrients, and agricultural pollutants.
- Noise pollution: the noise made by the operation of the engine/vehicle.

Economical features:
- Distance to market: the average distance between the production factories and the consumption region of the related fuel.
- Transportation easiness: the degree of hardness of fuel/energy transportation.
- Energy storage: the degree of hardness of storing the fuel/energy.
- Internal consumption trend: the consumption trend of the fuel in the region under study.
- World trend: the consumption trend of the fuel in the world; specifically, the focus point of the big oil companies.
- Fixed price: the fixed price of the fuel/energy.

Financial features:
- Purchase cost: the purchase cost of the vehicle.
- Maintenance cost: the maintenance cost of the engine/vehicle.

Infrastructural features:
- Road infrastructures: the road infrastructures required for the operation of the vehicle.
- Industrial infrastructures: the existent industrial infrastructures to produce the engine/vehicle.

Technological features:
- Maturity of technology: the maturity level of the relevant technologies.
- Safety aspects: the safety features of the engine/vehicle.
- Industrial relationships: the relationship between the engine/vehicle industrial system and other industrial sectors.

Social features:
- Community acceptability: the extent to which the community's people accept the vehicle.
- Accessories: the accessories and other options of the vehicle, in order to provide a sense of comfort.

Risk-based features:
- Political risks: for example, regulatory, diplomacy, and entente risks.
- Economical risks: for example, inflation, rent, and sanctions risks.
- Social risks: for example, risks concerned with culture, carriers, and psychology.
- Technical risks: risks and uncertainties related to technical and operational aspects, for example maintainability.

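The two-level list above forms a criteria breakdown structure. A common way to turn level-wise weights into global sub-criterion weights is to multiply each sub-criterion's local weight by the weight of its parent main criterion. The sketch below illustrates this with standard ROC weights at both levels; it is not the paper's IROC aggregation (whose corner coefficients are defined in the paper), and the rankings implied by the list order are illustrative assumptions.

```python
def roc_weights(n):
    # Rank Order Centroid weights for n criteria ranked from most to
    # least important: w_i = (1/n) * sum_{k=i}^{n} 1/k.
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

def global_weights(branch_sizes):
    """Combine two-level weights multiplicatively.

    branch_sizes[i] is the number of ranked sub-criteria under the
    i-th ranked main criterion. Returns a flat list of global
    sub-criterion weights, which sums to 1 by construction.
    """
    main = roc_weights(len(branch_sizes))
    out = []
    for w_main, size in zip(main, branch_sizes):
        out.extend(w_main * w_sub for w_sub in roc_weights(size))
    return out

# Illustrative structure: 8 main criteria with the sub-criterion counts
# read off the list above.
gw = global_weights([3, 4, 6, 2, 2, 3, 2, 4])
print(len(gw), round(sum(gw), 6))
```

Because both levels are normalized, the global weights need no renormalization; any other level-wise weighting scheme (e.g. IROC) can be substituted for `roc_weights` without changing the aggregation step.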
Conclusions
This paper focused on weighting the criteria in MCDM problems. To assign weights to the criteria, the paper concentrated on the approximate weighting approach, in which the criteria weights are estimated from the ranks of the criteria given by the DM. The reason for this choice is that, in complex MCDM models, most subjective methods for eliciting exact weights demand information that DMs often cannot provide reliably. Although there are various approximate weighting methods in the literature, the ROC method is still regarded as the best among them. Notwithstanding, the paper showed that the theoretical mean underlying the ROC method rests on an unrealistic assumption, namely that the corner weight vectors of the weight space are equally preferred by the DM. To resolve this drawback, as the major contribution of the paper, a different coefficient was derived for each corner, and the ROC function was reformulated to incorporate the new coefficients. The resulting function was named the IROC method. Two series of simulation experiments were performed in this study: the first set was conducted to adjust the IROC parameters, and the second set demonstrated that the decision quality of the IROC method improves on that of the ROC method. In addition, a group decision making methodology, which benefits from the IROC method, was suggested to estimate the criteria weights in a breakdown structure of the criteria called CBS. In a real-life case study on the engine/vehicle selection problem, the paper reviewed the related literature to extract the criteria and conducted a Delphi analysis to finalize the criteria list, which includes 8 criteria at the first level and 27 criteria at the second level. The proposed methodology was then used to estimate the criteria weights at each level.
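For reference, the standard ROC formula and a minimal hit-rate (HR) simulation of the kind described above can be sketched as follows. The sketch compares only plain ROC surrogate weights against randomly drawn "true" ranked weights; the IROC corner coefficients are defined in the paper and are not reproduced here, and the problem sizes and trial count are arbitrary assumptions rather than the paper's experimental design.

```python
import random

def roc_weights(n):
    # ROC weight of the i-th ranked criterion: w_i = (1/n) * sum_{k=i}^{n} 1/k.
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

def hit_rate(n=5, m=4, trials=2000, seed=1):
    """Estimate the hit rate of ROC: the fraction of trials in which the
    alternative that is best under randomly drawn 'true' ranked weights
    is also best under the ROC surrogate weights."""
    rng = random.Random(seed)
    roc = roc_weights(n)
    hits = 0
    for _ in range(trials):
        # 'True' weights: uniform on the simplex (gaps between sorted
        # uniforms), then sorted descending to respect the given ranking.
        cuts = sorted(rng.random() for _ in range(n - 1))
        true = sorted((b - a for a, b in zip([0.0] + cuts, cuts + [1.0])),
                      reverse=True)
        # m alternatives with random criterion values in [0, 1].
        alts = [[rng.random() for _ in range(n)] for _ in range(m)]
        best_true = max(alts, key=lambda a: sum(w * v for w, v in zip(true, a)))
        best_roc = max(alts, key=lambda a: sum(w * v for w, v in zip(roc, a)))
        hits += best_true is best_roc
    return hits / trials

print(hit_rate())
```

Repeating such a simulation for a candidate method (e.g. IROC in place of `roc_weights`) and averaging HR over many trials is one standard way to compare the decision quality of approximate weighting methods.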
The current paper established default values for the IROC coefficients. Future research may focus on extracting these coefficients from the DM's preferences. Beyond this direction, it would also be interesting to investigate establishing a reliable model to compare different weight approximation methods analytically/theoretically; such a model has not been studied so far.
Finally, we hope that employing the proposed methodology helps the relevant country's DMs adopt sound policies and decisions in a productive manner.

M.A. Hatefi is an associate professor in the Energy & Economics Management Department at the Petroleum University of Technology (PUT). He received his BS, MSc, and PhD degrees in industrial engineering from the Iran University of Science and Technology (IUST), with honors. His research interests include decision analysis, multiple criteria decision making, operations research, risk analysis, project risk management, and management information systems. He has published several journal papers and books in these areas. He was the head of the Tehran Faculty of Petroleum between 2017 and 2021, currently serves as the manager of his department at the PUT, and is an editorial board member of several journals, such as Petroleum Business Review (PBR) and the Scientific Journal of Mechanical and Industrial Engineering (SJMIE).

Fig. 1. Typical trends of the normalized HR at various VMs.

Fig. 2. Variations of the IROC coefficients for different numbers of criteria.

Fig. 3. Variation of the percentage difference between the ROC weights and the IROC weights.

Table 1. The default coefficients for the VMs in the weight space.

Table 2. Simulation results of the average HR and RC measures for the ROC and IROC methods.

Table 3. The weights produced by the ROC and IROC methods for n = 2 to 15.
• Blend engines/vehicles, such as methanol & gasoline blend, hydrogen & CNG blend (hythane), and bio-CNG blend.
• Electric engines/vehicles, such as opportunity charging, direct electric charging, and exchangeable-battery electric.
• Hybrid engines/vehicles, such as electric & gasoline hybrid, electric & diesel hybrid, electric & CNG hybrid, and electric & LPG hybrid.

Table 5. The engine/vehicle selection criteria.