1 Introduction
Investors frequently allocate a portion of their earnings to savings to safeguard their financial future. Their goal is typically to maximize investment efficiency, leading them to pursue instruments that promise the highest possible return relative to an acceptable level of risk (Çetin et al., 2025). However, the intricate and dynamic nature of contemporary financial markets introduces significant uncertainty, mandating the adoption of risk-mitigation strategies. The construction of optimal portfolios is primarily guided by two historical theoretical approaches: Traditional Portfolio Theory and Modern Portfolio Theory (MPT) (Yılmaz, 2025). Traditional portfolio theory, which was dominant until the 1950s, relied on the principle of simple diversification (Deniz and Okuyan, 2018). This perspective suggested that risk could be reduced merely by assembling securities with varying characteristics and quantities, largely disregarding the correlations between them (Leković, 2021).
Modern Portfolio Theory, pioneered by Harry Markowitz in 1952, fundamentally changed this view, asserting that asset correlations are essential considerations for constructing optimal portfolios (Uyar, 2019). MPT emphasizes that simply increasing the number of assets is insufficient to reduce risk; instead, incorporating securities with low correlation significantly enhances diversification benefits (Wysocki and Sakowski, 2022). Markowitz’s work, which presented the Mean–Variance Optimization (MVO) model, established the foundation for modern portfolio theory (Curtis, 2004).
MVO instructs investors to select the portfolio with the lowest variance (risk) from all portfolios capable of meeting a specified return target. Portfolios exhibiting higher variance are, by definition, deemed inefficient (Kolm et al., 2014). Despite its theoretical importance, MVO faces practical issues, including high sensitivity to estimation errors and a dependence on extensive historical data series (Lorenzo and Arroyo, 2023). These limitations frequently lead to unstable or suboptimal results, particularly in scenarios involving a high-dimensional set of assets. These inherent weaknesses have spurred researchers to develop alternative optimization models, such as the Arbitrage Pricing Theory (Ross, 1976), the Mean Absolute Deviation (MAD) method (Konno and Yamazaki, 1991), the Black–Litterman model (Black and Litterman, 1991), the Capital Asset Pricing Model (CAPM) (Sharpe, 1963; Lintner, 1965; Mossin, 1966), and the Fama–French Three-Factor Model (Fama and French, 1993).
Multi-Criteria Decision Making (MCDM) methods are now extensively used in portfolio formation, as they facilitate the concurrent assessment of multiple criteria in complex decision landscapes. Furthermore, heuristic algorithms and artificial intelligence (AI)-based approaches are frequently preferred due to their demonstrated capacity to manage large datasets efficiently. The Hierarchical Risk Parity (HRP) algorithm is one such machine-learning-based methodology.
Introduced by De Prado (2016), the HRP algorithm was specifically engineered to overcome the deficiencies of conventional optimization techniques. HRP merges risk-based allocation with hierarchical clustering, effectively mitigating the instability, concentration, and underperformance issues commonly associated with quadratic optimization. It enables portfolio construction using only a single covariance matrix (Kaae et al., 2022). However, subsequent analyses have identified two critical limitations in HRP: the chaining problem and the inability to determine the optimal number of clusters (Kaae et al., 2022).
The core purpose of this research is to design a novel hybrid portfolio optimization algorithm that directly addresses both the chaining problem and the challenge of optimal cluster determination within the HRP framework. Our approach employs a comprehensive three-stage methodology. The initial phase involved a literature review focusing on HRP, the MEREC (Method based on the Removal Effects of Criteria) technique, and the WEDBA (Weighted Euclidean Distance Based Approach) method. The second phase executed portfolio optimization using seven distinct linkage methods (Ward, single, complete, average, weighted, centroid, and median) within the HRP structure. The third phase incorporated the Elbow method into the HRP algorithm specifically to resolve the chaining and optimal cluster identification issues. This analysis was further enriched by the HRP-MCDM approach. Finally, we conducted a rigorous comparison between the performance of portfolios generated by the standalone HRP algorithm and those from the hybrid HRP-MCDM approach to thoroughly evaluate their relative effectiveness.
This study seeks to provide substantial insights into enhancing clustering techniques and refining decision-making processes in financial management, thereby making a significant contribution to the portfolio optimization field. The HRP, MEREC, and WEDBA methods were selected for this study because their respective strengths are complementary and highly suitable for overcoming the limitations of traditional portfolio optimization models. The HRP algorithm was chosen over conventional mean-variance models because it reduces concentration risk, avoids covariance matrix inversion, and yields more stable weight allocations even when financial data is noisy. However, given HRP’s acknowledged weaknesses—specifically the chaining problem and the lack of a mechanism for optimal cluster determination—additional methodological support is essential. Consequently, the MEREC method was included as an objective and data-driven weighting technique. It calculates the influence of each criterion without relying on subjective expert judgment, making it superior to many subjective or hybrid weighting methods used in MCDM. Similarly, the WEDBA method was preferred over other ranking techniques because it assesses alternatives based on weighted Euclidean distances to ideal and non-ideal solutions, allowing for a more robust performance evaluation within the HRP-derived clusters. Together, these three methods establish a powerful hybrid framework that merges machine-learning-based risk clustering with objective multi-criteria evaluation, providing a solution that is more resilient and practically applicable than any of the individual components used in isolation.
3 Methodology
This research introduces a novel hybrid portfolio optimization model that integrates the HRP algorithm with MCDM methods for the analysis of stocks listed on the BIST 100 index. The core objective is two-fold: to significantly optimize risk distribution across the resulting portfolios and to evaluate firms’ financial performance, thereby yielding actionable, practical recommendations for investors. This integrated framework proceeds through three sequential, critical phases:
• Structural Determination: The first stage involves establishing the most suitable cluster configuration by incorporating the Elbow method directly into the HRP algorithm, which resolves the challenge of determining the optimal number of clusters.
• Performance Assessment: Next, an objective, data-driven financial performance evaluation is executed within every cluster established by HRP, utilizing both the MEREC and WEDBA multi-criteria decision methods.
• Portfolio Finalization: The third and final stage involves constructing the definitive portfolios by merging the HRP-derived weights (which reflect risk parity) with the MCDM-based rankings (which reflect financial performance).

Fig. 2
Flowchart of proposed approach.
The overall workflow of the proposed methodology is presented in Fig. 2. The proposed model underwent empirical testing using BIST 100 stock data spanning the 2018–2022 period. This analysis utilized seven distinct linkage methods to ensure robustness. The process began with the initial phase, where the Elbow method was incorporated into the HRP framework. This direct integration was crucial for effectively mitigating the chaining effect and resolving the difficulty in determining the optimal number of clusters. Consequently, this enhancement allowed the HRP algorithm to construct risk clusters with greater accuracy and proportionality than the standard approach. The methodology then transitioned to the second phase, implementing the HRP-MCDM integrated structure. Here, the optimally structured clusters obtained via HRP were rigorously evaluated using both the MEREC and WEDBA methods. This step was essential for reliably identifying stocks with the highest potential performance within each established risk-based cluster. In the third and concluding phase, a direct comparative assessment was conducted. Portfolios generated using the standalone HRP method were benchmarked against those constructed using the hybrid HRP-MCDM approach. This comparison was designed to evaluate the relative effectiveness and superior performance of the integrated strategy.
Overall, the developed integrated framework successfully empowers investors to manage portfolio risk more effectively while simultaneously improving the quality of their financial performance evaluation. The synergistic combination of HRP’s inherent risk-based clustering capabilities with the multidimensional, objective evaluation features of MCDM methods provides a unique, practical, and valuable contribution to the field of portfolio optimization literature.
3.1 Hierarchical Risk Parity
Research in the field of finance consistently shows that investors fundamentally rely on two core strategies when constructing their portfolios. The first, rooted in traditional financial thought, emphasizes simple diversification. This traditional view operates on the principle that risk can be mitigated merely by ensuring assets are not excessively concentrated within a single group. The second, and more profound, approach originates from Harry Markowitz’s seminal 1952 work, which laid the groundwork for what is commonly known as Modern Portfolio Theory (MPT), the mean-variance framework, or Markowitz portfolio optimization. MPT fundamentally asserts that portfolio risk is best minimized not just by diversifying holdings, but by simultaneously considering the precise interrelationships (correlations) among those assets. Following Markowitz’s pioneering contribution, portfolio optimization problems have conventionally been formulated as quadratic programming models that map the risk–return profiles of various financial instruments (Pfitzinger and Katzke, 2019). Given that this framework became the cornerstone of contemporary portfolio optimization, exhaustive research has been dedicated to mapping the portfolios located on the “efficient frontier”. This frontier represents the optimal set of portfolios that either maximize expected return for a predefined level of risk or minimize risk for a target return level (Nourahmadi and Sadeqi, 2021; Reis et al., 2023). Portfolios situated on this boundary are, by definition, deemed optimal.
Despite its enduring presence and theoretical importance, the practical application of MPT has introduced significant challenges. The concentration problem remains a persistent issue, often manifesting as optimal portfolios that allocate disproportionately large weights to a limited subset of assets. This drawback has prompted numerous studies to propose alternative mitigation strategies. Furthermore, as practitioners integrate an increasing number of assets into portfolios, the escalating complexity of inter-asset correlations necessitates deeper diversification, a process that ironically can lead to unstable solutions. De Prado (2016) famously dubbed this instability and pressure for excessive diversification the “Markowitz Curse” (De Prado, 2016; Uyar, 2019; Kaae et al., 2022).
To bypass the deficiencies linked to quadratic optimization – such as instability, susceptibility to estimation errors, and concentration – De Prado (2016) introduced the Hierarchical Risk Parity (HRP) algorithm. HRP is a heuristic methodology explicitly designed to circumvent the practical limitations of MPT (De Prado, 2016; De Lio Pérego, 2021). It draws on concepts from mathematical modelling, graph theory, and machine learning, relying on hierarchical clustering to mirror the hypothesized layered structure inherent in financial markets. Crucially, HRP departs from the conventional assumption of direct pairwise relationships, acknowledging that many assets are only indirectly connected through a chain of intermediate relationships (De Lio Pérego, 2021).
A substantial benefit of the HRP framework is its ability to eliminate the requirement to invert the covariance matrix. This matrix inversion is one of the primary drivers of estimation error in optimization, making HRP a more stable alternative (Sjöstrand et al., 2020; Reis et al., 2023). By utilizing a hierarchical clustering structure derived from machine learning, HRP successfully addresses the inherent weaknesses of conventional risk-based optimization methods (Jain and Jain, 2019). The algorithm functions by iteratively assigning inverse-volatility weights to clusters of similar assets, recursively subdividing these clusters until each individual asset is isolated. This hierarchical process typically yields more stable portfolios that frequently demonstrate superior performance compared to those produced under MPT (Lagowski, 2022).
HRP offers several clear advantages over the Markowitz model. While Markowitz-based portfolios are prone to assigning heavy weights to a limited number of assets, HRP achieves a more uniform distribution of risk through hierarchical diversification (Sen, 2023). Furthermore, HRP applies inverse-variance weighting only within groups of correlated assets, and by narrowing these groups through recursive subdivision, it assigns weights solely among assets within the same cluster, rather than across the entire portfolio (Pfitzinger and Katzke, 2019; Jain and Jain, 2019; Bechis, 2020). HRP also exhibits greater temporal stability than the Markowitz model, which is highly susceptible to abrupt allocation shifts triggered by minor changes in market conditions. HRP’s foundation in hierarchical structure makes it notably less sensitive to short-term volatility, resulting in smoother and more reliable allocation patterns over time (Sen, 2023).
3.1.1 Tree Clustering
Tree clustering is the first stage of the HRP algorithm. Its steps are explained in detail below (De Prado, 2016; Uyar, 2019; Lorenzo and Arroyo, 2023).
In the tree clustering stage, a matrix of size $T\times N$, containing N variables (assets) observed over the time period T, is first created. A clustering structure is then built over the N column-vectors (series). To this end, an $N\times N$ correlation matrix of the form $\rho ={\{{\rho _{i,j}}\}_{i,j=1\dots N}}$ is first created, with each element ${\rho _{i,j}}=\rho [{X_{i}},{X_{j}}]$. The correlations are then used to calculate the distance measure between the series using equation (1).
The expression B in this equation refers to the cartesian product of the set $\{1,\dots ,i,\dots ,N\}$. By calculating the distance measure, the distance matrix $X={\{{d_{i,j}}\}_{i,j=1\dots N}}$ of size $N\times N$ is created, which is to be used in the clustering analysis. The matrix X satisfies the standard metric properties: non-negativity $(d[X,Y]\geqslant 0)$, coincidence $(d[X,Y]=0\Leftrightarrow X=Y)$, symmetry $(d[X,Y]=d[Y,X])$, and sub-additivity $(d[X,Z]\leqslant d[X,Y]+d[Y,Z])$.
The second step of tree clustering is to calculate the Euclidean distances between the columns of the matrix, using equation (2). In this equation, ${d_{i,j}}$ denotes the elements of the matrix X, while ${\check{d}_{i,j}}$ denotes the elements of the matrix D.
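To make these two distance steps concrete, the sketch below assumes equation (1) is the correlation-based distance ${d_{i,j}}=\sqrt{\frac{1}{2}(1-{\rho _{i,j}})}$ proposed by De Prado (2016) and equation (2) the Euclidean distance between the columns of the resulting matrix; `returns` is a hypothetical $T\times N$ array of asset returns, not the study's dataset.

```python
import numpy as np

# A minimal sketch of the two tree-clustering distance steps.
def correlation_distances(returns):
    rho = np.corrcoef(returns, rowvar=False)  # N x N correlation matrix
    d = np.sqrt(0.5 * (1.0 - rho))            # assumed form of equation (1)
    # assumed form of equation (2): Euclidean distance between columns of d
    d_bar = np.sqrt(((d[:, None, :] - d[None, :, :]) ** 2).sum(axis=-1))
    return d, d_bar
```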
In the third step of the tree clustering phase, the clustering algorithm is applied. In this step, the closest pair of columns is merged into clusters ${i^{\ast }}$ and ${j^{\ast }}$, and the distances are updated according to the chosen linkage method to obtain the matrix u. There are seven linkage methods in the literature: single, complete, average, weighted, centroid, Ward, and median. These linkage methods are explained below (Uyar, 2019).
The single linkage method, developed by Sneath in 1957 and also known as the Nearest Point Algorithm, defines the distance between two clusters as the proximity of their closest elements (Rajabi et al., 2020; Nanakorn and Palmgren, 2021). The single linkage method is calculated using equation (3).
In the complete linkage method, which was developed by Sorensen in 1948 and is known as the farthest point or Voor Hees algorithm, the similarity between two clusters is the proximity of their most distant elements, in contrast to the single linkage method (Rajabi et al., 2020). The complete linkage method is calculated using equation (4).
The Unweighted Pair Group Method with Arithmetic mean (UPGMA) algorithm, developed by Sokal and Michener in 1958 and also known as the average linkage method, determines the similarity between two clusters by averaging the distances between all elements of one cluster and all elements of the other (Rajabi et al., 2020). The average linkage method is calculated using equation (5). The expressions $|{i^{\ast }}|$ and $|{j^{\ast }}|$ in this equation represent the number of elements in clusters ${i^{\ast }}$ and ${j^{\ast }}$, respectively.
The Weighted Pair Group Method with Arithmetic Mean (WPGMA), developed by McQuitty in 1966 and also known as the weighted linkage method, is calculated using equation (6). In this equation, cluster ${i^{\ast }}$ is formed by merging clusters s and t, and its distance to cluster ${j^{\ast }}$ is computed as the average of the distances from s and t to ${j^{\ast }}$.
The centroid method, developed by Sokal and Michener in 1958 and also known as UPGMC (Unweighted Pair Group Method using Centroids), is calculated using equation (7). In this equation, ${\vec{c}_{i}}$ and ${\vec{c}_{j}}$ represent the centroids of clusters ${i^{\ast }}$ and ${j^{\ast }}$.
The Ward linkage method, developed by Ward in 1963 and also known as the Stepwise Algorithm, is calculated using equation (8).
In 1967, Gower developed the Weighted Pair Group Method using Centroids (WPGMC), also known as the median linkage method, which is calculated using equation (9). The expressions ${\overrightarrow{\omega }_{i}}$ and ${\overrightarrow{\omega }_{j}}$ in this equation indicate the median points of clusters ${i^{\ast }}$ and ${j^{\ast }}$.
After the tree clustering process has been carried out, a “linkage matrix” of size $(N-1)\times 4$ and of the form $Y={\{({y_{m,1}},{y_{m,2}},{y_{m,3}},{y_{m,4}})\}_{m=1,\dots ,N-1}}$ is obtained. The resulting linkage matrix $(Y)$ contains four linkage values for each merge. In the matrix, the elements ${y_{m,1}}$ and ${y_{m,2}}$ identify the two components that are joined; the element ${y_{m,3}}$ gives the distance between these components $({y_{m,3}}={\tilde{d}_{{y_{m,1}},{y_{m,2}}}})$; and ${y_{m,4}}$ $({y_{m,4}}\leqslant N)$ gives the number of original items contained in the new cluster (Uyar, 2019).
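Assuming the linkage step is computed with SciPy's standard hierarchical clustering routine (the study's own implementation is not shown), a minimal sketch of building the $(N-1)\times 4$ linkage matrix Y from the column-wise distance matrix of the previous sketch is:

```python
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Build the linkage matrix Y; `method` is any of the seven linkage criteria
# discussed above (centroid, median and ward formally assume Euclidean input).
def build_linkage(d_bar, method="single"):
    condensed = squareform(d_bar, checks=False)  # condensed form expected by scipy
    Y = linkage(condensed, method=method)        # rows: (y_m1, y_m2, distance, size)
    return Y
```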
3.1.2 Quasi-Diagonalization
The second stage of the HRP algorithm is quasi-diagonalization. In quasi-diagonalization, the largest values in the rows and columns of the covariance matrix are arranged along the diagonal. In this process, assets with high covariance are placed together, while assets with low covariance are placed far apart. The process to be performed in the semi-diagonalization phase groups all elements of the link matrix as recursive
${y_{N-1,1}}$,
${y_{N-1,2}}$ (De Prado,
2016).
Fig. 3 shows the state of the assets before the quasi-diagonalization process. Fig. 4 shows the state of the covariance matrices after the second step of the HRP algorithm, quasi-diagonalization. The colours in both matrices represent high and low covariances; since the darker colours indicate higher covariances, these units are concentrated around the diagonal.

Fig. 3
Unclustered correlation matrix (Bechis, 2020).

Fig. 4
Clustered correlation matrix (Bechis, 2020).
3.1.3 Recursive Bisection
Recursive bisection is the last stage of the HRP algorithm. This stage is extremely important, as it is the stage where the final weights of the securities in the portfolio are determined (Bechis,
2020). In this stage, the weights of the securities are determined using a tree structure and inverse variance (Nanakorn and Palmgren,
2021). The steps to be followed in the recursive bisection phase can be listed as follows (De Prado,
2016):
1) Before starting the algorithm, the following definitions are made:
a) The list of items is set as $L=\{{L_{0}}\}$, with ${L_{0}}={\{n\}_{n=1,\dots ,N}}$;
b) A unit weight is assigned to all items: ${w_{n}}=1$, $\forall n=1,\dots ,N$.
2) If $|{L_{i}}|=1,\hspace{2.5pt}\forall {L_{i}}\in L$, then stop.
3) For each ${L_{i}}\in L$ such that $|{L_{i}}|\gt 1$:
a) ${L_{i}}$ is bisected into two subsets, ${L_{i}^{(1)}}\cup {L_{i}^{(2)}}={L_{i}}$, where $|{L_{i}^{(1)}}|=\operatorname{int}\big[\frac{1}{2}|{L_{i}}|\big]$, and the order is maintained.
b) The variance of ${L_{i}^{(j)}}$, $j=1,2$, is defined as the quadratic form ${\tilde{V}_{i}^{(j)}}\equiv {\tilde{w}_{i}^{{(j)^{\prime }}}}{V_{i}^{(j)}}{\tilde{w}_{i}^{(j)}}$, where ${V_{i}^{(j)}}$ is the covariance matrix of the constituents of the ${L_{i}^{(j)}}$ bisection, and ${\tilde{w}_{i}^{(j)}}=\text{diag}{[{V_{i}^{(j)}}]^{-1}}\frac{1}{\text{tr}[\text{diag}{({V_{i}^{(j)}})^{-1}}]}$, with $\text{diag}[\cdot]$ and $\operatorname{tr}[\cdot]$ the diagonal and trace operators.
c) Compute the split factor ${\alpha _{i}}=1-\frac{{\tilde{V}_{i}^{(1)}}}{{\tilde{V}_{i}^{(1)}}+{\tilde{V}_{i}^{(2)}}}$, so that $0\leqslant {\alpha _{i}}\leqslant 1$.
d) ${w_{n}}$ allocations are re-scaled by a factor of ${\alpha _{i}}$, $\forall n\in {L_{i}^{(1)}}$.
e) ${w_{n}}$ allocations are re-scaled by a factor of $(1-{\alpha _{i}})$, $\forall n\in {L_{i}^{(2)}}$.
4) Loop to step 2.
At each iteration of recursive bisection, the final phase of the HRP algorithm, the portfolio weights obtained from the previous step are split between the two halves; the resulting weights remain between zero and one $(0\leqslant {w_{i}}\leqslant 1)$ and sum to one ($\forall i=1,\dots ,N$ and ${\textstyle\sum _{i=1}^{N}}{w_{i}}=1$) (De Prado, 2016; Uyar, 2019). At the end of this third and final phase, weights have been determined for all assets that the investor has included in the portfolio. A code sketch of the procedure follows.
De Prado (2016) attempts to close the gap in the literature by solving the problems of the Markowitz model, such as instability, concentration, and underperformance, with the HRP algorithm, creating portfolios that can outperform traditional risk-based portfolio strategies. However, the HRP algorithm also has some drawbacks, which can be summarized as follows (Sen, 2023):
• With a large number of assets, the hierarchical clustering algorithm used by HRP can be very time-consuming.
• The HRP algorithm does not explicitly consider the expected return on assets; it merely attempts to balance risk across all assets in the portfolio. This can lead to lower expected returns than other portfolio optimization methods.
• The HRP algorithm assumes that asset returns follow a normal distribution. Since asset returns are not normally distributed in practice, this can affect the accuracy of the algorithm.
• Since HRP is a relatively new method, its portfolio performance has not been comprehensively tested over long periods. The algorithm may therefore carry unforeseen risks and limitations.
3.2 Elbow Method
The Elbow method is a commonly recognized heuristic for determining the optimal number of clusters (k) within a given dataset (Kodinariya and Makwana, 2013; Bholowalia and Kumar, 2014; Abrar et al., 2023). The fundamental principle underpinning this technique is that the ideal cluster count corresponds to the point where introducing further clusters results in only a marginal, negligible improvement in the model’s performance or the variance explained by the clustering structure (Bholowalia and Kumar, 2014).
This methodology relies heavily on a graphical interpretation of the results. To apply it, the proportion of variance accounted for by the clustering is plotted against different values for k. Initial clusters naturally capture a significant amount of the dataset’s inherent information. However, as k continues to increase, the incremental benefit—the addition to the explained variance—experiences a sharp decline, creating a distinct “elbow” shape on the graph. This inflection point is interpreted as the most appropriate number of clusters, signifying the optimal value for k.
Implementation of the Elbow method is systematic: the value of k is typically initiated at two and is increased sequentially. At each step, clusters are computed, and a corresponding cost metric (often a measure of within-cluster variance) is calculated. While this cost metric decreases substantially for lower k values, it inevitably reaches a plateau as k continues to rise. The point at which this levelling-off begins effectively signals the optimal cluster size; beyond this threshold, additional clusters offer diminishing returns, as they become increasingly similar to existing ones, thus contributing little to the overall model’s explanatory power.
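A minimal sketch of this procedure on top of the HRP linkage matrix Y and the distance coordinates `d_bar` from the earlier sketches; within-cluster variance is used as the cost metric here, which is one common choice rather than the study's stated formula.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster

# For each candidate k, cut the tree into k clusters and compute the total
# within-cluster variance; the "elbow" of the resulting curve suggests k.
def elbow_curve(Y, d_bar, k_range=range(2, 11)):
    costs = []
    for k in k_range:
        labels = fcluster(Y, t=k, criterion="maxclust")
        cost = 0.0
        for c in np.unique(labels):
            members = d_bar[labels == c]                # coordinates of cluster c
            cost += ((members - members.mean(axis=0)) ** 2).sum()
        costs.append(cost)
    return list(k_range), costs  # plot costs against k and locate the elbow
```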
3.3 MEREC Method
The MEREC method is one of the objective criteria-weighting methods, developed by Keshavarz-Ghorabaee et al. (2021). In contrast to other criteria-weighting methods, MEREC measures the importance of each criterion through the change in the alternatives’ overall performance when that criterion is removed (Keshavarz-Ghorabaee et al., 2021).
The implementation steps of the MEREC method are shown in equations (10)–(18) (Keshavarz-Ghorabaee et al., 2021; Keshavarz-Ghorabaee, 2021).
Step 1: The first step of the MEREC method is to create the decision matrix. The decision matrix, which consists of m alternatives and n criteria, is created using equation (10).
In the decision matrix in equation (10), ${x_{ij}}$ shows the value of alternative i with respect to criterion j. This value should be greater than zero. If the decision matrix contains a negative value ${x_{ij}}$, it should be converted to a positive value using an appropriate method. In this study, negative values were converted to positive values using the Z-score standardization transformation, for which equation (11) and equation (12) were used (Zhang et al., 2014; Ayçin and Güçlü, 2020; Kundakcı and Arman, 2023).
In equation (12), the value of A satisfies $A\gt |\min {z_{ij}}|$.
Step 2: The decision matrix is normalized using equation (13) for benefit criteria and equation (14) for cost criteria.
The normalization process is similar to that used in methods such as WASPAS but differs in that it switches the formulas for benefit and cost criteria. In contrast to many other methods, MEREC converts all criteria into minimization-type criteria (Keshavarz-Ghorabaee et al., 2021).
Step 3: A logarithmic measure with equal criteria weights is used to determine the overall performance values of the alternatives. This logarithmic measure is based on a non-linear function shown in Fig. 5.

Fig. 5
The weights of the comparative analysis (Keshavarz-Ghorabaee et al., 2021).
Given the values obtained from the normalization matrix, smaller values of ${x_{ij}^{\ast }}$ result in larger overall performance values. This value is calculated using equation (15).
Step 4: As in the determination of the overall performance values, a logarithmic measure is also used in this step. In contrast to the previous step, the performance of each alternative is recalculated with each criterion removed in turn. These values are calculated using equation (16).
Step 5: The ${E_{j}}$ value, which indicates the removal effect of criterion j, is calculated for each criterion from the values obtained via equation (15) and equation (16), using equation (17).
Step 6: Using the ${E_{j}}$ values calculated by equation (17), the importance weight of each criterion is calculated by equation (18).
3.4 WEDBA Method
The WEDBA method, introduced by Rao and Singh (2012), is an MCDM technique grounded in the principle that the most favourable alternative is the one with the shortest Euclidean distance to the ideal solution, whereas the least desirable alternative is the one closest to the non-ideal solution (Rao and Singh, 2012). In this framework, the overall performance index of each alternative is computed from its Euclidean distances to both the ideal and non-ideal reference points. This structure ensures that alternatives are evaluated relative to ideal and non-ideal benchmarks rather than being compared directly with one another, thereby necessitating the use of Euclidean distance as the primary measure (Rao and Singh, 2012).
The procedural steps of the WEDBA method are presented in equations (19)–(28) (Rao and Singh, 2012; Işık, 2021).
Step 1: The first step of the WEDBA method is to construct the decision matrix, which is created using equation (10).
Step 2: The decision matrix is normalized using equation (19) for benefit criteria and equation (20) for cost criteria.
Step 3: The normalized values obtained by equation (19) and equation (20) are standardized by equation (21).
The expressions ${\mu _{j}}$ and ${\sigma _{j}}$ in equation (21) denote the mean and standard deviation of criterion j, respectively. The value of ${\mu _{j}}$ is calculated using equation (22) and ${\sigma _{j}}$ using equation (23).
Step 4: Ideal values are calculated using equation (24) and anti-ideal values using equation (25).
Step 5: The weighted Euclidean distances of the alternatives to the ideal points are calculated using equation (26), and the weighted Euclidean distances to the non-ideal points using equation (27).
Step 6: Equation (28) is used to calculate the index score, which is the final step of the WEDBA method.
An increase in an alternative’s index value indicates that it is closer to the ideal solution. Therefore, the alternative with the highest index value is the best in terms of performance, while the one with the lowest index value is the weakest alternative.
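A sketch of Steps 3–6 follows, assuming the index score of equation (28) takes the usual relative-closeness form $d^{-}/(d^{+}+d^{-})$ (the exact equations are not reproduced here); `X_norm` is the normalized decision matrix from Step 2 and `w` the MEREC weights.

```python
import numpy as np

# Sketch of the WEDBA ranking steps.
def wedba_scores(X_norm, w):
    X_norm, w = np.asarray(X_norm, float), np.asarray(w, float)
    Z = (X_norm - X_norm.mean(axis=0)) / X_norm.std(axis=0)  # eq. (21)
    ideal, anti = Z.max(axis=0), Z.min(axis=0)               # eqs. (24)-(25)
    d_ideal = np.sqrt((w * (Z - ideal) ** 2).sum(axis=1))    # eq. (26)
    d_anti = np.sqrt((w * (Z - anti) ** 2).sum(axis=1))      # eq. (27)
    return d_anti / (d_ideal + d_anti)                       # assumed eq. (28) form
```

The alternative with the largest score ranks first, consistent with the interpretation below.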
3.5 Portfolio Performance Evaluation Criteria
To compare the portfolios created using the HRP algorithm and the MCDM methods, evaluation criteria are needed. Although there are many alternatives in the literature for comparing portfolio performance, the Sharpe ratio, also known as the reward to volatility ratio based on standard deviation, is preferred in this study. The Sharpe ratio, which helps investors understand how well their investments are compensated for the risk they bear, is calculated using equation (
29) (Samarakoon and Hasan,
2006; Bechis,
2020).
${S_{p}}$: Sharpe ratio; ${r_{p}}$: portfolio return; ${r_{f}}$: risk-free interest rate; ${\sigma _{p}}$: standard deviation of the portfolio.
Diversifying the portfolio with securities that have low or negative correlations increases the Sharpe ratio, as it reduces the overall risk of the portfolio (Srivastava and Mazhar, 2018). In other words, a high portfolio return and a low standard deviation increase the Sharpe ratio, while a low portfolio return and a high standard deviation decrease it. Therefore, among alternative investment portfolio strategies, investors should favour the portfolio with the highest Sharpe ratio.
4 Application
The primary objective of this research is to evaluate firms’ financial performance and generate actionable insights for investors by leveraging the Hierarchical Risk Parity (HRP) algorithm—a machine learning-based portfolio optimization technique introduced by De Prado (2016)—in conjunction with Multi-Criteria Decision-Making (MCDM) methods.
To facilitate this analysis, we obtained the daily closing prices for all companies listed on the BIST 100 index, covering the 2018–2022 period, with data retrieved from the Bloomberg Terminal. The BIST 100 was specifically chosen because it encapsulates Turkey’s 100 largest corporations in terms of both market capitalization and trading volume. Furthermore, it functions as the central barometer for overall market dynamics in the Turkish economy, serving as the benchmark index for the BIST Equity Market and the foundation for the international MSCI Türkiye Index. Appendix A provides a complete list of the companies included in the analysis and their respective BIST 100 codes (Table A1).
Following the definition of the dataset and research scope, the empirical analysis was structured and executed in three distinct stages:
Stage 1: HRP Implementation. Portfolio optimization was initially conducted using the standalone HRP algorithm, applied to the daily closing price data of the BIST 100 stocks. The algorithm was implemented using Python, and seven distinct linkage methods—namely Ward, single, complete, average, weighted, centroid, and median—were employed throughout the clustering process.
Stage 2: Integrated HRP–MCDM Framework. This stage represents one of the study’s core novel contributions: the application of the integrated HRP–MCDM framework. To ensure a robust and transparent financial performance evaluation, the criteria themselves were established using the Delphi method. This was done through consultation with five portfolio management experts, triangulated with evidence drawn from the existing literature and relevant reports published by the Central Bank of the Republic of Türkiye (TCMB). This systematic and expert-informed selection process is intended to yield a meaningful contribution to current research. The specific criteria utilized in this evaluation are summarized in Table 4.
Table 4
Criteria used in the study.

| Criteria | Method of calculation | Criteria type | References |
| Current Ratio | Current assets / current liabilities | Benefit | Katrancı et al. (2025); Ertuğrul and Karakaşoğlu (2009) |
| Liquidity Ratio | (Current assets − inventories) / current liabilities | Benefit | Ertuğrul and Karakaşoğlu (2009) |
| Leverage Ratio | (Long-term liabilities + current liabilities) / total assets | Cost | Farrokh et al. (2016) |
| Inventory Turnover Ratio | Cost of goods sold / average inventory | Benefit | Moghimi and Anvari (2014); Katrancı et al. (2025) |
| Receivables Turnover Ratio | Total net sales / accounts receivable | Cost | Moghimi and Anvari (2014); Katrancı et al. (2025) |
| Return on Equity (ROE) | Net income / common equity | Benefit | Baydaş and Pamucar (2022) |
| Return on Assets (ROA) | Net income / total assets | Benefit | Karadeniz and İskenderoğlu (2011) |
The financial assessment relies on several critical ratios, each offering a distinct perspective on a firm’s operational efficiency, solvency, and profitability.
The Current Ratio is a foundational measure of a firm’s capacity to satisfy its short-term financial obligations, thus acting as a central indicator of working capital adequacy. It is calculated simply by dividing current assets by current liabilities. Companies whose current ratios align closely with the relevant industry benchmark are typically perceived as financially robust, whereas lower ratios signal a potentially increased liquidity risk.
The Liquidity Ratio, often known interchangeably as the acid-test ratio or quick ratio, offers a more immediate view of a company’s solvency position and overall financial resilience. This ratio specifically measures the firm’s ability to cover all its short-term liabilities using only its most liquid assets. The calculation is performed by deducting inventories from current assets and then dividing the remainder by current liabilities.
The Leverage Ratio quantifies the extent to which a company’s total assets are financed through external debt. Consequently, it reflects the firm’s capacity to service both its short-term and long-term financial obligations. It is derived by dividing total liabilities (encompassing both current and non-current debt) by total assets. A higher ratio suggests that a greater proportion of the firm’s asset base is reliant on debt funding.
The Inventory Turnover Ratio is a key indicator of supply chain and operational efficiency, showing how frequently a firm manages to replenish and sell off its inventories over a specified reporting period. It is obtained by dividing the cost of goods sold by the average inventory balance. In most cases, higher turnover rates imply effective inventory management and a relatively rapid conversion cycle from stock to sales.
The Receivables Turnover Ratio is a critical metric for evaluating the effectiveness of a company’s credit collection processes. It measures how efficiently a company collects revenue from its credit sales within a defined time frame. The ratio is computed by dividing net sales by the average balance of trade receivables. A high turnover rate—which should always be interpreted in light of the firm’s specific credit policies—suggests strong collection performance, while consistently lower values may signal collection inefficiencies or potential credit risk issues.
Return on Equity (ROE) directly reflects the profitability generated for shareholders relative to the capital they have invested in the firm. Calculated by dividing net profit by total equity, ROE is a vital metric for current and prospective investors, as well as for corporate strategists. Significantly higher ROE values are typically interpreted as signaling robust value creation, thereby enhancing the firm’s overall attractiveness for investment.
Return on Assets (ROA) assesses the efficiency with which a company utilizes its entire asset base to generate earnings. This profitability measure is calculated by dividing net profit after tax by the average total assets for the corresponding fiscal period. A high ROA value indicates a superior and more efficient use of the company’s assets in generating profit.
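For concreteness, the Table 4 ratios reduce to simple arithmetic over balance-sheet and income-statement items; the sketch below uses illustrative field names rather than any specific data vendor's schema.

```python
# Sketch of the Table 4 ratio calculations from financial-statement inputs;
# `f` is a dict of hypothetical field names, not a vendor schema.
def financial_ratios(f):
    return {
        "current_ratio": f["current_assets"] / f["current_liabilities"],
        "liquidity_ratio": (f["current_assets"] - f["inventories"]) / f["current_liabilities"],
        "leverage_ratio": (f["long_term_liabilities"] + f["current_liabilities"]) / f["total_assets"],
        "inventory_turnover": f["cogs"] / f["average_inventory"],
        "receivables_turnover": f["net_sales"] / f["accounts_receivable"],
        "roe": f["net_income"] / f["common_equity"],
        "roa": f["net_income"] / f["total_assets"],
    }
```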
Stage 3: The comparative performance of portfolios constructed using both the standalone HRP algorithm and the hybrid HRP-MCDM framework was meticulously assessed through the calculation and analysis of Sharpe ratios.
4.1 Clustering with HRP
In this study, portfolios were systematically constructed based on the Ward, single, complete, average, weighted, centroid, and median linkage methods within the HRP algorithm. The analysis utilized the daily closing prices of stocks listed in the BIST 100 Index for the period spanning January 2018 to December 2022. The dataset used in this study consists of the daily closing prices of BIST 100 Index stocks and is available for open access at https://zenodo.org/records/18140067.
The data acquisition and initial processing were conducted using the Python programming language. Yahoo Finance was selected as the primary data source due to its extensive coverage of major global indices (such as the Dow Jones, NASDAQ, and S&P 500) and its proven reliability for empirical finance research. However, because comprehensive historical data for all 100 BIST constituents were not fully available throughout the 2018–2022 period, the final, clean dataset was limited to 72 stocks for which complete information could be retrieved.
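A sketch of one way to reproduce this retrieval step with the yfinance package; the ticker subset below is illustrative rather than the study's full 72-stock universe, and BIST symbols carry the ".IS" suffix on Yahoo Finance.

```python
import yfinance as yf

# Hypothetical subset of BIST 100 tickers, for illustration only.
tickers = ["AKBNK.IS", "THYAO.IS", "SISE.IS"]
prices = yf.download(tickers, start="2018-01-01", end="2022-12-31")["Close"]
prices = prices.dropna()                  # keep only complete histories
returns = prices.pct_change().dropna()    # daily returns feeding the HRP pipeline
```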
In strict adherence to the HRP framework, portfolio weights were precisely calculated using the following equations for each linkage method: equation (3) for single linkage, equation (4) for complete linkage, equation (5) for average linkage, equation (6) for weighted linkage, equation (7) for centroid linkage, equation (8) for Ward linkage, and equation (9) for median linkage. The resulting final portfolio allocations are comprehensively detailed in Table 5.
Table 5
Portfolio results according to the HRP algorithm.

| Linkage | Average return | Maximum return | Minimum return | Range | Standard deviation | Skewness | Kurtosis | Sharpe ratio |
| Single | 0.001601 | 0.062471 | −0.102362 | 0.164833 | 0.015062 | −1.624831 | 8.214356 | 10.63% |
| Complete | 0.001566 | 0.061674 | −0.102188 | 0.163861 | 0.015050 | −1.629962 | 8.228505 | 10.41% |
| Average | 0.001574 | 0.061440 | −0.102709 | 0.164149 | 0.014969 | −1.632317 | 8.374082 | 10.51% |
| Centroid | 0.001582 | 0.061115 | −0.102220 | 0.163335 | 0.014852 | −1.629135 | 8.260917 | 10.65% |
| Median | 0.001576 | 0.061519 | −0.102424 | 0.163943 | 0.014940 | −1.645647 | 8.352952 | 10.55% |
| Weighted | 0.001592 | 0.062312 | −0.102403 | 0.164715 | 0.015051 | −1.615115 | 8.159211 | 10.58% |
| Ward | 0.001562 | 0.063521 | −0.102474 | 0.165994 | 0.015076 | −1.601175 | 8.074427 | 10.36% |
Table 5 presents a comprehensive overview of the portfolio outcomes generated by applying the standalone HRP algorithm to the BIST 100 Index data.
An initial review of the results reveals that portfolios constructed using the single and centroid linkage methods exhibited the most robust performance, delivering the strongest results in terms of both average returns and Sharpe ratios (a key risk-adjusted performance metric).
When the portfolios are assessed solely based on risk, as measured by standard deviation, the Ward linkage method resulted in the highest recorded level of volatility. Conversely, the centroid linkage method was the most effective in reducing risk, producing the lowest volatility across all tested criteria.
Further examination of the return distribution statistics indicates that the return distributions for all linkage approaches are left-skewed relative to a standard normal distribution. This finding suggests that large negative returns occur more frequently than comparably large positive ones. Regarding kurtosis (a measure of the “tailedness” of the distribution), the highest values were observed in portfolios formed using the average and median linkage methods, while the lowest values were associated with the weighted and Ward linkage approaches. These results imply that portfolios constructed with the average and median linkage methods exhibit a higher probability of experiencing extreme price fluctuations (both large gains and large losses) compared to those created using the other clustering criteria.
4.2 Determining the Number of Clusters with Elbow
Since the introduction of the HRP algorithm by De Prado (2016), various researchers have identified several inherent shortcomings and weaknesses. The most significant of these practical issues are the chaining problem and the algorithm’s inability to determine the optimal number of clusters.
The chaining problem stems from the sensitivity of HRP’s linking methods to specific data values. This sensitivity can cause clusters to become unduly long, distributed, and heterogeneous, resulting in groupings that contain assets which, based on their risk profile, should not logically be clustered together. This situation is highly undesirable for both researchers and investors, as the addition of a seemingly minor asset can disproportionately alter a cluster’s variance. Such structural instability leads to distorted asset weighting and potentially erroneous investment decisions.
The second major issue arises when the optimal number of clusters cannot be determined accurately. In this scenario, the algorithm may treat every individual entity as a separate cluster, effectively failing to recognize the relevant underlying structure of the data. This failure can be characterized as a form of overfitting, which carries potentially detrimental consequences. Overfitting occurs when a model is so closely tailored to specific, potentially noisy, historical observations that it fails to capture the true, general structure of the market. Attempting to fit the model too tightly to these slightly erroneous data points can introduce significant biases and substantially diminish the model’s predictive power.
To effectively eliminate the structural problems caused by the HRP algorithm and to establish the necessary framework for the Multi-Criteria Decision-Making (MCDM) methods, the popular Elbow method has been integrated into the HRP algorithm.
This integration was performed specifically to determine the optimal number of clusters. The first practical step of this enhanced HRP algorithm was to partition the assets—based on the daily closing data of companies traded in the BIST 100 Index—into distinct clusters according to the predefined linkage methods.
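In practice, once the Elbow curve fixes k for a given linkage matrix Y, the dendrogram can be cut into flat clusters that feed the subsequent MCDM stage; a sketch (with `assets` as the ordered ticker list) is:

```python
from scipy.cluster.hierarchy import fcluster

# Cut the dendrogram encoded by the linkage matrix Y into k flat clusters
# (k taken from the Elbow curve) and group the ticker labels accordingly.
def cut_clusters(Y, assets, k_opt):
    labels = fcluster(Y, t=k_opt, criterion="maxclust")
    clusters = {}
    for asset, label in zip(assets, labels):
        clusters.setdefault(label, []).append(asset)
    return clusters  # each group is then evaluated with MEREC-WEDBA
```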
Based on the single linkage method, companies listed on the BIST 100 Index are grouped into six clusters, as shown in Fig. 6.

Fig. 6
Companies in clusters based on the single linkage method.

Fig. 7
Companies in clusters based on the complete linkage method.
Based on the complete linkage method, companies listed on the BIST 100 Index are grouped into six clusters, as shown in Fig. 7.
Based on the average linkage method, companies listed on the BIST 100 Index are grouped into six clusters, as shown in Fig. 8.

Fig. 8
Companies in clusters based on the average linkage method.

Fig. 9
Companies in clusters based on the centroid linkage method.
Based on the centroid linkage method, companies listed on the BIST 100 Index are grouped into six clusters, as shown in Fig. 9.
Based on the median linkage method, companies listed on the BIST 100 Index are grouped into seven clusters, as shown in Fig. 10.

Fig. 10
Companies in clusters based on the median linkage method.

Fig. 11
Companies in clusters based on the weighted linkage method.
Based on the weighted linkage method, companies listed on the BIST 100 Index are grouped into seven clusters, as shown in Fig. 11.
Based on the Ward linkage method, companies listed on the BIST 100 Index are grouped into eight clusters, as shown in Fig. 12.

Fig. 12
Companies in clusters based on the Ward linkage method.
4.3 Portfolio Transactions with HRP-MCDM Approach
The distinct cluster structures derived from the enhanced HRP algorithm—integrated with the Elbow method—are presented across the various linkage methods as follows: Fig. 6 illustrates the clusters generated using single linkage, Fig. 7 depicts those obtained via complete linkage, Fig. 8 shows the results for average linkage, Fig. 9 for centroid linkage, Fig. 10 for median linkage, Fig. 11 for weighted linkage, and Fig. 12 for Ward linkage.
Following this robust clustering phase, a separate MCDM procedure was applied to the asset sets produced under each linkage method. Within this hybrid framework, the assignment of criterion weights is a critical step, as it has a direct and substantial influence on the final portfolio outcomes. For this reason, the necessary criterion weights corresponding to each specific cluster were computed independently using the MEREC method, an objective weighting technique.
The MEREC method was selected for several strategic reasons: it is a relatively recent addition to the literature, has only been utilized in a limited number of financial studies, and its characteristics align well with the BIST 100 dataset used in this research. Methodologically, MEREC offers notable advantages: it possesses a solid mathematical foundation, avoids unnecessary computational complexity, operates entirely without subjective input from decision makers, and has been applied in conjunction with the WEDBA method in a small number of prior studies. Crucially, its integration with the HRP algorithm is novel, further validating its use in this research.
Once the criterion weights for every cluster were objectively determined, the financial performance of the firms within each cluster was rigorously assessed using the WEDBA method. WEDBA was chosen because, despite being one of the earlier techniques in the MCDM literature, it remains underutilized in practical financial decision-making applications. It has appeared in only a few finance-related studies—often in combination with MEREC—and, like MEREC, has not been previously employed alongside the HRP algorithm. Additionally, its established computational structure is highly suitable for analysing the characteristics of the specific dataset used in this study.
The final portfolio outcomes produced through the integrated HRP-MCDM framework—using the Ward, single, complete, average, weighted, centroid, and median linkage methods—are concisely summarized in Table 6. In addition, the application steps for the MEREC and WEDBA methods, for the clusters obtained under each linkage criterion, are available for open access at https://zenodo.org/records/18140067.
Table 6
Portfolio results according to the HRP-MCDM approach.

| Linkage | Average return | Maximum return | Minimum return | Range | Standard deviation | Skewness | Kurtosis | Sharpe ratio |
| Single | 0.001752 | 0.067922 | −0.106825 | 0.174746 | 0.015754 | −1.574484 | 7.912521 | 11.12% |
| Complete | 0.001654 | 0.070020 | −0.103507 | 0.173527 | 0.015485 | −1.513177 | 7.551657 | 10.68% |
| Average | 0.001602 | 0.067084 | −0.109381 | 0.176465 | 0.015970 | −1.516512 | 7.653730 | 10.03% |
| Centroid | 0.001682 | 0.068849 | −0.103575 | 0.172424 | 0.015659 | −1.531529 | 7.662994 | 10.47% |
| Median | 0.001650 | 0.068843 | −0.103370 | 0.172213 | 0.015610 | −1.579865 | 7.985782 | 10.57% |
| Weighted | 0.001565 | 0.066707 | −0.102828 | 0.169534 | 0.015265 | −1.553718 | 7.988663 | 10.25% |
| Ward | 0.001728 | 0.070793 | −0.102738 | 0.173531 | 0.015478 | −1.544393 | 7.992812 | 11.17% |
| MCDM | 0.001630 | 0.068443 | −0.103784 | 0.172228 | 0.015673 | −1.609712 | 8.070063 | 10.40% |
Table 6 provides the detailed performance outcomes for the portfolios constructed under the hybrid HRP-MCDM framework using the stocks listed in the BIST 100 Index.
A meticulous inspection of these results reveals that the portfolios generated using the single and Ward linkage methods delivered the most robust overall performance, particularly in relation to both mean return and the crucial Sharpe ratio.
When the portfolios are comparatively assessed based on risk (standard deviation), the average-linkage portfolios exhibited the highest recorded volatility. Conversely, the weighted-linkage portfolios successfully displayed the lowest risk levels.
In terms of distribution characteristics, the skewness values consistently suggest that the return distributions across all linkage methods are negatively skewed relative to the standard normal distribution, implying a tendency toward larger negative return events. Concerning kurtosis, the Ward and weighted linkage portfolios demonstrated the most pronounced “heavy-tail” behaviour (leptokurtosis), while the complete and average linkage portfolios showed the lowest values. These findings suggest that the Ward and weighted linkage methods are associated with a greater likelihood of extreme return fluctuations compared to the other methods.
Finally, following the implementation of portfolio optimization using the traditional MEREC and WEDBA MCDM methods alone, a key finding emerged: a comparative analysis of the results in Table 6 indicates that the hybrid HRP-MCDM portfolios generally outperform those created exclusively through traditional, standalone MCDM procedures.
Beyond this general performance comparison, a more nuanced evaluation yields several additional insights into the structural behaviour of the proposed hybrid model. The consistent increase in Sharpe ratios—most prominently observed for the single and Ward linkage strategies—demonstrates that augmenting HRP with objective MCDM-based evaluation effectively reduces the algorithm’s sensitivity to clustering noise and substantially enhances the overall risk–return alignment.
The empirical evidence also suggests that linkage techniques which inherently produce more homogeneous or economically meaningful clusters benefit disproportionately from the MCDM integration. This indicates that cluster quality directly mediates the effectiveness of the hybrid model. Although the HRP-MCDM approach might marginally increase volatility for some specific linkage configurations, this is typically offset by a corresponding increase in average returns, which ultimately strengthens the risk-adjusted performance. This observed pattern confirms that the combination of MEREC-based weighting and WEDBA-based ranking systematically shifts allocations toward financially stronger firms within each HRP-generated cluster.
Overall, the empirical results decisively demonstrate that the hybrid model not only optimizes weight reallocation but also systematically mitigates structural weaknesses inherent in the original HRP algorithm—particularly its tendency to over-allocate to underperforming assets in elongated cluster structures—thereby establishing a more resilient, balanced, and superior portfolio optimization procedure.
5 Conclusions and Recommendations
One of the most complex challenges facing both individual and institutional investors is the identification of securities capable of delivering sustained superior returns. In response to this need, Markowitz (1952) established the foundation for modern finance with the Mean-Variance Optimization (MVO) model, designed to maximize expected return for a given risk level. Despite its historical significance, the MVO framework has attracted widespread criticism, primarily for its persistent tendency to yield highly concentrated portfolios comprising only a limited selection of assets—a structural flaw that has resisted resolution through conventional methodological advances.
In light of these practical constraints, the field has transitioned toward alternative models. The technological explosion of financial data has strained the scalability of traditional optimization, accelerating the adoption of artificial intelligence–based techniques. Among these, the Hierarchical Risk Parity (HRP) algorithm—rooted in graph theory and machine learning—has emerged as a powerful countermeasure to the concentration problem. However, HRP’s performance superiority over MVO often remains constrained. Therefore, this study advanced the HRP algorithm by integrating MCDM methods—specifically MEREC and WEDBA—to create a novel hybrid portfolio framework capable of offering more comprehensive and robust investment recommendations. Using real market data from the BIST 100 Index, the hybrid construction was carried out in two principal phases.
The analysis first involved portfolio weighting via the HRP algorithm across seven distinct linkage methods (Ward, single, complete, average, weighted, centroid, and median). This was enhanced in the second phase by incorporating the Elbow method into HRP, which successfully identified the optimal number of clusters for refinement (e.g., eight clusters under Ward, six under single, etc.). Following clustering, each configuration was analysed using the MEREC method for objective criterion weighting (based on financial criteria such as current ratio, ROE, and leverage) and the WEDBA method for asset ranking.
A rigorous evaluation of the findings yielded the following critical observations:
Mean Return Dominance: The HRP-MCDM portfolios successfully yielded the highest mean returns under six of the seven linkage methods (Ward, single, complete, average, centroid, and median). The original HRP algorithm proved marginally superior only under the weighted linkage configuration.
Risk and Distribution: The HRP-MCDM approach demonstrated higher volatility (standard deviation) across all linkage types when compared against the baseline HRP. However, the return distributions for both the HRP and HRP-MCDM portfolios were consistently left-skewed (negatively skewed), and both approaches exhibited relatively heavy tails (high kurtosis), indicating a consistent, heightened probability of extreme market movements.
Sharpe Ratio Superiority: The most critical finding emerged from the risk-adjusted comparison: HRP-MCDM significantly outperformed HRP based on the Sharpe ratio under the Ward, single, complete, and median linkage methods. Conversely, HRP produced superior Sharpe ratios under the average, centroid, and weighted linkage methods.
Overall, the empirical evidence overwhelmingly suggests that investors seeking to enhance risk-adjusted portfolio performance and increase capital value should strategically consider adopting the HRP-MCDM approach, particularly when utilizing linkage methods that favour cluster homogeneity (like Ward and single linkage).
There are several directions that future research could take to further refine and broaden the scope of this study. To begin with, both investors and researchers who intend to apply the extended HRP–MCDM framework may obtain more meaningful insights by working with datasets that include a wider range of asset classes and cover different market environments. Expanding the data in this way would offer a clearer view of how the model behaves under varying economic conditions. Another line of inquiry could involve comparing the proposed HRP–MCDM approach with more traditional optimization techniques, such as the classic Mean–Variance Optimization (MVO). Such comparisons would help reveal whether the extended framework provides noticeable improvements in terms of risk management or portfolio stability. It may also be worthwhile to explore hybrid designs in future studies. Integrating alternative distance metrics or incorporating other MCDM methods into the HRP–MCDM structure could make the model more versatile and adaptable to different decision-making settings. Finally, developing a practical coding framework that can automatically retrieve up-to-date market data and present it to users would greatly enhance the applicability of the approach, particularly for investors who rely on timely information.
Disclosure statement No potential conflict of interest was reported by the author(s).
Funding No funding was received for this research.