Abstract
The article focuses on the presentation and comparison of selected heuristic algorithms for solving the inverse problem for the anomalous diffusion model. The considered mathematical model consists of a time-space fractional diffusion equation with initial and boundary conditions. Models of this kind are used in modelling the phenomena of heat flow in porous materials. In the model, Caputo and Riemann-Liouville fractional derivatives were used. The inverse problem consisted in identifying the orders of the derivatives and recreating the fractional boundary condition. Since inverse problems of this kind are ill-conditioned, the problem should be considered hard to solve. Therefore, to solve it, metaheuristic optimization algorithms popular in the scientific literature were used and their performance was compared: Group Teaching Optimization Algorithm (GTOA), Equilibrium Optimizer (EO), Grey Wolf Optimizer (GWO), War Strategy Optimizer (WSO), Tuna Swarm Optimization (TSO), Ant Colony Optimization (ACO), Jellyfish Search (JS) and Artificial Bee Colony (ABC). This paper presents computational examples showing the effectiveness of the considered metaheuristic optimization algorithms in solving the inverse problem for the anomalous diffusion model.
1 Introduction
Nowadays, when all kinds of computational and simulation methods are becoming more and more important and the processing power of computers keeps increasing, it is worth developing and looking for new applications of tools of this type. In practice, many classical numerical methods are successfully used, developed and adapted to current problems. On the other hand, in recent years scientists have been developing artificial intelligence methods, which include metaheuristic optimization algorithms. In this paper, we focus on combining a classical numerical method for solving a differential equation with selected heuristic algorithms for solving the inverse problem.
In the scientific literature, many works can be found on the use of fractional derivatives to model processes occurring in physics and engineering (Bhangale et al., 2023; Brociek et al., 2017, 2019; Sowa et al., 2023; Bohaienko and Gladky, 2023; Koleva et al., 2021). In particular, derivatives of this type are used to model anomalous diffusion; heat flow in porous materials can be mentioned as an example (Brociek et al., 2017, 2019). In the article (Brociek et al., 2019), the authors model the phenomenon of heat flow in a porous medium. For this purpose, several mathematical models were used, including ones based on fractional derivatives. The results of numerical experiments were compared with measurement data, and the models using fractional derivatives proved to be more accurate than the model with integer-order derivatives. Sowa et al. (2023) used fractional calculus and presented an extended model of a voltage regulator. The modelling approach presented in that paper was verified against measurement data, and the non-integer order Caputo derivative proved to be an effective tool. Another application of fractional derivatives in mathematical modelling can be found in the paper (Bohaienko and Gladky, 2023), where a model for predicting the dynamics of moisture transport in fractal-structured soils was presented. The model incorporates the Caputo derivative, and the numerical solution was obtained using the Crank-Nicolson finite-difference scheme. More examples and trends in the application of fractional calculus in various scientific fields are available in the publications (Hristov, 2023b; Obembe et al., 2017; Ionescu et al., 2017). Publications related to numerical methods in the field of fractional calculus are also worth mentioning (Ciesielski and Grodzki, 2024; Hou et al., 2023; Arul et al., 2022; Hristov, 2023a; Podlubny, 1999; Stanisławski, 2022).
In many engineering problems, there is a need to solve what is known as an inverse problem. In simple terms, such problems involve identifying the input parameters of a model (e.g. boundary conditions or material parameters) based on observations (outputs) of the model. Typically, these problems are challenging because they are ill-posed (Aster et al., 2013c; Kaipio and Somersalo, 2005). Examples of inverse problems in various applications are included in the articles (Montazeri et al., 2022; Brociek et al., 2024; Ashurov et al., 2023; Wang et al., 2023; Ibraheem and Hussein, 2023; Brociek et al., 2023; Magacho et al., 2023). For example, the article (Brociek et al., 2024) focuses on an inverse problem concerning the identification of the aerothermal heating of a reusable launch vehicle based on temperature measurements taken in the thermal protection system (TPS) of the vehicle. The mathematical model of the TPS presented in that paper takes into account the temperature dependence of the material parameters, as well as the thermal resistances occurring in the contact zones of the layers. To solve the inverse problem, the Levenberg-Marquardt method was applied.
One approach to solving inverse problems is to create a fitness function (also called a loss or error function) and then optimize it to find the identified parameters. In this context, metaheuristic optimization algorithms can be very effective. In the paper (Hassan and Tallman, 2021), the authors utilize genetic algorithms, simulated annealing, and particle swarm optimization to solve the piezoresistive inverse problem in self-sensing materials. The considered problem was ill-posed and multi-modal, and the results obtained in the study indicate that the genetic algorithm proved to be the most effective. As another example, the article (Al Thobiani et al., 2022) addresses an inverse problem of crack identification in two-dimensional structures. The authors combine the eXtended Finite Element Method (XFEM) with the original Grey Wolf Optimization (GWO) and with a GWO improved by Particle Swarm Optimization (IGWO). The use of heuristic optimization algorithms for inverse problems in models with fractional derivatives can be found in the papers (Brociek et al., 2020; Brociek and Słota, 2015). In both of these articles, the Ant Colony Optimization (ACO) algorithm was applied: the first publication includes a comparison with an iterative method, while in the second article the heat flux on the boundary was identified. Additionally, the papers (Kalita et al., 2023; Alyami et al., 2024; Zhang and Chi, 2023) address metaheuristic optimization algorithms and their applications.
In this article, an algorithm for solving the inverse problem for the equation of anomalous diffusion is presented. This equation is a partial differential equation with fractional derivatives. The Caputo derivative was adopted as the derivative with respect to time, while the Riemann-Liouville derivative was utilized for the derivative with respect to space. In the considered inverse problem, the objective was to identify the function appearing in the fractional boundary condition as well as the orders of the derivatives. To achieve this, several metaheuristic optimization algorithms were used and compared.
2 Mathematical Model of Anomalous Diffusion with Fractional Boundary Condition
In this article, we consider a mathematical model of anomalous diffusion with fractional derivatives both in time and space. Models of this type can effectively be used to model mass and heat transport phenomena in porous media (Brociek et al., 2019; Sobhani et al., 2023; Kukla et al., 2022). The model consists of the following fractional-order partial differential equation:
The equation is supplemented with the initial condition:
and boundary conditions:
It is important to note the boundary condition (4) at the right boundary of the considered domain. It takes the form of a Robin boundary condition with a fractional derivative. In the differential equation (1), two different fractional-order derivatives are assumed. The derivative of order $\alpha \in (0,1)$ with respect to time is a Caputo-type derivative, defined by the following formula:
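The defining formula, equation (5), is not reproduced in this excerpt; for $\alpha \in (0,1)$ the standard Caputo definition reads:

```latex
\frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}}
  = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t}
    (t-s)^{-\alpha}\, \frac{\partial u(x,s)}{\partial s}\, \mathrm{d}s .
```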
In the case of the derivative with respect to space, a fractional-order derivative of order $\beta \in (1,2)$ of Riemann-Liouville type was applied:
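Equation (6) is likewise not reproduced here; for $\beta \in (1,2)$ the standard left-sided Riemann-Liouville definition reads:

```latex
\frac{\partial^{\beta} u(x,t)}{\partial x^{\beta}}
  = \frac{1}{\Gamma(2-\beta)}\, \frac{\partial^{2}}{\partial x^{2}}
    \int_{0}^{x} \frac{u(s,t)}{(x-s)^{\beta-1}}\, \mathrm{d}s .
```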
In the model, it is also assumed that λ is a continuous, positive function called the diffusion coefficient, and that the functions f, φ, and ψ are also continuous. To model and run simulations effectively, the first step is to solve equations (1)–(4). This task is known as the direct problem (or forward problem). In the next section, a numerical method for solving the considered equation is described.
3 Numerical Method for the Forward Problem
This article primarily focuses on the inverse problem. The presented approach requires optimization of the fitness function; however, in the optimization process it is necessary to repeatedly solve equations (1)–(4), that is, the so-called forward problem. To solve it, an implicit finite difference scheme is applied.
Firstly, the considered domain $\Omega =[0,T]\times [0,L]$ is discretized, resulting in a grid $S=\{({x_{i}},{t_{k}}):\hspace{2.5pt}i=0,1,\dots ,N,\hspace{2.5pt}k=0,1,\dots ,K\}$ with steps $\Delta x=\frac{L}{N}$, ${x_{i}}=i\Delta x$, $\Delta t=\frac{T}{K}$, ${t_{k}}=k\Delta t$. Then, for all functions involved in the model, the values at the grid points S are determined. We use the following notation: ${\lambda _{i}^{k}}=\lambda ({x_{i}},{t_{k}})$, ${f_{i}^{k}}=f({x_{i}},{t_{k}})$, ${\varphi _{i}}=\varphi ({x_{i}})$, ${\psi ^{k}}=\psi ({t_{k}})$. Let ${u_{i}^{k}}=u({x_{i}},{t_{k}})$ denote the values of the exact solution at the points $({x_{i}},{t_{k}})$, and ${U_{i}^{k}}$ represent the corresponding values obtained from the numerical solution.
To derive the implicit finite difference scheme, we need to apply the approximation of the Riemann-Liouville derivative (6) in the form of the shifted Grünwald formula (Tian et al., 2015; Tadjeran and Meerschaert, 2007):
where ${g_{\beta ,j}}=\frac{\Gamma (j-\beta )}{\Gamma (-\beta )\Gamma (j+1)}$. Then, we approximate the Caputo derivative (5) using the following formula (Lin and Xu, 2007):

The derivative appearing in the boundary condition (4), after considering the zero condition at the left boundary, is approximated as follows:

Combining all of the above approximations and performing appropriate transformations, we obtain the implicit difference scheme, which can be expressed in matrix form as:
where ${b_{j}}={(j+1)^{1-\alpha }}-{j^{1-\alpha }}$, and the vectors ${U^{k}}$, ${Q^{k}}$, ${F^{k}}$ have the following form:

The matrix ${A^{k}}$, dependent on the time step k, is of size $N\times N$. Its coefficients are determined by the following formula:

In the above formula, the symbol r is defined as $r=\frac{{(\Delta t)^{\alpha }}\Gamma (2-\alpha )}{{(\Delta x)^{\beta }}}$. Equations (10)–(11) define systems of linear equations; solving these systems yields the approximate values of the function u at the grid points S. The article (Xie and Fang, 2020) includes theorems regarding the stability and convergence of the numerical scheme (10)–(12). For the scheme under consideration, the convergence order is $O({(\Delta t)^{2-\alpha }}+{(\Delta x)^{2}})$.
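The two coefficient families above are cheap to generate in code. The sketch below (Python with NumPy/SciPy; function names are ours) computes the shifted Grünwald weights ${g_{\beta ,j}}$ via the recurrence $g_{\beta,0}=1$, $g_{\beta,j}=g_{\beta,j-1}\,(j-1-\beta)/j$, which is algebraically equivalent to the Gamma-function formula above, together with the Caputo weights ${b_{j}}$:

```python
import numpy as np
from scipy.special import gamma

def grunwald_weights(beta, n):
    """Shifted Grünwald weights g_{beta,j} = Gamma(j-beta)/(Gamma(-beta)*Gamma(j+1)),
    computed by the stable recurrence g_0 = 1, g_j = g_{j-1} * (j-1-beta)/j."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for j in range(1, n + 1):
        g[j] = g[j - 1] * (j - 1 - beta) / j
    return g

def caputo_weights(alpha, n):
    """Weights b_j = (j+1)^(1-alpha) - j^(1-alpha) of the L1-type Caputo scheme."""
    j = np.arange(n + 1, dtype=float)
    return (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)

beta, alpha = 1.5, 0.6
g = grunwald_weights(beta, 10)
b = caputo_weights(alpha, 10)
# Spot-check the recurrence against the closed Gamma form for one index:
g3_closed = gamma(3 - beta) / (gamma(-beta) * gamma(4))
```

Note that $g_{\beta,0}=1$ and $g_{\beta,1}=-\beta$, the familiar leading terms of the Grünwald expansion; the $b_j$ enter the time stepping through (10)–(11).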
4 Description of Inverse Problem
In mathematical modelling, as well as in various computer simulations, it is essential to use proper mathematical models. In this paper, a time-space fractional diffusion equation (TSFDE) with a fractional boundary condition is considered. This model was described in Section 2 and can be used as an effective tool for modelling heat conduction in porous materials (Brociek et al., 2019). To solve the model described by (1)–(4), it is necessary to know full information about the model input, such as material parameters, geometry or model coefficients. In many engineering problems, it is impossible to know all of the model's information, whether because of a lack of measuring equipment or the difficulty of choosing the parameters. The usual problem is to choose the model's input parameters so that the model's output (e.g. temperature measurements at a chosen control point) takes the proper values. Problems of this sort are called inverse problems and are usually hard to solve (Özişik and Orlande, 2021; Aster et al., 2013a, 2013b, 2013c). To put it simply, the problem consists in identifying the input parameters so that the model fits the measurement data (part of the model's output) as closely as possible.
In this article, in order to solve the inverse problem, an approach is presented which involves optimizing the following fitness function:
The symbols ${a_{0}},{a_{1}},\dots ,{a_{\textit{dim}}}$ denote the unknown input parameters of the model, which are to be identified. The objective function $\mathcal{F}$ depends on these parameters. Here $\textit{dim}+1$ denotes the size of the optimization task, W is the number of data points in the model's output (e.g. measurement data), ${u_{w}}({a_{0}},{a_{1}},\dots ,{a_{\textit{dim}}})$ $(w=1,2,\dots ,W)$ denotes the output values calculated from the model for fixed input parameters ${a_{0}},{a_{1}},\dots ,{a_{\textit{dim}}}$, and ${\overline{u}_{w}}$ denotes the output data (measurement data) to which the model should fit. Thus, the function $\mathcal{F}$ measures "how close" the values calculated from the model (for fixed input parameters ${a_{0}},{a_{1}},\dots ,{a_{\textit{dim}}}$) are to the output values given in the problem (e.g. measurement data). Solving the inverse problem consists in finding the minimum of the fitness function with respect to the unknown parameters ${a_{0}},{a_{1}},\dots ,{a_{\textit{dim}}}$. Hence, the use of selected metaheuristic algorithms for finding the minimum of the fitness function is justified. In Section 6, a computational example is presented and the algorithms' efficiency is discussed. Figure 1 schematically presents the data flow in the forward and inverse problems.
Fig. 1
Data flow diagram for forward and inverse problem.
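The optimization loop only ever sees the scalar value of the fitness function (13). A minimal Python sketch of how such a function can be wired up is shown below; `solve_forward` is a placeholder for the implicit-scheme solver of Section 3, and the least-squares form follows the verbal description of $\mathcal{F}$ above (treat the concrete form as an assumption, since equation (13) itself is not reproduced here):

```python
import numpy as np

def make_fitness(measured, solve_forward):
    """Build the fitness F(a) = sum_w (u_w(a) - u_bar_w)^2.
    `measured` holds the W reference outputs u_bar_w; `solve_forward(a)`
    must return the W model outputs for the parameter vector a."""
    measured = np.asarray(measured, dtype=float)

    def fitness(a):
        u = np.asarray(solve_forward(a), dtype=float)
        return float(np.sum((u - measured) ** 2))

    return fitness

# Toy stand-in for the forward solver: output depends linearly on the parameters.
def toy_forward(a):
    t = np.linspace(0.0, 1.0, 5)
    return a[0] + a[1] * t

F = make_fitness(toy_forward([0.6, 1.5]), toy_forward)
```

By construction, $\mathcal{F}$ vanishes exactly at the reference parameters and grows as the model output drifts away from the measurements, which is the quantity the metaheuristics of Section 5 minimize.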
5 Metaheuristics Optimization Algorithms
Heuristic optimization algorithms searching for the minimum of an objective function are based on simulating group intelligence and communication between individuals in order to search the space effectively. They are used for finding points in the search space close to the optimal one (the global minimum) in terms of the fitness function. Very commonly, the fitness function describes a certain dependence (e.g. the error of an approximate solution) which, in an engineering problem, should have the lowest possible value (a minimization problem). In contrast to classical mathematical methods, heuristic algorithms place few requirements on the objective function, which is their biggest advantage. Usually, two phases of searching can be distinguished within heuristic optimization algorithms: an exploration phase, which searches through as much of the space as possible, and an exploitation phase, which looks for good quality solutions in a narrow part of the searched space. Algorithms of this sort use different techniques, from simple local search to advanced evolutionary processes, and they use mechanisms that prevent the method from getting stuck in a limited area of the search space (falling into a local minimum). These algorithms are independent of a specific problem (they do not depend on the fitness function). They use knowledge about the problem and/or experience accumulated in the process of searching the domain (agents "communicate" with one another), and the quality of solutions does not decrease as the iterations progress. Algorithms of this kind fall into the category of probabilistic algorithms, meaning that their method of operation includes some random elements. However, a well-tuned algorithm should, in the vast majority of cases, provide solutions that are close to one another.
The biggest problem with heuristic optimization algorithms is tuning them properly for the problem to be solved. Metaheuristic optimization algorithms can be divided into four major groups:
• Swarm Intelligence (SI) algorithms. Inspired by the behaviour of a swarm or a group of animals in nature. Examples: Ant Colony Optimization (ACO), Artificial Bee Colony (ABC) or Firefly Algorithm (FA).
• Evolutionary Algorithms (EA). Their description comes from natural behaviours occurring in evolutionary biology. Examples: Genetic Algorithm (GA), Differential Evolution (DE).
• Physics-based Algorithms (PhA). PhA algorithms base their description on the laws of physics. Examples: Gravitational Search Algorithm (GSA), Electromagnetic Field Optimization (EFO).
• Human-based algorithms. By simulating certain natural human behaviours, researchers have proposed several algorithms for solving optimization problems. Examples: Group Teaching Optimization Algorithm (GTOA), Collective Decision Optimization (CDO).
These algorithms have gained interest because of their effectiveness in various engineering optimization problems; examples of their usefulness include the publications (Kalita et al., 2023; Alyami et al., 2024; Zhang and Chi, 2023; Brociek and Słota, 2015). In this paper, several metaheuristic optimization algorithms were used and compared, of which the three best (in terms of solving the inverse problem for the model with fractional derivatives) are described in the following subsections.
5.1 Group Teaching Optimization
The Group Teaching Optimization Algorithm (GTOA), described in this section, takes inspiration from the process of group teaching. The goal of the process is to improve the knowledge of a group of students. Teaching can be realized in different ways: through learning with a teacher, exchanging knowledge between students, or self-improvement. Each student acquires knowledge with different efficiency, so it is natural to divide the students into two groups: students with normal abilities and outstanding students. The teacher, in order to maximize the result, must use different methods when teaching each group. All of these mechanisms were used as inspiration in creating the Group Teaching Optimization Algorithm (Zhang and Jin, 2020).
For the presentation of the algorithm, the following notation is used: dim – dimension of the task, N – number of students in the population, $\mathcal{F}$ – fitness function, and:
${\mathbf{x}_{i}^{t}}=[{x_{i,1}^{t}},{x_{i,2}^{t}},\dots ,{x_{i,\textit{dim}}^{t}}]$ – i-th student during iteration t, before learning with a teacher,
${\mathbf{x}_{\textit{teacher}}^{t}}=[{x_{\textit{teacher},1}^{t}},{x_{\textit{teacher},2}^{t}},\dots ,{x_{\textit{teacher},\textit{dim}}^{t}}]$ – teacher in iteration t,
${\mathbf{x}_{{\textit{ALT}_{i}}}^{t}}=[{x_{{\textit{ALT}_{i}},1}^{t}},{x_{{\textit{ALT}_{i}},2}^{t}},\dots ,{x_{{\textit{ALT}_{i}},\textit{dim}}^{t}}]$ – i-th student after learning with the teacher during iteration t,
${\mathbf{x}_{{\textit{ALS}_{i}}}^{t}}=[{x_{{\textit{ALS}_{i}},1}^{t}},{x_{{\textit{ALS}_{i}},2}^{t}},\dots ,{x_{{\textit{ALS}_{i}},\textit{dim}}^{t}}]$ – i-th student after learning with other students during iteration t,
${\mathbf{x}_{\textit{AVG}}^{t}}=[{x_{\textit{AVG},1}^{t}},{x_{\textit{AVG},2}^{t}},\dots ,{x_{\textit{AVG},\textit{dim}}^{t}}]$ – average students' knowledge within the population during iteration t,
${\mathbf{x}_{\textit{best}}^{t}}=[{x_{\textit{best},1}^{t}},{x_{\textit{best},2}^{t}},\dots ,{x_{\textit{best},\textit{dim}}^{t}}]$ – the best student in iteration t,
${\mathbf{x}_{\textit{second}}^{t}}=[{x_{\textit{second},1}^{t}},{x_{\textit{second},2}^{t}},\dots ,{x_{\textit{second},\textit{dim}}^{t}}]$ – the second best student in iteration t,
${\mathbf{x}_{\textit{third}}^{t}}=[{x_{\textit{third},1}^{t}},{x_{\textit{third},2}^{t}},\dots ,{x_{\textit{third},\textit{dim}}^{t}}]$ – the third best student in iteration t.
To generalize, a few steps can be distinguished in the process of group teaching. Below, these steps are introduced along with their mathematical description, which together make up the model of the algorithm's operation.
• Step 1 – Choosing the teacher. During this step, a so-called teacher is chosen from the whole population. The evaluation of students in the population is determined by their fitness value – the smaller the value, the better the quality (knowledge) of the student. The teacher is chosen on the basis of the following equation:
• Step 2 – Division of the students. During this step, all individuals in the population are divided into two equally sized groups based on their knowledge (the quality of their result). The result of this division is two groups of students: outstanding and normal.
• Step 3 – Learning with a teacher. In this case, the process differs between the two groups created in the previous step, since the teacher uses different methods for different groups of students. Mathematically, the process can be described with the following equations;
for students in the outstanding group:
for students in the normal group:
In equations (15), (16), the symbols a, b, c, d represent random numbers from the interval $[0,1]$, and the symbol f represents the so-called teaching factor. In this case, $f=1$.
Fig. 2
Diagram of next steps of GTOA.
If a student's knowledge increased after learning with the teacher, then the student passes to the next step. Otherwise, the individual from before learning with the teacher passes to the next step, that is:
• Step 4 – Self-learning of students. This step simulates students learning together in their free time. Students can share knowledge and learn together, or they can learn by themselves. The mathematical description of this process is as follows:
The symbols e, g represent random numbers from $[0,1]$. The index $j\ne i$ appearing in equation (18) represents a random student; hence the interaction between students i and j. The students who increased their knowledge in this step pass to the next population (denoted by $t+1$), meaning:
Figure 2 schematically depicts the consecutive steps of the GTOA algorithm, while Fig. 3 presents its block diagram. Pseudocode 1 lists the steps of GTOA.
Fig. 3
Block diagram of GTOA.
Algorithm 1
Pseudocode of GTOA
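Since the pseudocode itself is not reproduced in this excerpt, the Python sketch below shows one GTOA iteration following Steps 1–4 above. Equations (14)–(19) are also not shown here, so the concrete update formulas follow the original reference (Zhang and Jin, 2020) and should be read as an assumption; all function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def gtoa_iteration(pop, fit, F, lb, ub):
    """One GTOA iteration. pop: (N, dim) students, fit: their fitness values."""
    N, dim = pop.shape
    order = np.argsort(fit)
    teacher = pop[order[0]]              # assumption: best student acts as teacher
    mean = pop.mean(axis=0)              # x_AVG
    good = set(order[: N // 2].tolist()) # Step 2: outstanding half

    new_pop = pop.copy()
    for i in range(N):
        x = pop[i]
        a, b, d = rng.random(3)
        c = 1.0 - b                      # b + c = 1 in the teacher phase
        if i in good:                    # outstanding group, cf. eq. (15), f = 1
            x_alt = x + a * (teacher - (b * mean + c * x))
        else:                            # normal group, cf. eq. (16)
            x_alt = x + 2.0 * d * (teacher - x)
        x_alt = np.clip(x_alt, lb, ub)
        x_t = x_alt if F(x_alt) < F(x) else x    # keep improvement, cf. eq. (17)

        # Step 4: self-learning with a random classmate j != i, cf. eq. (18)
        j = int(rng.integers(N - 1)); j += (j >= i)
        e, g = rng.random(2)
        if F(x_t) < F(pop[j]):
            x_als = x_t + e * (x_t - pop[j]) + g * (x_t - x)
        else:
            x_als = x_t - e * (x_t - pop[j]) + g * (x_t - x)
        x_als = np.clip(x_als, lb, ub)
        new_pop[i] = x_als if F(x_als) < F(x_t) else x_t   # cf. eq. (19)
    return new_pop
```

Because both acceptance tests are greedy, each student's fitness never worsens between iterations, so the best fitness in the population is monotonically non-increasing.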
5.2 Artificial Bee Colony
Artificial Bee Colony (ABC) is one of many metaheuristic algorithms based on the behaviour of animals in their natural environment. In order to find food sources, the algorithm divides bees into two groups:
Working bees – bees that are currently foraging at already found food sources. For these bees, the important factors are the distance between the source and the hive and the amount of nectar in the food source.
Unclassified bees – bees whose mission is to search for new food sources. They can be further divided into two groups: observers and scouts. Scouts look for new food sources randomly, while observers plan their search based on the information they are provided with.
Bees exchange information by performing a special dance. Observer bees decide how to search the space based on the dance of other bees. After collecting nectar, every bee can decide whether to share information about the food source it has been exploring, keep exploring it without exchanging information with other bees, or discard the food source and become an observer. To present the ABC algorithm, the following notation is used:
dim – dimension of the task, N – number of bees in the colony (equal to the number of food sources), $\mathcal{F}$ – fitness function.
The ABC algorithm can be generalized into a few steps. Below, these steps are presented together with their corresponding mathematical descriptions.
• Step 1 – Dance. At this point, scouting bees return to the hive and begin to share information about the food sources they have been exploring. Based on the information provided by the scouts, every source is evaluated and assigned a probability according to its quality in comparison with the other food sources. This is described by the equation below:
where
• Step 2 – Leaving the hive. After the dance ends, every observer chooses one of the food sources and sets out to explore it (one source can be chosen multiple times). The source is then modified, and if the modified source is better than the original one, it replaces the original. The formula used to modify the food source is the following:
In equation (22), ${\Phi _{i}}$ is a pseudo-random number between 0 and 1, and k is a randomly selected index different from i. If the fitness value of the modified food source is better than that of the source before exploration, the modified one replaces the original food source; otherwise it is discarded.
• Step 3 – Abandoning old food sources. All sources that have not been chosen by any of the bees are discarded and replaced with new ones, created using the following formula:
where ${\omega _{i}}$ is a pseudo-random number within the range $[0,1]$, and max, min represent the upper and lower limits of the search area, respectively.
More details about the algorithm itself and examples of its applications are presented in Li et al. (2012), Karaboga and Basturk (2007), Roeva (2018).
Fig. 4
Block diagram of ABC algorithm.
Algorithm 2
Pseudocode of ABC
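A compact Python sketch of one ABC cycle following Steps 1–3 above. Since equations (20)–(23) are not reproduced in this excerpt, the formulas below follow the standard ABC of Karaboga and Basturk (2007), adapted to the description in the text (${\Phi _{i}}$ drawn from $[0,1]$ as stated); the names and the `limit` parameter are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_iteration(sources, F, lb, ub, trials, limit=20):
    """One simplified ABC cycle. sources: (N, dim) food sources,
    trials: per-source counters of unsuccessful improvement attempts."""
    N, dim = sources.shape
    fit = np.array([F(s) for s in sources])

    # Step 1 (dance): selection probability grows with source quality (cf. eq. (21)).
    quality = 1.0 / (1.0 + fit)          # smaller fitness -> better source
    prob = quality / quality.sum()

    # Step 2 (leaving the hive): observers perturb chosen sources (cf. eq. (22)).
    for _ in range(N):
        i = int(rng.choice(N, p=prob))
        k = int(rng.integers(N - 1)); k += (k >= i)   # random index k != i
        phi = rng.random()               # Phi_i in [0, 1] as stated in the text
        cand = np.clip(sources[i] + phi * (sources[i] - sources[k]), lb, ub)
        if F(cand) < fit[i]:             # greedy replacement
            sources[i], fit[i], trials[i] = cand, F(cand), 0
        else:
            trials[i] += 1

    # Step 3 (abandonment): stale sources are re-drawn uniformly (cf. eq. (23)).
    for i in range(N):
        if trials[i] > limit:
            sources[i] = lb + rng.random(dim) * (ub - lb)
            fit[i] = F(sources[i])
            trials[i] = 0
    return sources, fit, trials
```

The `trials` counter implements the abandonment criterion: a source that repeatedly fails to improve is considered exhausted and is replaced by a fresh random source within the search bounds.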
5.3 Jellyfish Search
This algorithm was inspired by the behaviour of jellyfish in the ocean. It simulates factors such as following ocean currents, passive and active movements inside a jellyfish swarm, a time control mechanism which governs the switching between the types of movement, and convergence into a jellyfish bloom.
Ocean Current
The direction of the ocean current is obtained by averaging all the vectors from each jellyfish in the ocean to the jellyfish that is currently in the best location:
where ${n_{\textit{Pop}}}$ is the number of jellyfish; ${X^{\ast }}$ is the jellyfish currently at the best location in the swarm; ${e_{c}}$ is the factor that governs the attraction; and μ is the mean location of all jellyfish. Because a normal spatial distribution of jellyfish is assumed in all dimensions, the previous equation can be transformed in the following way:
where $\beta \gt 0$ is a distribution coefficient related to the length of $\overrightarrow{\textit{trend}}$. Based on the results of a sensitivity analysis in numerical experiments carried out by the authors of the algorithm, $\beta =3$ is used.
Finally, the new location of each jellyfish is given by:
Movement Inside Swarm
After the formation of the swarm, most jellyfish exhibit type A (passive) motion. As time goes on, more and more jellyfish begin to exhibit type B (active) motion.
Time Control Mechanism
To regulate the movement of jellyfish between following the ocean current and moving inside the jellyfish swarm, a time control mechanism is introduced. It includes a time control function $c(t)$, which is a random value that fluctuates between 0 and 1 over time,
where t is the time specified as the iteration number and ${\textit{Max}_{\textit{iter}}}$ is the maximum number of iterations, which is an initialized parameter.
To decide which type of movement to use inside a swarm, the function $(1-c(t))$ is used. When its value is less than $\textit{rand}(0,1)$, the jellyfish exhibits type A motion. Otherwise, the jellyfish exhibits type B motion.
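The formula for $c(t)$ is not reproduced in this excerpt; the sketch below follows the original JS paper (Chou and Truong, 2021), where $c(t)=|(1-t/\textit{Max}_{\textit{iter}})(2\,\textit{rand}-1)|$ and the constant $c_{0}=0.5$ separates current-following from in-swarm motion. Both details should be treated as assumptions here.

```python
import numpy as np

rng = np.random.default_rng(7)

def time_control(t, max_iter):
    """Time control function c(t), fluctuating in [0, 1] and shrinking with t.
    Formula taken from Chou and Truong (2021); an assumption in this sketch."""
    return abs((1.0 - t / max_iter) * (2.0 * rng.random() - 1.0))

def follows_current(t, max_iter, c0=0.5):
    """A jellyfish follows the ocean current when c(t) >= c0 (c0 = 0.5 assumed)."""
    return time_control(t, max_iter) >= c0
```

As iterations progress, $c(t)$ drifts toward 0, so jellyfish increasingly move inside the swarm rather than with the current, shifting the search from exploration to exploitation.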
Population Initialization
In order to obtain an initial population that is more diverse and has a lower probability of premature convergence than one with random positions, the logistic map is used. ${X_{i}}$ is the logistic chaotic value of the location of the i-th jellyfish; ${X_{0}}$, used for generating the initial population of jellyfish, satisfies ${X_{0}}\in (0,1)$, ${X_{0}}\notin \{0.0,\hspace{2.5pt}0.25,\hspace{2.5pt}0.5,\hspace{2.5pt}0.75,\hspace{2.5pt}1.0\}$; the parameter η is set to 4.0.
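The logistic-map recurrence itself is not shown in this excerpt; assuming the standard form ${X_{i+1}}=\eta {X_{i}}(1-{X_{i}})$ with the constraints stated above, a chaotic initialization could be sketched as follows (the scaling of the chaotic values into the search box is our addition):

```python
import numpy as np

def logistic_init(n_pop, dim, lb, ub, x0=0.37, eta=4.0):
    """Chaotic (logistic-map) initialization of the JS population.
    x0 must lie in (0,1) and avoid {0.0, 0.25, 0.5, 0.75, 1.0}."""
    x = np.empty((n_pop, dim))
    v = x0
    for i in range(n_pop):
        for d in range(dim):
            v = eta * v * (1.0 - v)      # logistic map X_{i+1} = eta*X_i*(1 - X_i)
            x[i, d] = v
    return lb + x * (ub - lb)            # scale chaotic values into the box
```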
Boundary Conditions
Oceans are located all around the world. Since the Earth is approximately spherical, when a jellyfish moves outside the bounded search area it returns at the opposite bound. ${X_{i,d}}$ is the location of the i-th jellyfish in the d-th dimension; ${{X^{\prime }}_{i,d}}$ is the updated location after checking the boundary constraints; ${U_{b,d}}$ and ${L_{b,d}}$ are the upper and lower bounds of the d-th dimension of the search space, respectively.
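The opposite-bound re-entry rule described above can be sketched as follows (a hedged reconstruction, since the paper's exact equation is not reproduced in this excerpt):

```python
import numpy as np

def wrap_bounds(x, lb, ub):
    """Re-enter from the opposite bound when a jellyfish leaves the box
    (the 'spherical Earth' rule): the overshoot is carried over the other side."""
    x = np.asarray(x, dtype=float).copy()
    over, under = x > ub, x < lb
    x[over] = (x[over] - ub[over]) + lb[over]    # exited above -> re-enter from below
    x[under] = (x[under] - lb[under]) + ub[under]  # exited below -> re-enter from above
    return x
```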
More details about JS and examples of its applications are presented in the publications (Chou and Truong, 2021; Bujok, 2021; Youssef et al., 2021).
Fig. 5
Block diagram of JS.
Algorithm 3
Pseudocode of JS
6 Computational Example
This section demonstrates how the presented method of solving an inverse problem for the anomalous diffusion model works. As an example, the results provided by several chosen metaheuristic optimization algorithms are compared.
In the considered inverse problem, the fractional boundary condition (4) on the right-hand side is recreated – specifically, the function ψ appearing in this condition, as well as the orders α, β of the fractional derivatives. In the model (1)–(4), the following numerical data were assumed:

The objective of this example is to test the proposed algorithm (treated as a benchmark); hence the sought data – the orders of derivatives α, β and the function ψ – are known and have the following values:
Writing the function ψ in numerical form, the following was obtained:

Thus the sought function ψ and the orders of derivatives α, β were identified in the following form:

where the parameters ${a_{0}}$, ${a_{1}}$, ${a_{2}}$, ${a_{3}}$, ${a_{4}}$, ${a_{5}}$, ${a_{6}}$ are unknown. Table 1 presents the domain of each parameter, as well as its exact value.
Table 1
Reference values of sought parameters with the search domain.
Parameter | Reference value | Domain
$\alpha ={a_{0}}$ | 0.6 | $[0,1]$
$\beta ={a_{1}}$ | 1.5 | $[1,2]$
${a_{2}}$ | 3.00901 | $[1,5]$
${a_{3}}$ | $-53.1736$ | $[-70,-20]$
${a_{4}}$ | 339.514 | $[250,450]$
${a_{5}}$ | $-20$ | $[-30,-10]$
${a_{6}}$ | 150 | $[50,250]$
The data necessary for solving the inverse problem are the values of the function u at the control point ${x_{p}}=1$ (the right boundary). These data were generated by solving the direct problem on a $400\times 400$ measurement grid, while during the solution of the inverse problem a grid of size $100\times 200$ was used; using different grid sizes avoids the phenomenon known as an inverse crime (Kaipio and Somersalo, 2005). To find the minimum of the fitness function (13), which describes the error of the approximate solution, selected metaheuristic optimization algorithms were used. The tested group of algorithms includes the Group Teaching Optimization Algorithm (GTOA), Artificial Bee Colony (ABC) and Jellyfish Search (JS) described in Section 5. Apart from those, the following heuristics were used: Equilibrium Optimizer (EO), Grey Wolf Optimizer (GWO), War Strategy Optimization (WSO), Tuna Swarm Optimization (TSO) and Ant Colony Optimization (ACO). In each algorithm, the values of parameters such as the number of iterations and the size of the population were chosen so as to ensure a similar number of fitness function calls, which makes the results of the algorithms comparable.
The obtained results of the search for the minimum of the fitness function are presented in Table 2. Blue is used to mark the three algorithms (GTOA, ABC and JS) that returned a satisfying result, and red to mark the remaining algorithms, which did not handle the task as well. The smallest value of the objective function, approximately 0.007, was achieved by GTOA; in this case, the errors of the identified parameters ${a_{0}},{a_{1}},\dots ,{a_{6}}$ are also the smallest. Similarly, for ABC and JS the obtained errors and the values of the objective function are at an acceptable level. It is also worth paying attention to the results of ACO, where the value of the fitness function is low; however, the errors in the identification of the parameters are at an unacceptable level, which indicates that the algorithm got stuck in a local minimum. For the other algorithms, the values of the objective function are relatively high.
Table 2
Results of parameters identification, ${\overline{a}_{i}}$ – identified value of ${a_{i}}$ coefficient, ${\Delta _{{\overline{a}_{i}}}}$ – absolute error of identification, ${\delta _{{\overline{a}_{i}}}}$ – relative error of identification, $\mathcal{F}$ – value of fitness function.
Another important indicator is the fit of the data from the control point (so-called measurement data) to the data obtained from the model. On one hand, the measure of this fit is the value of the objective function
$\mathcal{F}$; on the other hand, it is the error of the reconstructed function
u for the identified parameters. Figures
6 (for GTOA and ABC) and
7 (for the other algorithms) present the error distributions of the reconstructed function
u at the control point. For the two best algorithms (GTOA and ABC), these errors are much smaller (
$\approx {10^{-2}}$) than for the rest of the algorithms (
$\approx {10^{1}}$) (see Figs.
6,
7).
Fig. 6
Error distribution at the control point for the GTOA and ABC algorithms.
Fig. 7
Error distribution at the control point for the EO, WSO, TSO, GWO, JS and ACO algorithms.
Because the exact solution is known, we decided to calculate the errors of the reconstructed function
u in the full domain (not only at the control point). In Fig.
8, the errors of the reconstructed function
u for the identified model parameters are presented. For the three best algorithms, these errors are at an acceptable level and share similar characteristics: GTOA (maximum error
$\approx 1.5$), ABC (maximum error
$\approx 10$) and JS (maximum error
$\approx 20$). For the remaining algorithms, the maximum errors are high and unacceptable.
Fig. 8
Reconstruction error of the function u in the full domain.
As part of the conducted computations, a comparison of the errors of identification of the function
ψ occurring in the boundary condition was also performed. These errors were calculated using the formula:
\[ {\Delta _{\psi }}=\| \psi -{\psi _{\text{approx}}}\| ,\hspace{2em}{\delta _{\psi }}=\frac{{\Delta _{\psi }}}{\| \psi \| }\cdot 100\% ,\]
where
ψ is the exact solution, and
${\psi _{\text{approx}}}$ is the approximate solution. The corresponding results are presented in Table
3. For the two best algorithms, the percentage errors are
$3.21\% $ for GTOA and
$6.82\% $ for ABC. The JS algorithm ranked third with an error of
$16.21\% $, while for the remaining algorithms the errors were large, at
$\approx 35\% $.
Table 3
Absolute and relative errors of the reconstruction of the function ψ.
Algorithm | Absolute error | Relative error [%]
GTOA | 1864.98 | 3.21
ABC | 3966.77 | 6.82
EO | 20147.81 | 34.67
WSO | 19815.22 | 34.09
TSO | 20319.82 | 34.96
JS | 9424.24 | 16.21
GWO | 20478.45 | 35.23
ACO | 20407.33 | 35.11
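The error measures reported in Table 3 can be computed from sampled values of the exact and reconstructed functions. A minimal sketch, assuming the functions are sampled on a common grid and the Euclidean norm is used (the norm choice and the sample data below are illustrative assumptions, not values from the experiments):

```python
import math

def absolute_error(exact, approx):
    """Euclidean norm of the difference between two sampled functions."""
    return math.sqrt(sum((e - a) ** 2 for e, a in zip(exact, approx)))

def relative_error_percent(exact, approx):
    """Absolute error divided by the norm of the exact samples, in percent."""
    norm_exact = math.sqrt(sum(e ** 2 for e in exact))
    return 100.0 * absolute_error(exact, approx) / norm_exact

# Illustrative data only (not the psi values from the experiments).
exact = [3.0, 4.0]
approx = [3.0, 3.5]
print(absolute_error(exact, approx))          # 0.5
print(relative_error_percent(exact, approx))  # 10.0
```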
7 Conclusion
This article focused on a method of solving the inverse problem for the diffusion equation with fractional derivatives. In the considered task, the orders of the derivatives and the function occurring in the boundary condition were identified. In the presented approach, the forward problem was solved using an implicit finite difference scheme, while the inverse problem was solved using heuristic optimization algorithms. The inverse problem turned out to be difficult to solve and required the identification of seven parameters.
To solve the inverse problem, the following metaheuristic algorithms were compared: GTOA, ABC, ACO, EO, GWO, JS, TSO, WSO. Satisfactory results were obtained for the Group Teaching Optimization Algorithm (GTOA) and the Artificial Bee Colony (ABC) method. Additionally, the Jellyfish Search (JS) algorithm yielded an acceptable result. The remaining algorithms proved unsuitable for this type of problem.
One of the conclusions from the conducted research, and the next step, is the possibility of building a hybrid algorithm. First, a heuristic algorithm could be used for initial solution localization (the exploration part), while deterministic methods such as Nelder-Mead or Hooke-Jeeves could be employed for a more focused search (the exploitation part). This is the plan for future research by the authors of this paper.
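The proposed hybrid scheme can be sketched as follows. This is a minimal illustration, assuming plain random sampling as the exploration stage and a simplified Hooke-Jeeves-style pattern search as the exploitation stage; all names and parameters are hypothetical, and the objective is a toy quadratic, not the fitness function (13):

```python
import random

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Simplified Hooke-Jeeves-style coordinate pattern search (exploitation)."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):  # try both exploratory moves per axis
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink  # no move helped: refine the step size
    return x, fx

def hybrid_minimize(f, bounds, samples=200, seed=0):
    """Exploration by random sampling, then local refinement of the best point."""
    rng = random.Random(seed)
    best = min(
        ([rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(samples)),
        key=f,
    )
    return hooke_jeeves(f, best)

# Illustrative objective: a shifted quadratic with minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x, fx = hybrid_minimize(f, [(-5, 5), (-5, 5)])
print(x, fx)  # close to [1.0, -2.0], fx near 0
```

In the intended application, the exploration stage would be one of the metaheuristics studied above rather than random sampling, and the refined point would be the vector of the seven identified parameters.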