## 1 Introduction

*et al.*, 2008). Nevertheless, some of the parameters in the AdEx model lack an experimental (measurable) counterpart, and finding an appropriate set of parameters becomes a challenging problem (Barranca *et al.*, 2014; Hanuschkin *et al.*, 2010; Venkadesh *et al.*, 2018). Fitting mathematical neuron models to real electrophysiological behaviour can thus be formulated as an optimization problem that remains partially unsolved.

*et al.*, 2009; Masoli *et al.*, 2017). Besides, previous findings suggest that the theta-frequency oscillatory activity (around 4–10 Hz in rodents) contributes to signal integration in the GrL. Indeed, the spiking resonance (as enhanced bursting activity) at the theta-frequency band of single cerebellar GrCs in response to low-frequency sinusoidal stimulation has been proposed to strengthen information transmission in the GrL (D’Angelo *et al.*, 2009, 2001; Gandolfi *et al.*, 2013). However, the functional role of resonance at the theta band in the information processing of cerebellar GrCs remains elusive.

Marín *et al.* (2020) proposed a methodology for building computationally efficient neuron models of cerebellar GrCs that replicate some inherent properties of the biological cell. Since the cerebellar GrCs show a compact and simple morphology (D’Angelo *et al.*, 2001; Delvendahl *et al.*, 2015), it is appropriate to consider a mono-compartment model. Thus, the cerebellar GrC was modelled with the AdEx neuron model because of its computational efficiency and realistic firing modes, a choice supported by several comparisons with detailed models and experimental recordings (Brette and Gerstner, 2005; Nair *et al.*, 2014; Naud *et al.*, 2008). As part of their method, Marín *et al.* (2020) tuned the parameters of the AdEx model to fit the neuronal spiking dynamics of real recordings. In this context, the authors modelled the tuning procedure as an optimization problem and studied different objective functions to conduct the optimization. These functions combine the accumulated differences between the set of *in vitro* measurements and the spiking output of the neuron model under tuning. However, their evaluation involves launching computer simulations with uncertainty and nonlinear equations. For this reason, the authors proposed a derivative-free, black-box or direct optimization approach (Price *et al.*, 2006; Storn and Price, 1997). Namely, they successfully implemented a standard genetic algorithm (Boussaïd *et al.*, 2013; Cruz *et al.*, 2018; Lindfield and Penny, 2017) to face the parametric optimization problem.

The methodological approach of Marín *et al.* (2020) is sound. Genetic algorithms are widely used and effective meta-heuristics, i.e. generic problem-resolution strategies (Boussaïd *et al.*, 2013; Lindfield and Penny, 2017). Nevertheless, the optimization part of the referred work was only tangentially addressed, and this paper aims to assess the suitability of some alternative meta-heuristics that perform well in other fields. More precisely, this paper compares four more meta-heuristics in the context of the reference work. One of them is Differential Evolution (DE) (Storn and Price, 1997), which is arguably one of the most used methods for parametric optimization in Engineering. Another option is the Teaching-Learning-Based Optimizer (TLBO) (Rao *et al.*, 2012), which features configuration simplicity, low computational cost and high convergence rates. A third option is the combination of a simple yet effective local optimizer, the Single-Agent Stochastic Search (SASS) method (Cruz *et al.*, 2018) by Solis and Wets (1981), with a generic multi-start component (Redondo *et al.*, 2013; Salhi, 2017). This compound optimizer will be referred to as MSASS. The last method, which will be called MemeGA, is an ad-hoc memetic algorithm (Cruz *et al.*, 2018; Marić *et al.*, 2014) that results from replacing the mutation stage of the genetic method of Marín *et al.* (2020) with the aforementioned local search procedure, SASS.

## 2 Neuron Model

### 2.1 Model Structure and Problem Definition

*w*): *V* varies with the injection of current, $I(t)$. When *V* exceeds the threshold potential (${V_{T}}$), the slope factor (${\Delta _{T}}$) models the action potential. This depolarization persists until *V* reaches the reset threshold potential (${V_{peak}}$), which defines the aforementioned reset condition. At that point, *V* resets to ${V_{r}}$, and *w* increases by the fixed amount *b*. The first term models the passive mechanisms of the membrane potential, which depend on the total leak conductance (${g_{L}}$), the leak reversal potential (${E_{L}}$) and the membrane capacitance (${C_{m}}$), and which regulate the integrative properties of the neuron. The second, exponential term models the spike generation and shape, whose dynamics depend on ${\Delta _{T}}$ and ${V_{T}}$ (Naud *et al.*, 2008).

The second equation governs the adaptation current, *w*. It depends on the adaptation time constant (${\tau _{w}}$), the sub-threshold adaptation (*a*), and the spike-triggered adaptation parameter (*b*).
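For reference, the dynamics just described correspond to the standard AdEx formulation (Brette and Gerstner, 2005); this restatement follows the cited definition and the notation above:

\[ {C_{m}}\frac{dV}{dt}=-{g_{L}}(V-{E_{L}})+{g_{L}}{\Delta _{T}}\exp \Big(\frac{V-{V_{T}}}{{\Delta _{T}}}\Big)+I(t)-w,\]
\[ {\tau _{w}}\frac{dw}{dt}=a(V-{E_{L}})-w,\]
together with the reset rule: if $V\geqslant {V_{peak}}$, then $V\leftarrow {V_{r}}$ and $w\leftarrow w+b$.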

##### Table 1

| Parameter | Boundaries | Parameter | Boundaries |
| --- | --- | --- | --- |
| ${C_{m}}$ (pF) | [0.1, 5.0] | ${V_{T}}$ (mV) | [−60, −20] |
| ${\Delta _{T}}$ (mV) | [1, 1000] | *a* (nS) | [−1, 1] |
| ${E_{L}}$ (mV) | [−80, −40] | *b* (pA) | [−1, 1] |
| ${V_{r}}$ (mV) | [−80, −40] | ${g_{L}}$ (nS) | [0.001, 10] |
| ${V_{peak}}$ (mV) | [−20, 20] | ${\tau _{w}}$ (ms) | [1, 1000] |
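As a minimal illustration (not the NEST-based implementation used in the experiments), the dynamics described above can be integrated with a simple forward-Euler scheme. The default parameter values below are arbitrary picks within the Table 1 bounds, chosen only so the sketch produces repetitive firing:

```python
import math

def simulate_adex(I, dt=0.1, t_max=1000.0,
                  C_m=3.0, g_L=1.0, E_L=-70.0, V_T=-55.0, delta_T=2.0,
                  V_peak=0.0, V_r=-70.0, tau_w=100.0, a=0.1, b=0.5):
    """Forward-Euler integration of the AdEx model; returns spike times (ms).

    Units follow Table 1: pF, nS, mV, pA, ms. `I` is a constant current in pA.
    """
    V, w = E_L, 0.0
    spikes = []
    for k in range(int(t_max / dt)):
        # The exponential argument is clipped to avoid overflow during the
        # spike upswing; the reset below fires long before it matters.
        exp_term = g_L * delta_T * math.exp(min((V - V_T) / delta_T, 30.0))
        dV = (-g_L * (V - E_L) + exp_term - w + I) / C_m
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:          # reset condition
            V = V_r
            w += b               # spike-triggered adaptation
            spikes.append(k * dt)
    return spikes

# Example: repetitive discharge under a 16 pA step current of 1 s.
spike_times = simulate_adex(I=16.0)
mean_freq = len(spike_times) / 1.0  # spikes per second over 1 s of stimulus
```

The sketch only illustrates the roles of the parameters; the actual experiments integrate the model with NEST's adaptive Runge-Kutta-Fehlberg solver, as described in Section 4.1.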

Let *f* be the function that models this computation. It is defined in (2) as an abstract function that depends on the ten model parameters included in Table 1. This configuration is the one tagged as $\textit{FF}4$ in Marín *et al.* (2020), and it is further explained below.

##### (2)

\[\begin{aligned}{}f({C_{m}},\dots ,{\tau _{w}})& =\sum \limits_{i\in \{\textit{MF},\textit{LF}\}}\big(|{\textit{feat}_{i}}-{\exp _{i}}|\cdot {w_{i}}\big)\\ {} & \hspace{1em}+{\sum \limits_{j=1}^{N}}\big(|\overline{{\textit{BF}_{{\textit{sim}_{j}}}}}-\overline{{\textit{BF}_{{\exp _{j}}}}}|\cdot {w_{\textit{BF}}}\cdot \big(\textit{std}({\textit{BF}_{{\textit{sim}_{j}}}})+1\big)\big).\end{aligned}\]

Evaluating *f* implies a neuron simulation procedure gathering the output of the behaviour of interest. In order to obtain a neuron model that reproduces the properties of the cerebellar GrCs, the following features were integrated into *f*: i) repetitive spike discharge in response to continuous direct stimulation, measured as the mean frequency of spike traces ($\textit{MF}$); ii) first-spike latency under direct current stimulation, measured as the time to the first spike ($\textit{LF}$); and iii) spiking resonance in the theta range under sinusoidal current stimulation, measured as the average burst frequency at different oscillation frequencies ($\textit{BF}$).

The first term of (2) scores the $\textit{MF}$ and $\textit{LF}$ features of *f*, where the score of each feature is the accumulated absolute difference between the model output of the *i*-th feature (${\textit{feat}_{i}}$) under different step-current amplitudes and its experimental equivalent (${\exp _{i}}$), for $i\in \{\textit{MF},\textit{LF}\}$. In order to integrate several components in this function, this term is multiplied by a weighting factor (${w_{i}}$) as described in the next section.

The second term of (2) scores the $\textit{BF}$ feature of *f*. This term accumulates the summation of each separate absolute difference for $j=1,\dots ,N$ sinusoidal stimulation frequencies. The score for each one results from the absolute difference between: i) the average burst frequency of the simulated neuron after a certain number of oscillatory cycles, $\overline{{\textit{BF}_{{\textit{sim}_{j}}}}}$, and ii) the experimental value, $\overline{{\textit{BF}_{{\exp _{j}}}}}$, at that stimulation frequency, *j*. This term is multiplied by the weight of this particular feature, ${w_{\textit{BF}}}$ (as detailed in the section below). It is also multiplied by the standard deviation of the simulated neuron burst frequency ($\textit{std}({\textit{BF}_{{\textit{sim}_{j}}}})$) plus one. This additional multiplicative term promotes configurations that are both close to the target and stable.

The lower the value of *f* is for a given set of parameters, the more appropriate that set is for replicating the desired neuronal behaviour. It is hence possible to formally define the target optimization problem according to (3). The constraints correspond to the valid range of every parameter, which is generally defined by the max and min superscripts linked to the parameter symbol, referring to the upper and lower bounds, respectively. The numerical values considered are those shown in Table 1.

##### (3)

\[ \begin{array}{r@{\hskip0pt}l@{\hskip10pt}r@{\hskip0pt}l}& \displaystyle \underset{{C_{m}},\dots ,{\tau _{w}}}{\text{minimize}}& & \displaystyle f({C_{m}},\dots ,{\tau _{w}})\\ {} & \displaystyle \text{subject to}& & \displaystyle {C_{m}^{\min }}\leqslant {C_{m}}\leqslant {C_{m}^{\max }}\\ {} & & & \displaystyle \dots \\ {} & & & \displaystyle {\tau _{w}^{\min }}\leqslant {\tau _{w}}\leqslant {\tau _{w}^{\max }}.\end{array}\]

### 2.2 Model Context and Feature Measurement

Following Marín *et al.* (2020), the initial membrane potential takes the same value as the leak reversal potential, i.e. ${V_{\textit{init}}}={E_{L}}$. The burst and mean frequencies have been weighted by 1 because they are measured in hertz and lie in comparable ranges. In contrast, since it is measured in seconds, the weight of the first-spike feature was set to 1000. Consequently, for instance, an error of 1 Hz at burst frequency weighs the same as an error of 1 Hz at repetitive spike discharge and as a lag of 1 ms at latency to the first spike.
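The stated equivalence can be checked with a tiny numeric sketch; the helper `weighted_error` and its arguments are hypothetical illustrations of the weighting scheme, not the authors' code:

```python
# Weighted aggregation as described: frequency errors in Hz (weight 1),
# first-spike latency error in seconds (weight 1000).
w_MF, w_LF, w_BF = 1.0, 1000.0, 1.0

def weighted_error(mf_err_hz, latency_err_s, bf_err_hz):
    """Combine per-feature errors into a single weighted score."""
    return mf_err_hz * w_MF + latency_err_s * w_LF + bf_err_hz * w_BF

# A 1 Hz mean-frequency error, a 1 ms latency lag and a 1 Hz
# burst-frequency error all contribute the same unit score:
assert weighted_error(1.0, 0.0, 0.0) == 1.0
assert weighted_error(0.0, 0.001, 0.0) == 1.0
assert weighted_error(0.0, 0.0, 1.0) == 1.0
```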

*et al.*, 2001).

*et al.*, 2001). According to recent literature (Masoli *et al.*, 2017), the fast repetitive discharge in the GrCs has been characterized based on the mean frequency and the latency to the first spike in response to three different 1 s step-current injections (10, 16 and 22 pA).

## 3 Optimization Methods

This work compares five meta-heuristics: i) the reference genetic algorithm (GA) (Marín *et al.*, 2020), ii) a memetic optimizer based on it (MemeGA), iii) Differential Evolution (DE), iv) Teaching-Learning-Based Optimization (TLBO) (Rao *et al.*, 2012), and v) Multi-Start Single-Agent Stochastic Search (MSASS). The first four cover the two main groups of population-based meta-heuristics (Boussaïd *et al.*, 2013): Evolutionary Computation and Swarm Intelligence. The optimizers in the first group are inspired by Darwinian theory, while those in the second rely on simulating social interaction. Namely, GA, MemeGA, and DE belong to the first class, and TLBO can be classified into the second one. Regarding MSASS, it is a single-solution-based meta-heuristic (Boussaïd *et al.*, 2013) that iteratively applies a local search process to independent random starts.

### 3.1 Genetic Algorithm (GA)

*et al.*, 2013; Salhi, 2017). Roughly speaking, genetic algorithms work with a pool of candidate solutions. Although these are randomly generated at first, the solutions are ultimately treated as a population of biological individuals that evolve through sexual reproduction (involving crossover and mutation). Based on these ideas, genetic algorithms define a general framework in which there are different options for every step, and they are used in a wide range of problems (Boussaïd *et al.*, 2013; Salhi, 2017; Shopova and Vaklieva-Bancheva, 2006). The problem addressed in this work was also previously faced with a genetic algorithm in Marín *et al.* (2020). That algorithm will be referred to as GA, and it is described next as one of the methods compared.

The method starts by randomly generating a population of individuals, i.e. candidate solutions encoded as vectors whose *i*-th component corresponds to the *i*-th optimization variable. Random generation in each dimension follows a uniform distribution between the boundaries indicated in Table 1. After generating the initial individuals, they must be evaluated according to the objective function and linked to their resulting fitness. As the problem at hand is a minimization one, the lower the fitness, the better. The population size remains constant during the search even though individuals change due to evolution.

**Selection:**

**Crossover:**

**Mutation:**
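Since the bodies of these stages are only outlined here, the following minimal, generic GA shows one common instantiation of them. The operators used (tournament selection, uniform crossover, uniform-reset mutation) and the stand-in `sphere` fitness are illustrative assumptions, not the exact choices of the reference work:

```python
import random

BOUNDS = [(0.1, 5.0), (1.0, 1000.0), (-80.0, -40.0), (-80.0, -40.0),
          (-20.0, 20.0), (-60.0, -20.0), (-1.0, 1.0), (-1.0, 1.0),
          (0.001, 10.0), (1.0, 1000.0)]  # one (low, high) pair per parameter

def sphere(ind):
    """Stand-in fitness; the real objective runs a NEST simulation."""
    return sum(x * x for x in ind)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def tournament(pop, fits, k=3):
    """Return the fittest of k randomly sampled individuals."""
    best = min(random.sample(range(len(pop)), k), key=lambda i: fits[i])
    return pop[best]

def crossover(p1, p2):
    """Uniform crossover: each gene comes from either parent."""
    return [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]

def mutate(ind, prob=0.1):
    """Reset each component, with some probability, within its bounds."""
    return [random.uniform(*BOUNDS[i]) if random.random() < prob else x
            for i, x in enumerate(ind)]

def ga(pop_size=50, cycles=30, fitness=sphere):
    pop = [random_individual() for _ in range(pop_size)]
    fits = [fitness(ind) for ind in pop]
    best, best_fit = None, float("inf")
    for _ in range(cycles):
        for ind, fv in zip(pop, fits):          # remember the best seen
            if fv < best_fit:
                best, best_fit = ind, fv
        pop = [mutate(crossover(tournament(pop, fits),
                                tournament(pop, fits)))
               for _ in range(pop_size)]
        fits = [fitness(ind) for ind in pop]
    for ind, fv in zip(pop, fits):              # check the last generation
        if fv < best_fit:
            best, best_fit = ind, fv
    return best, best_fit
```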

### 3.2 Memetic Algorithm Derived from GA (MemeGA)

Memetic algorithms (Cruz *et al.*, 2018; Marić *et al.*, 2014) are an extension of standard genetic methods formalized by Moscato (1989). Their name comes from the term ‘meme’, defined by Dawkins (1976) as an autonomous and cultural entity similar to biological genes. However, while the individuals of genetic algorithms are mainly passive, those of memetic methods can be seen as active agents that improve their aptitude autonomously. In practical terms, this is achieved by complementing the underlying process of Darwinian evolution with local search techniques. This approach, which is useful to avoid premature convergence while improving the exploitation of the search space, is popular in many different fields. For instance, in Marić *et al.* (2014), the authors design an effective and efficient parallel memetic algorithm for facility location, an NP-hard combinatorial optimization problem. A memetic algorithm is also the best-performing method for parametric heliostat field design in the study published in Cruz *et al.* (2018). Similarly, the optimization engine of the recent framework for drug discovery extended in Puertas-Martín *et al.* (2020) is a memetic method.

MemeGA is the memetic method derived in this work from the GA of Marín *et al.* (2020) for the problem at hand. The algorithm maintains the same structure as GA with the only exception of the mutation stage, which is replaced by a local search algorithm. Namely, the original mutation probability is treated as the percentage of individuals to be randomly selected at each cycle for the local search to improve. This approach is aligned with the proposal in Marić *et al.* (2014), since the local search is applied to a random subset of the population rather than to all of its members, which increases diversity. As introduced, the local optimizer used is SASS, by Solis and Wets (1981), which is also the local method included in the memetic algorithms applied in Cruz *et al.* (2018) and Puertas-Martín *et al.* (2020). Thus, every individual is a potential starting point for an independent execution of SASS.

SASS works with a single candidate solution, *x*. At the beginning of every iteration, SASS generates a new point, ${x^{\prime }}$, according to (4). The term *ξ* is a random perturbation vector in which every component *i* (there is one for each decision variable) follows a Gaussian distribution with component-specific mean ${b_{i}}$ and common standard deviation *σ* (assuming a normalized search space), i.e. ${\xi _{i}}=\mathcal{N}({b_{i}},\sigma )$. Both the *b* vector (also known as the bias) and *σ* vary during the search. However, *b* starts as a zero vector, and *σ* is a user-given parameter.

If ${x^{\prime }}$ improves the value of the objective function, SASS moves from *x* to ${x^{\prime }}$, and the iteration is considered successful. The *b* vector is then recomputed as $b=0.2b+0.4\xi $ for the next iteration. Otherwise, SASS explores the opposite direction by computing an alternative new point, ${x^{\prime\prime }}=x-\xi $. Again, if the evaluation of ${x^{\prime\prime }}$ returns a better value for the objective function, SASS moves from *x* to ${x^{\prime\prime }}$, and the iteration is also considered successful. Under this circumstance, *b* is updated as $b=b-0.4\xi $. However, if neither ${x^{\prime }}$ nor ${x^{\prime\prime }}$ is better than *x*, the iteration is considered failed, and *b* is recomputed as $b=0.5b$.

If the number of consecutive successful iterations reaches a user-given parameter, $\textit{Scnt}$, *σ* is expanded by a factor $ex$, which is also user-defined and supposed to be greater than 1, i.e. $\sigma =ex\cdot \sigma $. Analogously, if the number of consecutive failed iterations reaches a user-given parameter, $\textit{Fcnt}$, *σ* is contracted by a factor *c*, which is also user-defined, with $c\in (0,1)$, i.e. $\sigma =c\cdot \sigma $. However, notice that *σ* is also bounded by the user, and if it leaves the valid range, *σ* is automatically reset to its upper bound.
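The rules above can be collected into a compact sketch of SASS on a normalized $[0,1]^{n}$ search space. The update rules follow the text; the default parameter values are illustrative assumptions:

```python
import random

def sass(f, x, sigma=0.5, sigma_min=1e-5, sigma_max=1.0,
         ex=2.0, c=0.5, Scnt=5, Fcnt=3, max_iters=200):
    """Single-Agent Stochastic Search (Solis and Wets, 1981), minimizing f.

    x is the starting point, a list of values in [0, 1]^n.
    """
    n = len(x)
    b = [0.0] * n                      # bias vector, starts at zero
    fx = f(x)
    succ = fail = 0
    for _ in range(max_iters):
        xi = [random.gauss(b[i], sigma) for i in range(n)]
        x1 = [min(1.0, max(0.0, x[i] + xi[i])) for i in range(n)]
        if f(x1) < fx:                 # successful step along +xi
            x, fx = x1, f(x1)
            b = [0.2 * b[i] + 0.4 * xi[i] for i in range(n)]
            succ, fail = succ + 1, 0
        else:                          # try the opposite direction
            x2 = [min(1.0, max(0.0, x[i] - xi[i])) for i in range(n)]
            if f(x2) < fx:
                x, fx = x2, f(x2)
                b = [b[i] - 0.4 * xi[i] for i in range(n)]
                succ, fail = succ + 1, 0
            else:                      # failed iteration
                b = [0.5 * b[i] for i in range(n)]
                succ, fail = 0, fail + 1
        if succ >= Scnt:               # expand sigma after Scnt successes
            sigma, succ = ex * sigma, 0
        elif fail >= Fcnt:             # contract sigma after Fcnt failures
            sigma, fail = c * sigma, 0
        if not (sigma_min <= sigma <= sigma_max):
            sigma = sigma_max          # reset when out of the valid range
    return x, fx

# Example: minimize a quadratic from the centre of the unit hypercube.
best, val = sass(lambda v: sum((t - 0.3) ** 2 for t in v), [0.5] * 4)
```

Since only improving moves are accepted, the returned value never exceeds the objective at the starting point.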

### 3.3 Differential Evolution (DE)

*et al.*, 2019; Price *et al.*, 2006). It maintains a user-defined number (*NP*) of randomly-generated candidate solutions (individuals) and progressively alters them to find better ones. As in GA, every individual is a vector with a valid value for each optimization variable. The workflow of this method does not try to imitate aspects such as the selection of progenitors and sexual reproduction as closely as standard genetic methods like GA do. However, the terminology of DE also comes from traditional genetic algorithms, so each iteration applies mutation, crossover, and selection stages to every individual.

Regarding *F*, which controls the amplification of the differential variation, it is a user-given, constant real factor in the range $[0,2]$ in the traditional method. However, it can be randomly redefined in the range $[0.5,1]$, either for each iteration or for every mutant vector, during the search. This approach is known as ‘dither’ and improves the rate of convergence with noisy objective functions. Finally, notice that the term ${S^{{r_{2}}}}-{S^{{r_{3}}}}$ defines a single difference vector between candidate solutions, but it is possible to use more than one. The most popular alternative uses two instead, which results in ${S^{{r_{2}}}}-{S^{{r_{3}}}}+{S^{{r_{4}}}}-{S^{{r_{5}}}}$ (assuming that ${r_{4}}$ and ${r_{5}}$ are two more random population indices).

Expression (6) defines the *i*-th component of the trial vector. According to it, ${S_{i}^{{j^{\prime }}}}$ can come from either the current individual ${S^{j}}$ or its mutant vector. The component selection is controlled by the user-given crossover probability, $\textit{CR}\in [0,1]$, which is linked to uniform random number generation between 0 and 1. Regarding *k*, it is the index of one of the components defining the individuals, i.e. one of the optimization variables. This index is randomly selected for each computation of (6) to ensure that at least one of the components of the trial vector comes from the mutant one.

The implementation of DE in this work also includes the additional stage proposed in Cabrera *et al.* (2011) to define a variant of DE aimed at mechanism synthesis. However, the addition is not limited to that field but is of general purpose. It entails an in-breadth search component that increases variability and is similar to the traditional mutation phase of genetic algorithms such as GA. This addition aims to help the algorithm escape from local optima, especially when working with small populations. In practical terms, it follows (7) to make more changes to each trial vector. According to it, each *i*-th component of the input trial vector can be randomly redefined around its current value with a user-given probability. The origin of the components, i.e. the current individual or its associated mutant vector, is not relevant. Therefore, this stage only adds two parameters: the per-component mutation probability, $\textit{MP}$, and the modification range, $\textit{range}$.
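A minimal sketch of this DE variant (rand/1/bin with per-generation dither and the extra per-component mutation stage just described); all numeric defaults are illustrative assumptions rather than the tuned values of Section 4:

```python
import random

def de(f, bounds, NP=20, cycles=50, CR=0.8, MP=0.05, mut_range=0.1):
    """Sketch of DE/rand/1/bin with dither and an extra mutation stage."""
    n = len(bounds)
    clip = lambda v, i: min(bounds[i][1], max(bounds[i][0], v))
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fits = [f(ind) for ind in pop]
    for _ in range(cycles):
        F = random.uniform(0.5, 1.0)            # per-generation dither
        for j in range(NP):
            r1, r2, r3 = random.sample([i for i in range(NP) if i != j], 3)
            mutant = [clip(pop[r1][i] + F * (pop[r2][i] - pop[r3][i]), i)
                      for i in range(n)]
            k = random.randrange(n)             # forced mutant component
            trial = [mutant[i] if (random.random() < CR or i == k)
                     else pop[j][i] for i in range(n)]
            # Extra in-breadth stage: redefine components around their
            # current value with probability MP, within mut_range.
            trial = [clip(x + random.uniform(-1.0, 1.0) * mut_range
                          * (bounds[i][1] - bounds[i][0]), i)
                     if random.random() < MP else x
                     for i, x in enumerate(trial)]
            ft = f(trial)
            if ft < fits[j]:                    # greedy one-to-one selection
                pop[j], fits[j] = trial, ft
    best = min(range(NP), key=lambda i: fits[i])
    return pop[best], fits[best]
```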

### 3.4 Teaching-Learning-Based Optimization (TLBO)

TLBO was proposed by Rao *et al.* (2012). As a population-based meta-heuristic, this algorithm also works with a set of randomly-generated candidate solutions. However, instead of representing a group of individuals that evolve through sexual reproduction as in the previous methods, TLBO treats the set of solutions as a group of students. The algorithm simulates their academic interaction, considering a teacher and plain pupils, to find the global optimum of the problem at hand. This optimizer has attracted the attention of many researchers in different fields, from Engineering to Economics, because of its simplicity, effectiveness and minimalist set of parameters (Cruz *et al.*, 2017; Rao, 2016).

In the first (teacher) phase, the best individual of the current population acts as the teacher, *T*. After having identified that reference solution, TLBO computes the mean vector, *M*. Every *i*-th component in *M* results from averaging those of the individuals in the current population. This information serves to create an altered or shifted solution, ${S^{\prime }}$, from every existing one, *S*, according to (8). The computation is defined in terms of every component or optimized variable, *i*. ${r_{i}}$ is a random real number in the range [0, 1] linked to component *i*. Similarly, ${T_{F}}$, known as the ‘teaching factor’, is a random integer that can be either 1 or 2. Both ${r_{i}}$ and ${T_{F}}$ are globally computed for the current step. Finally, every ${S^{\prime }}$ is evaluated and replaces *S* if it obtains a better value from the objective function (${S^{\prime }}$ is discarded otherwise).

In the second (learner) phase, TLBO pairs every individual, *S*, with any other, different one, *W*. The goal is to generate a modified individual, ${S^{\prime }}$, which will replace *S* if it is a better solution according to the objective function. Every *i*-th component of ${S^{\prime }}$ results from (9), where ${r_{i}}$ is a random number in the range [0, 1] linked to component *i* and globally computed for the current step. According to it, the movement direction in every pair goes from the worst solution towards the best.
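Both phases can be sketched as follows. Since (8) and (9) are only referenced above, their usual forms are written directly into the code; all defaults are illustrative assumptions:

```python
import random

def tlbo(f, bounds, pop_size=20, cycles=50):
    """Sketch of TLBO (Rao et al., 2012): teacher phase, then learner phase."""
    n = len(bounds)
    clip = lambda v, i: min(bounds[i][1], max(bounds[i][0], v))
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    fits = [f(s) for s in pop]
    for _ in range(cycles):
        # Teacher phase: shift every student towards the teacher T,
        # away from the population mean M (usual form of (8)).
        T = pop[min(range(pop_size), key=lambda i: fits[i])]
        M = [sum(s[i] for s in pop) / pop_size for i in range(n)]
        r = [random.random() for _ in range(n)]
        TF = random.randint(1, 2)               # teaching factor
        for j in range(pop_size):
            cand = [clip(pop[j][i] + r[i] * (T[i] - TF * M[i]), i)
                    for i in range(n)]
            fc = f(cand)
            if fc < fits[j]:                    # keep only improvements
                pop[j], fits[j] = cand, fc
        # Learner phase: pair each student S with a random classmate W
        # and move from the worst of the two towards the best (form of (9)).
        r = [random.random() for _ in range(n)]
        for j in range(pop_size):
            k = random.choice([i for i in range(pop_size) if i != j])
            sign = 1.0 if fits[j] < fits[k] else -1.0
            cand = [clip(pop[j][i] + sign * r[i] * (pop[j][i] - pop[k][i]), i)
                    for i in range(n)]
            fc = f(cand)
            if fc < fits[j]:
                pop[j], fits[j] = cand, fc
    best = min(range(pop_size), key=lambda i: fits[i])
    return pop[best], fits[best]
```

Note the minimalist parameter set the text highlights: beyond the population size and cycle budget, nothing needs tuning.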

### 3.5 Multi-Start Single-Agent Stochastic Search (MSASS)

*et al.*, 2013; Salhi, 2017). The former is a simple yet effective local search method that will always try to move from a starting point to a nearby better solution. The latter is in charge of randomly generating different initial points, provided that the computational effort remains acceptable.
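A sketch of the multi-start scheme follows. To stay self-contained, the local search below is a simplified stand-in that keeps only the accept-if-better core of SASS, not the full rule set described in Section 3.2; the 50-failure stopping criterion mirrors the configuration reported later in Section 4.1:

```python
import random

def local_search(f, x, sigma=0.2, max_fail=50):
    """Simplified stand-in for SASS: accept-if-better Gaussian perturbations,
    stopping after max_fail consecutive non-improving iterations."""
    fx, fail = f(x), 0
    while fail < max_fail:
        cand = [min(1.0, max(0.0, v + random.gauss(0.0, sigma))) for v in x]
        fc = f(cand)
        if fc < fx:
            x, fx, fail = cand, fc, 0
        else:
            fail += 1
    return x, fx

def msass(f, n, starts=10):
    """Multi-start: run the local search from independent random points
    in [0, 1]^n and keep the best result."""
    best, best_val = None, float("inf")
    for _ in range(starts):
        x0 = [random.random() for _ in range(n)]
        x, val = local_search(f, x0)
        if val < best_val:
            best, best_val = x, val
    return best, best_val
```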

## 4 Experimentation and Results

### 4.1 Environment and Configuration

The experimentation is built on that of Marín *et al.* (2020). Consequently, the GrC model is simulated with the neural simulation tool NEST (Plesser *et al.*, 2015), in its version 2.14.0, using the 4th-order Runge-Kutta-Fehlberg solver with adaptive step size to integrate the differential equations. The cost function, known as FF4 in the reference work as introduced, is implemented in Python (version 2.7.12) and linked to the NEST simulator. The simulation environment uses a common, fixed random-generation seed to define a stable framework and homogenize the evaluations. In this context, the GA method by Marín *et al.* (2020) was also implemented in Python 2.7.12 using the standard components provided by the DEAP library (Kim and Yoo, 2019).

##### Table 2

| Method | Parameter | 15k f.e. | 22.5k f.e. | 30k f.e. | 60k f.e. |
| --- | --- | --- | --- | --- | --- |
| GA | Population | 1500 | 1500 | 1000 | 1000 |
| | Cycles | 16 | 25 | 50 | 100 |
| MemeGA | Population | 500 | 500 | 300 | 300 |
| | Local f.e. | 10 | 12 | 20 | 35 |
| | Cycles | 20 | 26 | 40 | 50 |
| DE | Population | 250 | 250 | 200 | 150 |
| | Cycles | 60 | 90 | 150 | 400 |
| TLBO | Population | 250 | 250 | 200 | 150 |
| | Cycles | 30 | 45 | 75 | 200 |

*σ* allows for a better local exploration within MemeGA. However, for MSASS, all the parameters of SASS coincide with the recommended values because its execution approach does not have to share resources. That said, every instance of SASS is stopped after 50 non-improving (failed) iterations to save function evaluations for later independent starts. Regarding DE, after preliminary experimentation, it has been configured to use ‘rand’ selection, a single difference vector, and per-generation dither. Notice that the latter aspect avoids adjusting the *F* parameter of the optimizer while retaining its potential convergence advantages. Finally, the crossover rate has been fixed to 0.8, which is in the general-purpose range and between the values used in the extension proposed in Cabrera *et al.* (2011).

### 4.2 Numerical Results

*et al.* (2020), plus the remaining executions to account for 20 cases in total. Finally, to give readers an idea of the real computational cost, it is interesting to mention that completing each cell of 15 000 f.e. takes approximately 9 hours in the experimentation environment, and the other budgets scale accordingly.

##### Table 3

| Effort (f.e.) | GA | MemeGA | DE | TLBO | MSASS |
| --- | --- | --- | --- | --- | --- |
| 15 000 (50%) | Ave.: 115.38 | Ave.: 113.51 | Ave.: 115.78 | Ave.: 105.76 | Ave.: 106.88 |
| | STD: 4.19 | STD: 10.83 | STD: 7.38 | STD: 5.90 | STD: 4.50 |
| 22 500 (75%) | Ave.: 110.32 | Ave.: 111.46 | Ave.: 112.18 | Ave.: 104.61 | Ave.: 104.71 |
| | STD: 5.98 | STD: 8.03 | STD: 5.96 | STD: 7.09 | STD: 5.29 |
| 30 000 (100%) | Ave.: 108.70 | Ave.: 103.45 | Ave.: 107.98 | Ave.: 98.63 | Ave.: 101.20 |
| | STD: 6.59 | STD: 4.47 | STD: 4.74 | STD: 5.97 | STD: 3.52 |
| 60 000 (200%) | Ave.: 104.17 | Ave.: 100.06 | Ave.: 99.44 | Ave.: 97.67 | Ave.: 100.52 |
| | STD: 6.70 | STD: 5.60 | STD: 4.08 | STD: 3.37 | STD: 3.04 |

### 4.3 Discussion

*et al.* (2020), the performance of MemeGA reaches the level of those two referred methods, which remain ahead. Finally, at the highest computational cost, MemeGA, DE, TLBO, and MSASS significantly outperform the reference method, and there is no meaningful difference among the four. Thus, the reference method, GA, is always outperformed by at least TLBO and MSASS. MemeGA and DE also tend to separate from it and join the other two at 30 000 and 60 000 f.e., respectively.

### 4.4 Insight into the Best Solution

*et al.*, 2020) with a score of 104.24 is also compared (orange lines). Finally, the *in vitro* data used to define the fitness function are also represented (black dots) (D’Angelo *et al.*, 2001; Masoli *et al.*, 2017).

##### Fig. 2

*in vitro* recordings used in the fitness function (black dots). A) Spiking resonance curves of the models computed as average burst frequencies in response to sinusoidal stimulation of 6 pA (left) and 8 pA (right) with increasing frequencies (in steps of 0.5 Hz). B) Membrane potential evolution of the TLBO model, which generates spike bursts in response to sinusoidal current injections with an offset of 10 pA and an amplitude of 6 pA; shown after 2 s of stimulation (stabilization). C) Repetitive spike discharge (intensity-frequency curves) of the models computed as the mean frequency in response to step currents of 1 s. D) Latency to the first spike in response to step currents of 1 s. E) Membrane potential traces of the TLBO model in response to step-current injections of 1 s with various amplitudes.

*et al.*, 2020). These resonance curves are the graphical representation of doublets, triplets, or longer bursts of spikes generated under just-threshold sinusoidal stimulation at different frequencies (Fig. 2(B)). The main behaviour of biological GrCs is an increase in spike frequency, together with a decrease in the latency to the first spike, as the injected current increases. A sample of the neuron behaviour from which these features are calculated is shown in Fig. 2(E). The repetitive spike discharge of the TLBO model is similar to that of the reference model and in accordance with the experimental measurements in real cells (Fig. 2(C)). The real improvement obtained by the neuron model of the proposed optimizer lies in the first-spike latency feature. The model from the reference work exhibited longer latencies than those experimentally reported, mainly with low stimulation currents (Fig. 2(D)). In contrast, the TLBO model achieves a latency error of 7.95 ms compared to the 34.95 ms error obtained by the GA model, both with respect to the *in vitro* data. Thus, the TLBO optimizer proves effective not only in fitting the model parameters to diverse spiking features, but also in improving both the quantitative and qualitative reproduction of these supra-threshold characteristics with respect to the methodology of reference (GA).

## 5 Conclusions and Future Work

*in vitro* or *in vivo* experimental biology is limited. Thus, simulations of sufficiently realistic neuronal network models become a valid way to shed light on the functional roles of certain neuronal characteristics, or on the interactions that various mechanisms may have with each other.