<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">INFORMATICA</journal-id>
<journal-title-group><journal-title>Informatica</journal-title></journal-title-group>
<issn pub-type="epub">1822-8844</issn><issn pub-type="ppub">0868-4952</issn><issn-l>0868-4952</issn-l>
<publisher>
<publisher-name>Vilnius University</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">INFOR561</article-id>
<article-id pub-id-type="doi">10.15388/24-INFOR561</article-id>
<article-categories><subj-group subj-group-type="heading">
<subject>Research Article</subject></subj-group></article-categories>
<title-group>
<article-title>Identification of the Optimal Neural Network Architecture for Prediction of Bitcoin Return</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-6279-6070</contrib-id>
<name><surname>Šestanović</surname><given-names>Tea</given-names></name><email xlink:href="tea.sestanovic@efst.hr">tea.sestanovic@efst.hr</email><xref ref-type="aff" rid="j_infor561_aff_001"/><xref ref-type="corresp" rid="cor1">∗</xref><bio>
<p><bold>T. Šestanović</bold> obtained her PhD in economics in 2017. She is currently an assistant professor at the University of Split, Faculty of Economics, Business and Tourism, where she teaches statistics and related courses at all levels of study, as well as business decision making. She is the president of the Croatian Operational Research Society (CRORS) and editor-in-chief of <italic>Croatian Operational Research Review</italic> (<italic>CRORR</italic>). Her main scientific interests are time series, neural networks, financial modelling and statistics.</p></bio>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0001-7203-4064</contrib-id>
<name><surname>Kalinić Milićević</surname><given-names>Tea</given-names></name><email xlink:href="tea.kalinic@efst.hr">tea.kalinic@efst.hr</email><xref ref-type="aff" rid="j_infor561_aff_001"/><bio>
<p><bold>T. Kalinić Milićević</bold> is a teaching assistant at the University of Split, Faculty of Economics, Business and Tourism (FEBT). She graduated in mathematics from the University of Split, Faculty of Science, and completed the postgraduate specialist study programme in business economics at FEBT. She teaches mathematics, quantitative methods, financial modelling and actuarial analysis. She is the treasurer of the Croatian Operational Research Society (CRORS). Her main scientific interests are machine learning models, financial modelling, actuarial science and optimization.</p></bio>
</contrib>
<aff id="j_infor561_aff_001"><institution>University of Split</institution>, Faculty of Economics, Business and Tourism, Cvite Fiskovića 5, 21000 Split, <country>Croatia</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>∗</label>Corresponding author.</corresp>
</author-notes>
<pub-date pub-type="ppub"><year>2025</year></pub-date><pub-date pub-type="epub"><day>9</day><month>7</month><year>2024</year></pub-date><volume>36</volume><issue>1</issue><fpage>175</fpage><lpage>196</lpage><history><date date-type="received"><month>2</month><year>2024</year></date><date date-type="accepted"><month>6</month><year>2024</year></date></history>
<permissions><copyright-statement>© 2025 Vilnius University</copyright-statement><copyright-year>2025</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions>
<abstract>
<p>Neural networks (NNs) are well established and widely used in time series forecasting due to their frequent dominance over other linear and nonlinear models. Thus, this paper does not question their appropriateness for forecasting cryptocurrency prices; rather, it compares the most commonly used NNs, i.e. feedforward neural networks (FFNNs), long short-term memory (LSTM) networks and convolutional neural networks (CNNs). This paper contributes to the existing literature by defining an appropriate NN structure comparable across different NN architectures, which yields the optimal NN model for Bitcoin return forecasting. Moreover, by incorporating turbulent events such as COVID and war, this paper serves as a stress test for NNs. Finally, inputs are carefully selected, mostly covering macroeconomic and market variables, as well as different attractiveness measures, whose importance in cryptocurrency forecasting is tested. The main results indicate that all NNs perform best in a bullish market environment, where CNNs stand out as the optimal models on the continuous dataset and LSTMs emerge as optimal in direction forecasting. In the downturn periods, CNNs stand out as the best models. Additionally, Tweets, as an attractiveness measure, enabled the models to attain superior performance.</p>
</abstract>
<kwd-group>
<label>Key words</label>
<kwd>Bitcoin</kwd>
<kwd>convolutional neural networks</kwd>
<kwd>feedforward neural networks</kwd>
<kwd>long short-term memory</kwd>
<kwd>attractiveness measures</kwd>
</kwd-group>
<funding-group><funding-statement>This work is fully supported by the Croatian Science Foundation (CSF) under the project “Challenges of Alternative Investments” [IP-2019-04-7816].</funding-statement></funding-group>
</article-meta>
</front>
<body>
<sec id="j_infor561_s_001">
<label>1</label>
<title>Introduction</title>
<p>Neural networks have been successfully applied in fields such as finance (Sezer <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_036">2020</xref>), macroeconomics (Šestanović and Arnerić, <xref ref-type="bibr" rid="j_infor561_ref_041">2020</xref>), engineering (Hegde and Rokseth, <xref ref-type="bibr" rid="j_infor561_ref_016">2020</xref>), weather forecasting (Purwandari <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_035">2021</xref>), medicine (Han <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_015">2024</xref>), and many others (Čorić <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_009">2023</xref>). Their forecasting ability has recently been tested on time series data, which exhibit features that have to be taken into account and addressed appropriately (Kalinić Milićević and Marasović, <xref ref-type="bibr" rid="j_infor561_ref_023">2023</xref>; Šestanović, <xref ref-type="bibr" rid="j_infor561_ref_040">2024</xref>). Especially interesting are financial time series, which are nonstationary, may exhibit seasonal patterns or cyclical behaviour, and are nonlinear, with the occasional presence of aberrant observations and the possible existence of regimes within which returns display different dynamic behaviour (Franses and van Dijk, <xref ref-type="bibr" rid="j_infor561_ref_013">2003</xref>).</p>
<p>The dynamic behaviour of cryptocurrencies, as financial time series, displays extreme observations, asymmetries, and several nonlinear characteristics that are difficult to model and forecast (Šestanović, <xref ref-type="bibr" rid="j_infor561_ref_040">2024</xref>). Additionally, the importance of cryptocurrency forecasting lies in their constantly increasing financial market, characterized by high volatility and extreme price fluctuations.</p>
<p>Bitcoin prices are highly volatile since they are influenced by a vast number of factors including but not limited to the supply of bitcoins, the cost of the mining process, market demand, as well as political and economic data (Cavalli and Amoretti, <xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>). Some papers use internal factors for prediction (Polasik <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_033">2015</xref>; Jang and Lee, <xref ref-type="bibr" rid="j_infor561_ref_020">2017</xref>; Sovbetov, <xref ref-type="bibr" rid="j_infor561_ref_037">2018</xref>; Liu and Tsyvinski, <xref ref-type="bibr" rid="j_infor561_ref_026">2020</xref>; Spilak, <xref ref-type="bibr" rid="j_infor561_ref_038">2018</xref>; Fahmi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_011">2018</xref>; Ji <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_022">2019</xref>; Chen <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_008">2020</xref>; Cavalli and Amoretti, <xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>), the others use only Open, High, Low, and Close (OHLC) prices (Indera <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_019">2017</xref>; Fahmi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_011">2018</xref>; Uras <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_044">2020</xref>; Li and Dai, <xref ref-type="bibr" rid="j_infor561_ref_025">2020</xref>), while Azari (<xref ref-type="bibr" rid="j_infor561_ref_004">2019</xref>) and Abu Bakar and Rosbi (<xref ref-type="bibr" rid="j_infor561_ref_001">2017</xref>) use only past closing prices. 
Technical indicators are also used as predictors (Indera <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_019">2017</xref>; Spilak, <xref ref-type="bibr" rid="j_infor561_ref_038">2018</xref>; Pabuccu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_030">2020</xref>; Li and Dai, <xref ref-type="bibr" rid="j_infor561_ref_025">2020</xref>; Cavalli and Amoretti, <xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>). Few papers use macro-finance factors (Polasik <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_033">2015</xref>; Sovbetov, <xref ref-type="bibr" rid="j_infor561_ref_037">2018</xref>; Liu and Tsyvinski, <xref ref-type="bibr" rid="j_infor561_ref_026">2020</xref>; Spilak, <xref ref-type="bibr" rid="j_infor561_ref_038">2018</xref>; Li and Dai, <xref ref-type="bibr" rid="j_infor561_ref_025">2020</xref>; Chen <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_008">2020</xref>), and those that do report a lack of statistical significance when the factors are used in parametric models. In contrast, Walther <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_045">2019</xref>) found that economic activity is the most important exogenous volatility driver, while the results in Jang and Lee (<xref ref-type="bibr" rid="j_infor561_ref_020">2017</xref>) suggest that macro-financial markets can have a small impact on cryptocurrencies. Moreover, Aljinović <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_003">2022</xref>) report significant dynamic conditional correlations between cryptocurrencies and real estate, the S&amp;P500 and gold. 
However, the majority of papers confirm attractiveness as an important factor influencing cryptocurrency prices (Polasik <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_033">2015</xref>; Sovbetov, <xref ref-type="bibr" rid="j_infor561_ref_037">2018</xref>; Li and Dai, <xref ref-type="bibr" rid="j_infor561_ref_025">2020</xref>; Šestanović, <xref ref-type="bibr" rid="j_infor561_ref_039">2021</xref>; Cavalli and Amoretti, <xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>). This factor is even included, in an appropriate and comprehensive manner, among the other indicators in models for portfolio optimization that include cryptocurrencies (Trimborn <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_043">2019</xref>; Aljinović <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_002">2021</xref>). On the other hand, Kalinić Milićević and Marasović (<xref ref-type="bibr" rid="j_infor561_ref_023">2023</xref>) report that different models cannot agree on the importance of Tweets and macro-financial variables in Bitcoin direction forecasting, but show that technical indicators are the most influential, followed by blockchain and market variables. Since different types of attractiveness measures can be found in the literature, this paper compares the most commonly used measures, i.e. Google Trends and Tweets, and their predictive power.</p>
<p>Previous research also reaches ambiguous conclusions regarding the appropriate NN model, which calls for further investigation (Cavalli and Amoretti, <xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>; Zhang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_046">2021</xref>; Li and Dai, <xref ref-type="bibr" rid="j_infor561_ref_025">2020</xref>; Livieris <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_027">2021</xref>; Lahmiri and Bekiros, <xref ref-type="bibr" rid="j_infor561_ref_024">2019</xref>; Ji <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_022">2019</xref>). Additionally, Lahmiri and Bekiros (<xref ref-type="bibr" rid="j_infor561_ref_024">2019</xref>) revealed that cryptocurrencies exhibit fractal dynamics, long memory and self-similarity. Therefore, an accurate and reliable forecasting model is an essential tool for portfolio managers, and it has to be developed and tested.</p>
<p>Feedforward neural networks (FFNNs) are the most popular NN model. Despite their power and proven properties as universal approximators (Hornik <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_017">1989</xref>), FFNNs have the limitation that each input (independent variable) and output (dependent variable) is handled independently, i.e. temporal or spatial information is not incorporated into the model, which is a significant drawback for time series analysis. Recurrent neural networks (RNNs), however, are adapted to time series data, as they incorporate recurrent connections from the output or hidden layers and so-called self-connected neurons, which allow learning the temporal dynamics of time series data (Madaeni <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_028">2022</xref>). However, their main problem is capturing long-term dependencies, as they suffer from vanishing or exploding gradients. An RNN with long short-term memory (LSTM) units has a cell state, which enables stable gradients, while the presence of filters (gates) can control the information flow (Paranhos, <xref ref-type="bibr" rid="j_infor561_ref_031">2021</xref>). LSTMs emerge as the model most commonly used in financial time series prediction (Sezer <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_036">2020</xref>). However, convolutional neural networks (CNNs) have recently challenged LSTMs in predictive power when working with sequences. Namely, CNNs use filters, which help them learn spatial features from raw time series data (Fawaz <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_012">2019</xref>).</p>
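The three mechanisms can be sketched in a few lines of NumPy. This is a purely illustrative toy (not the configuration used in this paper): an FFNN transforms each observation independently, an LSTM cell carries an additive cell state whose updates are gated, and a 1D convolution slides a filter along the series to pick up local patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((30, 4))  # toy series: 30 time steps, 4 input features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# FFNN: every time step is mapped independently (no temporal information)
W, b = rng.standard_normal((4, 8)), np.zeros(8)
ffnn_hidden = np.tanh(x @ W + b)                   # shape (30, 8)

# LSTM cell: input/forget/output gates filter the flow into a
# persistent cell state c, updated additively (stable gradients)
H = 8
Wg = rng.standard_normal((4 + H, 4 * H)) * 0.1     # all four gates stacked
h, c = np.zeros(H), np.zeros(H)
for x_t in x:                                      # process the sequence in order
    z = np.concatenate([x_t, h]) @ Wg
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # gated additive state update
    h = sigmoid(o) * np.tanh(c)                    # hidden state exposed downstream

# 1D CNN: a learned filter slides along the series (here a fixed toy kernel)
k = np.array([0.25, 0.5, 0.25])
conv_out = np.convolve(x[:, 0], k, mode="valid")   # shape (28,)

print(ffnn_hidden.shape, h.shape, conv_out.shape)
```

The FFNN output for step <italic>t</italic> depends only on the inputs at step <italic>t</italic>, while the LSTM hidden state after the loop depends on the whole history, which is exactly the distinction drawn above.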
<p>In this paper, the most commonly used NNs for time series prediction, i.e. FFNNs, LSTMs and CNNs, which have proven their forecasting abilities on time series data, are used to forecast Bitcoin returns. The proposed models are compared across different periods, including bullish, bearish and stable market periods, using different performance measures such as mean squared error (MSE), accuracy and the Diebold-Mariano test, across different inputs and, finally, across different NN architectures. That is, neither input nor NN architecture selection is straightforward, and both should be chosen with caution. Additionally, Uras <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_044">2020</xref>) confirmed that partitioning datasets into shorter sequences, representing different price “regimes”, enables obtaining precise forecasts. Therefore, the Bai-Perron multiple structural break test is used.</p>
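As an illustration of the evaluation step, the Diebold-Mariano test compares two competing forecasts via the differential of their losses. The following is a minimal sketch under squared-error loss and a one-step horizon, on synthetic data (the series and the two "models" are hypothetical stand-ins, not the paper's results):

```python
import math
import numpy as np

def diebold_mariano(e1, e2):
    """DM statistic for equal predictive accuracy under squared-error loss,
    one-step horizon; asymptotically N(0, 1) under the null."""
    d = e1 ** 2 - e2 ** 2                       # loss differential series
    n = len(d)
    dm = d.mean() / math.sqrt(d.var(ddof=1) / n)
    p = math.erfc(abs(dm) / math.sqrt(2.0))     # two-sided normal p-value
    return dm, p

rng = np.random.default_rng(1)
actual = rng.standard_normal(250)               # stand-in daily return series
f1 = actual + rng.normal(0.0, 0.5, 250)         # forecasts of model 1
f2 = actual + rng.normal(0.0, 1.0, 250)         # noisier forecasts of model 2

dm, p = diebold_mariano(actual - f1, actual - f2)
mse1 = np.mean((actual - f1) ** 2)
mse2 = np.mean((actual - f2) ** 2)
print(f"MSE1={mse1:.3f}  MSE2={mse2:.3f}  DM={dm:.2f}  p={p:.4f}")
```

A negative statistic favours the first model; for horizons beyond one step, the variance of the loss differential is in practice estimated with an autocorrelation-consistent estimator rather than the plain sample variance used in this sketch.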
<p>To sum up, the main contributions of this paper to the current literature are:</p>
<list>
<list-item id="j_infor561_li_001">
<label>•</label>
<p>It defines the appropriate NN structure comparable across different NN architectures, which yields the optimal NN model for Bitcoin return forecasting.</p>
</list-item>
<list-item id="j_infor561_li_002">
<label>•</label>
<p>It determines which of the attractiveness measures, already proven in the literature as an important variable for Bitcoin prediction, yields the optimal results.</p>
</list-item>
<list-item id="j_infor561_li_003">
<label>•</label>
<p>It compares the results across different periods based on a non-arbitrary selection of sub-periods using the Bai-Perron multiple structural break test and by employing different performance measures, including MSE and accuracy, as well as the Diebold-Mariano test.</p>
</list-item>
</list>
<p>The remainder of the paper is organized as follows. Section <xref rid="j_infor561_s_002">2</xref> provides a literature review of related work, Section <xref rid="j_infor561_s_003">3</xref> describes the proposed methodology, including dataset definition, data preprocessing, a description of neural network architectures, as well as model evaluation criteria. Section <xref rid="j_infor561_s_010">4</xref> presents experimental results with discussion. Finally, conclusions and directions for future research are provided in Section <xref rid="j_infor561_s_015">5</xref>.</p>
</sec>
<sec id="j_infor561_s_002">
<label>2</label>
<title>Related Work</title>
<p>There are papers that compare FFNNs to other linear and nonlinear models and find NNs to have the highest predictive performance (Greaves and Au, <xref ref-type="bibr" rid="j_infor561_ref_014">2015</xref>; Pabuccu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_030">2020</xref>). Namely, Greaves and Au (<xref ref-type="bibr" rid="j_infor561_ref_014">2015</xref>) compare a Support Vector Machine (SVM), logistic regression (LR), a baseline model and NNs in classification, and obtain the highest classification accuracy, of 55.1%, with NNs. Pabuccu <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_030">2020</xref>) aim to forecast Bitcoin prices applying four different machine learning (ML) methods, i.e. SVM, NNs, Naive Bayes (NB) and Random Forest (RF), with LR as a benchmark model. They conclude that on the continuous dataset RF has the best forecasting performance, while NB has the worst; on the discrete dataset, NNs have the best forecasting performance, while NB again lags behind the other models. Šestanović (<xref ref-type="bibr" rid="j_infor561_ref_039">2021</xref>) confirmed the ability of simple FFNNs with a lower number of hidden neurons to accurately predict the Bitcoin price direction, compared both to previous research and to LR, while Šestanović (<xref ref-type="bibr" rid="j_infor561_ref_040">2024</xref>) predicted the Bitcoin price, returns, direction and volatility: return and volatility predictions are stable regardless of model or period; returns and direction are predicted best with NNs; ARIMAX and NNARX models predicted prices effectively; all models predict volatility in a similar way; and price prediction was the most accurate, whereas JNNX showed poor performance. However, these papers did not use more sophisticated machine learning models for prediction, which have been shown in the literature to have superior performance.</p>
<p>Since CNN and LSTM methods have proven their forecasting abilities, more and more research has recently been testing their abilities in new circumstances. Several studies confirm that CNN has superior prediction abilities in comparison to LSTM and other NN architectures (Cavalli and Amoretti, <xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>; Zhang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_046">2021</xref>; Šestanović and Kalinić Milićević, <xref ref-type="bibr" rid="j_infor561_ref_042">2023</xref>), while some improve the accuracy by combining the CNN and LSTM (Li and Dai, <xref ref-type="bibr" rid="j_infor561_ref_025">2020</xref>; Livieris <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_027">2021</xref>). Other research confirms that LSTM exhibits superior predictive abilities when compared to various NN architectures (Lahmiri and Bekiros, <xref ref-type="bibr" rid="j_infor561_ref_024">2019</xref>; Ji <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_022">2019</xref>; Spilak, <xref ref-type="bibr" rid="j_infor561_ref_038">2018</xref>). In contrast, Uras <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_044">2020</xref>) indicate that linear regression models outperform NNs, while in Chen <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_008">2020</xref>) LR and Discriminant Analysis outperform more complicated machine learning algorithms. Since the literature does not give a unique answer, this calls for further investigation. Table <xref rid="j_infor561_tab_001">1</xref> provides a brief overview of the key features of related research and a comparison of related work concerning variables, data, models and key findings.</p>
<table-wrap id="j_infor561_tab_001">
<label>Table 1</label>
<caption>
<p>Key features of prior studies.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Cryptocurrency</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Time period, frequency</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Variables</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Approach</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">The best model</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Greaves and Au (<xref ref-type="bibr" rid="j_infor561_ref_014">2015</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin</td>
<td style="vertical-align: top; text-align: left">2012-02-01 to 2013-04-01 (test set), hourly</td>
<td style="vertical-align: top; text-align: left">Internal factors</td>
<td style="vertical-align: top; text-align: left">SVM, Logistic Regression, Baseline model, NNs</td>
<td style="vertical-align: top; text-align: left">NNs</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Spilak (<xref ref-type="bibr" rid="j_infor561_ref_038">2018</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin, Dash, XRP, Monero, Litecoin, Dogecoin, NXT, Namecoin</td>
<td style="vertical-align: top; text-align: left">2014-07 to 2017-10 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors, technical indicators, macroeconomic variables</td>
<td style="vertical-align: top; text-align: left">FFNN, RNN, LSTM</td>
<td style="vertical-align: top; text-align: left">LSTM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Ji <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_022">2019</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin</td>
<td style="vertical-align: top; text-align: left">2011-11-29 to 2018-12-31 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors</td>
<td style="vertical-align: top; text-align: left">DNN, LSTM, CNN, ResNet, and their combinations, and SVM, GRU, linear/logistic R</td>
<td style="vertical-align: top; text-align: left">LSTM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Lahmiri and Bekiros (<xref ref-type="bibr" rid="j_infor561_ref_024">2019</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin, Bitcoin Cash and XRP</td>
<td style="vertical-align: top; text-align: left">Bitcoin: 2010-07-16 to 2018-10-01, Digital Cash: 2010-02-08 to 2018-10-01, Ripple: 2015-01-21 to 2018-10-01 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors</td>
<td style="vertical-align: top; text-align: left">LSTM, GRNN</td>
<td style="vertical-align: top; text-align: left">LSTM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Chen <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_008">2020</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin</td>
<td style="vertical-align: top; text-align: left">2017-02-02 to 2019-02-01 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors, sentiment analysis variables, macroeconomic factors</td>
<td style="vertical-align: top; text-align: left">Logistic Regression, DA, RF, XGBoost, Quadratic DA, SVM, LSTM</td>
<td style="vertical-align: top; text-align: left">LR and DA</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Li and Dai (<xref ref-type="bibr" rid="j_infor561_ref_025">2020</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin</td>
<td style="vertical-align: top; text-align: left">2016-12-30 to 2018-08-01 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors, technical indicators, macroeconomic variables, sentiment analysis variables</td>
<td style="vertical-align: top; text-align: left">BPNN, CNN, LSTM, CNN-LSTM</td>
<td style="vertical-align: top; text-align: left">CNN-LSTM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Pabuccu <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_030">2020</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin</td>
<td style="vertical-align: top; text-align: left">2008 to 2019 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors, technical indicators</td>
<td style="vertical-align: top; text-align: left">SVM, NNs, NB, RF, Logistic Regression</td>
<td style="vertical-align: top; text-align: left">RF (continuous dataset), NNs (discrete dataset)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Uras <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_044">2020</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin, Litecoin and Ether</td>
<td style="vertical-align: top; text-align: left">2015-11-15 to 2020-03-12 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors</td>
<td style="vertical-align: top; text-align: left">MLR, FFNN, LSTM</td>
<td style="vertical-align: top; text-align: left">LR</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Cavalli and Amoretti (<xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin</td>
<td style="vertical-align: top; text-align: left">2013-04-28 to 2020-02-15 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors, technical indicators, sentiment analysis variables</td>
<td style="vertical-align: top; text-align: left">CNN, LSTM</td>
<td style="vertical-align: top; text-align: left">CNN</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Livieris <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_027">2021</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin, Ether and XRP</td>
<td style="vertical-align: top; text-align: left">2017-01-01 to 2020-10-31 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors</td>
<td style="vertical-align: top; text-align: left">Three CNN-LSTM models based on different sets of inputs</td>
<td style="vertical-align: top; text-align: left">MICDL</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Zhang <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_046">2021</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin, Bitcoin Cash, Litecoin, Ether, EOS, and XRP</td>
<td style="vertical-align: top; text-align: left">2017-07-23 to 2020-07-15 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors</td>
<td style="vertical-align: top; text-align: left">ARIMA, RF, XGBoost, MLP, LSTM, CNN, GRU, SVM</td>
<td style="vertical-align: top; text-align: left">CNN</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Šestanović (<xref ref-type="bibr" rid="j_infor561_ref_039">2021</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin</td>
<td style="vertical-align: top; text-align: left">2016-04 to 2021-04</td>
<td style="vertical-align: top; text-align: left">Internal factors, macroeconomic factors, sentiment analysis variables</td>
<td style="vertical-align: top; text-align: left">Logistic Regression, FFNN</td>
<td style="vertical-align: top; text-align: left">FFNN</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jaquart <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_021">2022</xref>)</td>
<td style="vertical-align: top; text-align: left">100 cryptocurrencies</td>
<td style="vertical-align: top; text-align: left">2018-02-08 to 2022-05-15 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors</td>
<td style="vertical-align: top; text-align: left">LSTM, GRU, TCN, GB, RF, LR</td>
<td style="vertical-align: top; text-align: left">GRU, LSTM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Šestanović and Kalinić Milićević (<xref ref-type="bibr" rid="j_infor561_ref_042">2023</xref>)</td>
<td style="vertical-align: top; text-align: left">Bitcoin</td>
<td style="vertical-align: top; text-align: left">2017-07-05 to 2022-01-01 daily</td>
<td style="vertical-align: top; text-align: left">Internal factors, sentiment analysis variables, macroeconomic factors</td>
<td style="vertical-align: top; text-align: left">FFNN, CNN, LSTM</td>
<td style="vertical-align: top; text-align: left">CNN</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Šestanović (<xref ref-type="bibr" rid="j_infor561_ref_040">2024</xref>)</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Bitcoin</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">2016-04-09 to 2021-04-09 daily</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Internal factors, macroeconomic factors, sentiment analysis variables</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">ARIMAX, NNARX, JNNX, GARCH, NNAR, JNN, FNN, Logistic Regression</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">NNs for return and direction forecasting, ARIMAX and NNARX for price forecasting</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>: Autoregressive Integrated Moving Average (ARIMA), ARIMA with exogenous inputs (ARIMAX), Back Propagation Neural Network (BPNN), Discriminant Analysis (DA), Convolutional Neural Network (CNN), Deep Neural Network (DNN), Deep Residual Network (ResNet), Extreme Gradient Boosting (XGBoost), Feed Forward Neural Network (FFNN), Gated Recurrent Units (GRUs), General Regression Neural Network (GRNN), Generalized Autoregressive Conditional Heteroskedasticity (GARCH), Jordan Neural Networks (JNN), Jordan Neural Networks with exogenous inputs (JNNX), Linear Regression (LR), Long Short-Term Memory (LSTM), Multiple-Input Cryptocurrency Deep Learning Model (MICDL), Multilayer Perceptron (MLP), Multiple Linear Regression (MLR), Naive Bayes (NB), Neural Networks (NNs), Neural Network Autoregression (NNAR), Neural Network Autoregression with Exogenous Input (NNARX), Random Forest (RF), Recurrent Neural Network (RNN), Support Vector Machine (SVM), Temporal Convolutional Networks (TCN).</p>
</table-wrap-foot>
</table-wrap>
<p>Cavalli and Amoretti (<xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>) predict Bitcoin direction with One-Dimensional (1D) CNN and demonstrate, using large datasets collected in a cloud-based system, that the 1D CNN allows for the prediction of the Bitcoin trend with higher accuracy compared to LSTM models.</p>
<p>Among other papers that combine different deep neural network architectures, Livieris <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_027">2021</xref>) proposed a Multiple-Input Cryptocurrency Deep Learning (MICDL) model based on a CNN-LSTM approach for predicting the prices of Bitcoin, Ether and XRP. The proposed model is compared to two CNN-LSTM models: one trained on only one cryptocurrency and one trained on all three cryptocurrencies. Utilizing all cryptocurrencies in the training data of the MICDL yielded the forecasting model with the best return and direction predictions.</p>
<p>Motivated by the high correlations among different cryptocurrencies, as well as the powerful modelling efficiency exhibited by DL models, Zhang <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_046">2021</xref>) propose a CNN-based Weighted and Attentive Memory Channels model to predict the daily closing prices of cryptocurrencies. The results indicate that the proposed model outperforms the baseline models (ARIMA, RF, XGBoost, MLP, LSTM, CNN, GRU, SVM) in predictive performance. However, the hyperparameters of the baseline models are left at their default settings, and the paper does not use any additional inputs.</p>
<p>Li and Dai (<xref ref-type="bibr" rid="j_infor561_ref_025">2020</xref>) propose a hybrid NN model based on CNN and LSTM. The CNN is used for feature extraction, and the extracted features become inputs to the LSTM for training and prediction of the Bitcoin price. They conclude that the CNN-LSTM can effectively improve the accuracy of both value and direction prediction compared with simple NNs.</p>
<p>Uras <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_044">2020</xref>) forecast daily closing prices of Bitcoin, Litecoin and Ether using simple and multiple linear regression models (RM), as well as FFNN and LSTM models. The best results were obtained with the RM and LSTM models; overall, the linear RMs outperform the NNs.</p>
<p>Chen <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_008">2020</xref>) predict the Bitcoin price at daily and high-frequency intervals. LR and Discriminant Analysis (DA) achieve an accuracy of 66%, outperforming more complex machine learning (ML) models, including RF, XGBoost, Quadratic DA, SVM and LSTM.</p>
<p>Lahmiri and Bekiros (<xref ref-type="bibr" rid="j_infor561_ref_024">2019</xref>) implement LSTM and generalized regression NN (GRNN) models to forecast the prices of Bitcoin, Bitcoin Cash and XRP. The predictability of the LSTM is significantly higher than that of the GRNN, proving LSTMs highly efficient in forecasting cryptocurrency prices.</p>
<p>Ji <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_022">2019</xref>) compare deep NN (DNN), LSTM, CNN and deep residual network models, and their combinations, for Bitcoin price prediction, as well as SVM, GRU and linear/logistic RMs, which performed worse than or on par with the other models. They conclude that LSTM slightly outperforms the other models, while DNN performed the best in the classification problem.</p>
<p>Spilak (<xref ref-type="bibr" rid="j_infor561_ref_038">2018</xref>) uses FFNN, RNN and LSTM in classification tasks to predict the price directions of 8 major cryptocurrencies with a rolling-window RM. The study reveals that LSTM has the highest accuracy for direction prediction of the most important cryptocurrencies, FFNN has the best generalization power for three cryptocurrencies, while RNN shows poor predictive performance, seemingly failing to extract the necessary information.</p>
<p>Jaquart <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_021">2022</xref>) train models to predict binary relative daily market movements of the 100 largest cryptocurrencies, using only daily closing prices and market capitalization data. The GRU and LSTM models perform best, as portfolios based on these models’ predictions yield the highest performance.</p>
<p>Šestanović and Kalinić Milićević (<xref ref-type="bibr" rid="j_infor561_ref_042">2023</xref>) estimated FFNN, CNN and LSTM models in a downturn period for Bitcoin return prediction, applying a multi-criteria decision-making approach to model selection. They concluded that the optimal model is the CNN.</p>
<p>In line with previous research findings that the majority of optimal models are built on CNN and LSTM architectures, this article presents a comprehensive approach to forecasting Bitcoin returns using FFNN, CNN and LSTM. The employed NN architectures are thoroughly explained and compared across various configurations and periods in order to draw conclusions about the important NN settings. Namely, this paper serves as a stress test for NN architectures, since it tests the abilities of more sophisticated architectures in different sub-periods covering bullish, bearish and stable market conditions, identified in an objective way, i.e. using the Bai-Perron structural break test. Previous studies do not include downturn periods, which are usually more difficult to predict, in their analyses. The results are compared across different performance measures and tested using the Diebold-Mariano test. Additionally, although some previous studies use a sentiment analysis variable, they do not compare performance across such variables. Finally, although other papers sometimes use even more inputs for prediction, they restrict themselves to technical indicators or internal factors only, whereas this paper uses representatives of all important factor categories in a comprehensive manner.</p>
</sec>
<sec id="j_infor561_s_003">
<label>3</label>
<title>Proposed Methodology</title>
<sec id="j_infor561_s_004">
<label>3.1</label>
<title>Dataset Definition, Preprocessing and Partitioning</title>
<p>In order to create the initial dataset, different types of factors are extracted. They can be divided into four main categories: internal factors, technical indicators, external factors, and attractiveness measures. The considered factors are given in Table <xref rid="j_infor561_tab_002">2</xref>.</p>
<table-wrap id="j_infor561_tab_002">
<label>Table 2</label>
<caption>
<p>Factors considered in analysis.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Category</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Variables</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Source</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Internal factors</td>
<td style="vertical-align: top; text-align: left">Close price, volume, market capitalization, average block size, average block time, average hash rate, average transaction fee</td>
<td style="vertical-align: top; text-align: left"><uri>https://bitinfocharts.com</uri></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Technical indicators</td>
<td style="vertical-align: top; text-align: left">Moving average of close price, lag return</td>
<td style="vertical-align: top; text-align: left">Calculated</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">External factors</td>
<td style="vertical-align: top; text-align: left">S&amp;P500, VIX, Gold</td>
<td style="vertical-align: top; text-align: left"><uri>https://fred.stlouisfed.org</uri></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Attractiveness measure</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Google Trends, Tweets</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><uri>https://bitinfocharts.com</uri></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>: The database is available at: <uri>https://github.com/TKalini/Sestanovic-Kalinic-Milicevic-2024/blob/main/Database_SestanovicKalinicMilicevic.csv</uri>.</p>
</table-wrap-foot>
</table-wrap>
<p>Internal factors comprise market and blockchain variables. Market data usually include OHLC Bitcoin prices, volume, and market capitalization. Since Bitcoin prices are not affected by seasonality in the way stock prices are, the open, high, and low Bitcoin prices are excluded from our analysis; in that manner, the dimensionality problem is also avoided. Average block size, average block time, average hash rate, and average transaction fee are selected from the set of available blockchain metrics. Namely, Poyser (<xref ref-type="bibr" rid="j_infor561_ref_034">2017</xref>) defines supply and demand (transaction cost, reward system, mining difficulty, coin circulation, rule changes), i.e. blockchain measures, as the main internal factors with a direct impact on the market price. Since previous research involving different technical indicators yielded good results (Pabuccu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_030">2020</xref>; Li and Dai, <xref ref-type="bibr" rid="j_infor561_ref_025">2020</xref>; Cavalli and Amoretti, <xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>), the moving average of the close price and the lag return are also selected as inputs. Although researchers disagree on the impact of external factors on Bitcoin prices and returns, the following external factors are considered in this paper: the S&amp;P500, the Chicago Board Options Exchange volatility index (VIX), and Gold prices. Finally, widely utilized indicators of attractiveness such as Google Trends are used. Google Trends provides insight into the fluctuation of interest in Bitcoin as a search term over a certain timeframe, while Tweets indicates the daily count of tweets using the word “Bitcoin”. Bitcoin closing prices are used to calculate the return for the following day, which is the dependent variable in the model (Šestanović and Kalinić Milićević, <xref ref-type="bibr" rid="j_infor561_ref_042">2023</xref>).</p>
<p>More formally, the next-day return <inline-formula id="j_infor561_ineq_001"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${R_{t}}$]]></tex-math></alternatives></inline-formula>, is calculated as follows: 
<disp-formula id="j_infor561_eq_001">
<label>(1)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mo movablelimits="false">ln</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {R_{t}}=\ln \frac{{P_{t+1}}}{{P_{t}}},\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_infor561_ineq_002"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{t}}$]]></tex-math></alternatives></inline-formula> are Bitcoin closing prices. Moreover, prices <inline-formula id="j_infor561_ineq_003"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{t}}$]]></tex-math></alternatives></inline-formula> are used to calculate lag returns as well as moving average values, for different window lengths <italic>w</italic> with equations (<xref rid="j_infor561_eq_002">2</xref>) and (<xref rid="j_infor561_eq_003">3</xref>): <disp-formula-group id="j_infor561_dg_001">
<disp-formula id="j_infor561_eq_002">
<label>(2)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:mtext mathvariant="italic">Lag</mml:mtext>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mo movablelimits="false">ln</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& \textit{Lag}{R_{t}}=\ln \frac{{P_{t}}}{{P_{t-1}}},\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_infor561_eq_003">
<label>(3)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:msub>
<mml:mrow>
<mml:mtext mathvariant="italic">MA</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mspace width="0.1667em"/>
<mml:mo>=</mml:mo>
<mml:mspace width="0.1667em"/>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& {\textit{MA}_{t}}=\frac{1}{w}{\sum \limits_{i\hspace{0.1667em}=\hspace{0.1667em}t-w}^{t}}{P_{i}}.\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
</disp-formula-group> Using the aforementioned variables, i.e. different internal factors, technical indicators, external factors, and based on two attractiveness measures, i.e. Google Trends and Tweets, two initial (<inline-formula id="j_infor561_ineq_004"><alternatives><mml:math>
<mml:mtext mathvariant="italic">IN</mml:mtext></mml:math><tex-math><![CDATA[$\textit{IN}$]]></tex-math></alternatives></inline-formula>) datasets are created: <inline-formula id="j_infor561_ineq_005"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="italic">IN</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${D_{\textit{IN}}^{1}}$]]></tex-math></alternatives></inline-formula> containing all the variables and the variable Google Trends and <inline-formula id="j_infor561_ineq_006"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="italic">IN</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${D_{\textit{IN}}^{2}}$]]></tex-math></alternatives></inline-formula> containing all the variables and the variable Tweets. These sets are used to compare the predictive performance of the models depending on the selected attractiveness measure.</p>
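<p>As a brief illustration, the quantities in equations (1)–(3) can be computed directly from the closing-price series. The sketch below is not the paper’s code: it assumes pandas, uses made-up prices, and follows equation (3) literally, i.e. the sum runs over the <inline-formula><alternatives><mml:math><mml:mi mathvariant="italic">w</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$w+1$]]></tex-math></alternatives></inline-formula> points from <inline-formula><alternatives><mml:math><mml:mi mathvariant="italic">t</mml:mi><mml:mo>−</mml:mo><mml:mi mathvariant="italic">w</mml:mi></mml:math><tex-math><![CDATA[$t-w$]]></tex-math></alternatives></inline-formula> to <italic>t</italic> and is divided by <italic>w</italic>.</p>

```python
import numpy as np
import pandas as pd

def build_features(prices: pd.Series, w: int = 7) -> pd.DataFrame:
    """Compute the variables of equations (1)-(3) from closing prices."""
    # Eq. (1): next-day return R_t = ln(P_{t+1} / P_t) -- the dependent variable
    r_next = np.log(prices.shift(-1) / prices)
    # Eq. (2): lag return LagR_t = ln(P_t / P_{t-1})
    lag_r = np.log(prices / prices.shift(1))
    # Eq. (3): MA_t = (1/w) * sum_{i=t-w}^{t} P_i  (sum over w+1 points, as written)
    ma = prices.rolling(window=w + 1).sum() / w
    return pd.DataFrame({"R": r_next, "LagR": lag_r, "MA": ma})

# Hypothetical closing prices, for illustration only
prices = pd.Series([100.0, 102.0, 101.0, 105.0, 107.0])
feats = build_features(prices, w=2)
```

<p>The first lag return and moving average, and the last next-day return, are undefined (NaN) by construction and are dropped before training.</p>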
<p>More formally, initial datasets can be defined in the form of supervised data as <inline-formula id="j_infor561_ineq_007"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="italic">IN</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo fence="true" stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${D_{\textit{IN}}^{1}}={\{({x_{t,i}},{y_{t}})\}_{t\in {T^{\prime }}}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_008"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="italic">IN</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo fence="true" stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${D_{\textit{IN}}^{2}}={\{({z_{t,i}},{y_{t}})\}_{t\in {T^{\prime }}}}$]]></tex-math></alternatives></inline-formula>, where <inline-formula id="j_infor561_ineq_009"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${T^{\prime }}$]]></tex-math></alternatives></inline-formula> is the total initial sample size, <inline-formula id="j_infor561_ineq_010"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${x_{t}}\in {\mathbf{R}^{p}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_011"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${z_{t}}\in {\mathbf{R}^{p}}$]]></tex-math></alternatives></inline-formula> are vectors of <inline-formula id="j_infor561_ineq_012"><alternatives><mml:math>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>14</mml:mn></mml:math><tex-math><![CDATA[$p=14$]]></tex-math></alternatives></inline-formula> (<inline-formula id="j_infor561_ineq_013"><alternatives><mml:math>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi></mml:math><tex-math><![CDATA[$i=1,\dots ,p$]]></tex-math></alternatives></inline-formula>) independent variables differing only by the attractiveness measure, and <inline-formula id="j_infor561_ineq_014"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="bold">R</mml:mi></mml:math><tex-math><![CDATA[${y_{t}}={R_{t}}\in \mathbf{R}$]]></tex-math></alternatives></inline-formula> is the dependent variable, i.e. the next-day return.</p>
<p>The data-preprocessing phase of the analysis consists of handling missing data, calculating percentage changes, and scaling the data. The selected period contains few data gaps, so the quality of the collected data is not compromised. Missing values, found at some points in time and mostly related to external factors, were filled in using linear interpolation.</p>
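<p>As an illustrative sketch (not the paper’s actual code), filling such gaps with linear interpolation is a one-liner in pandas; the dates and values below are hypothetical.</p>

```python
import numpy as np
import pandas as pd

# Hypothetical daily series for an external factor with interior and trailing gaps
idx = pd.date_range("2016-01-06", periods=6, freq="D")
factor = pd.Series([2012.7, np.nan, np.nan, 1990.3, 1943.1, np.nan], index=idx)

# Linear interpolation fills interior gaps; a forward fill covers a trailing
# gap, for which no later observation exists to interpolate against.
filled = factor.interpolate(method="linear").ffill()
```

<p>Interior missing values are replaced by points on the straight line between the surrounding observations, preserving the local trend of the series.</p>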
<p>In order to improve training efficacy and NN convergence, the actual values for the majority of independent variables were replaced with percentage changes. For all the observed variables except lag returns <inline-formula id="j_infor561_ineq_015"><alternatives><mml:math>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$(i=1)$]]></tex-math></alternatives></inline-formula>, existing values are replaced with corresponding percentage changes, i.e. <inline-formula id="j_infor561_ineq_016"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">←</mml:mo><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[${x_{t,i}}\gets \frac{({x_{t,i}}-{x_{t-1,i}})}{{x_{t,i}}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor561_ineq_017"><alternatives><mml:math>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$t\in {T^{\prime }}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor561_ineq_018"><alternatives><mml:math>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi></mml:math><tex-math><![CDATA[$i=2,\dots ,p$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_019"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">←</mml:mo><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[${z_{t,i}}\gets \frac{({z_{t,i}}-{z_{t-1,i}})}{{z_{t,i}}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor561_ineq_020"><alternatives><mml:math>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$t\in {T^{\prime }}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor561_ineq_021"><alternatives><mml:math>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi></mml:math><tex-math><![CDATA[$i=2,\dots ,p$]]></tex-math></alternatives></inline-formula>.</p>
<p>Following the preparation of the initial datasets <inline-formula id="j_infor561_ineq_022"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="italic">IN</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${D_{\textit{IN}}^{1}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_023"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="italic">IN</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${D_{\textit{IN}}^{2}}$]]></tex-math></alternatives></inline-formula>, each of final dataset <inline-formula id="j_infor561_ineq_024"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo fence="true" stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${D^{1}}={\{({x_{t,i}},{y_{t}})\}_{t\in T}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_025"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo fence="true" stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${D^{2}}={\{({z_{t,i}},{y_{t}})\}_{t\in T}}$]]></tex-math></alternatives></inline-formula> for <inline-formula id="j_infor561_ineq_026"><alternatives><mml:math>
<mml:mi mathvariant="italic">T</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>2337</mml:mn></mml:math><tex-math><![CDATA[$T=0,1,\dots ,2337$]]></tex-math></alternatives></inline-formula> included in our analysis contains a total of 2338 points. Indices <inline-formula id="j_infor561_ineq_027"><alternatives><mml:math>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn></mml:math><tex-math><![CDATA[$t=0$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_028"><alternatives><mml:math>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2337</mml:mn></mml:math><tex-math><![CDATA[$t=2337$]]></tex-math></alternatives></inline-formula> correspond to the dates 2016-01-06 and 2022-05-31, respectively. To conclude this step of preprocessing, given that machine learning algorithms perform better with scaled data, the min-max scaler, which rescales variables into the <inline-formula id="j_infor561_ineq_029"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo fence="true" stretchy="false">]</mml:mo></mml:math><tex-math><![CDATA[$[0,1]$]]></tex-math></alternatives></inline-formula> range, is used.</p>
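The min-max rescaling step can be sketched as follows; `min_max_scale` is an illustrative helper written for this explanation (the paper does not name its implementation, and scikit-learn's `MinMaxScaler` behaves equivalently):

```python
def min_max_scale(values):
    """Rescale a sequence linearly into the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# toy price levels: the minimum maps to 0.0 and the maximum to 1.0
prices = [430.0, 19000.0, 9500.0]
scaled = min_max_scale(prices)
```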
<p>Most research papers on Bitcoin price and return forecasting split the dataset into a training set and a testing set in a predetermined proportion. In this paper, by contrast, breakpoints in the Bitcoin price trend are detected first. For each breakpoint, the model is then trained on the period preceding the trend break and tested on part of the period following it. In this way, the ability of a NN model to make predictions in a new environment can be tested in an objective and unbiased manner. The Bai-Perron multiple structural break test is used to detect the breaks. Bai and Perron (<xref ref-type="bibr" rid="j_infor561_ref_005">1998</xref>) studied multiple structural changes in a linear regression model estimated by minimizing the sum of squared residuals, treating the dates of the <italic>m</italic> breaks as unknown quantities to be estimated. Their primary concerns are the properties of the estimators, particularly the break date estimates, and the design of tests that allow inference about the presence of structural change and the number of breaks. Applying this test yields the indices of five dates <inline-formula id="j_infor561_ineq_030"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="italic">T</mml:mi></mml:math><tex-math><![CDATA[${T_{1}},{T_{2}},\dots ,{T_{5}}\in T$]]></tex-math></alternatives></inline-formula> at which the test recognized a structural change. For each <inline-formula id="j_infor561_ineq_031"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">k</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>5</mml:mn></mml:math><tex-math><![CDATA[${T_{k}},k=1,\dots ,5$]]></tex-math></alternatives></inline-formula> two sets are defined:</p>
<list>
<list-item id="j_infor561_li_004">
<label>1.</label>
<p>Train set <inline-formula id="j_infor561_ineq_032"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="italic">train</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo stretchy="false">|</mml:mo>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo>
<mml:mo fence="true" stretchy="false">}</mml:mo>
<mml:mo stretchy="false">⊂</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${D_{\textit{train}}^{k}}=\{({x_{t,i}},{y_{t}})|t\in \{0,1,\dots ,{T_{k}}\}\}\subset {D^{1}}$]]></tex-math></alternatives></inline-formula> and</p>
</list-item>
<list-item id="j_infor561_li_005">
<label>2.</label>
<p>Test set <inline-formula id="j_infor561_ineq_033"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="italic">test</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo stretchy="false">|</mml:mo>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo fence="true" stretchy="false">}</mml:mo>
<mml:mo fence="true" stretchy="false">}</mml:mo>
<mml:mo stretchy="false">⊂</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${D_{\textit{test}}^{k}}=\{({x_{t,i}},{y_{t}})|t\in \{{T_{k}}+1,\dots ,{T_{k}}+n\}\}\subset {D^{1}}$]]></tex-math></alternatives></inline-formula>, where <inline-formula id="j_infor561_ineq_034"><alternatives><mml:math>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>·</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>100</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[$n=\frac{{T_{k}}\cdot 5}{100}$]]></tex-math></alternatives></inline-formula>.</p>
</list-item>
</list>
<p>Each training set is thus followed by its own test set, whose size equals five percent of the size of the associated training set.</p>
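Concretely, the construction of one train/test pair can be sketched as below; the `make_partition` helper is illustrative, and the integer truncation of <italic>n</italic> is an assumption (the text specifies five percent but not the rounding rule). With 1893 training points (Table 3, subset 5), the breakpoint index would be 1892.

```python
def make_partition(dataset, T_k):
    """Train on t = 0..T_k, test on the n = T_k * 5 / 100 points that follow."""
    n = T_k * 5 // 100                      # assumed: truncate to an integer
    train = dataset[: T_k + 1]              # t in {0, 1, ..., T_k}
    test = dataset[T_k + 1 : T_k + 1 + n]   # t in {T_k + 1, ..., T_k + n}
    return train, test

data = list(range(2338))                    # stand-in for the (x_t, y_t) pairs
train, test = make_partition(data, 1892)    # subset 5: 1893 training points
```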
<p>Table <xref rid="j_infor561_tab_003">3</xref> displays the time interval and number of observations for each partition. Four out of five partitions (subsets) are used to build models with different NN architectures. Because it contains too few data points, the first partition is excluded from the study.</p>
<table-wrap id="j_infor561_tab_003">
<label>Table 3</label>
<caption>
<p>Dataset partitions obtained with Bai-Perron multiple structural break test.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 1</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 2</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 3</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 4</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 5</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Start date</td>
<td style="vertical-align: top; text-align: left">2016-01-06</td>
<td style="vertical-align: top; text-align: left">2016-01-06</td>
<td style="vertical-align: top; text-align: left">2016-01-06</td>
<td style="vertical-align: top; text-align: left">2016-01-06</td>
<td style="vertical-align: top; text-align: left">2016-01-06</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Close date</td>
<td style="vertical-align: top; text-align: left">2016-12-27</td>
<td style="vertical-align: top; text-align: left">2017-12-15</td>
<td style="vertical-align: top; text-align: left">2019-03-25</td>
<td style="vertical-align: top; text-align: left">2020-03-11</td>
<td style="vertical-align: top; text-align: left">2021-03-12</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Number of data points</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">357</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">710</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">1175</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">1527</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">1893</td>
</tr>
</tbody>
</table>
</table-wrap>
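The least-squares criterion behind the breakpoint detection can be illustrated for the single-break case; this didactic sketch only locates one break in a constant-mean model and is not the full Bai-Perron procedure (which handles <italic>m</italic> breaks and formal inference):

```python
def best_single_break(y, min_seg=2):
    """Return the break index minimizing the total sum of squared residuals
    of two constant-mean segments (the m = 1 case of the least-squares
    criterion used by Bai and Perron)."""
    def ssr(seg):
        mean = sum(seg) / len(seg)
        return sum((v - mean) ** 2 for v in seg)

    best_t, best_cost = None, float("inf")
    for t in range(min_seg, len(y) - min_seg + 1):
        cost = ssr(y[:t]) + ssr(y[t:])
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t  # first index of the post-break regime

series = [1.0, 1.1, 0.9, 1.0, 5.0, 5.2, 4.9, 5.1]
bp = best_single_break(series)
```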
<fig id="j_infor561_fig_001">
<label>Fig. 1</label>
<caption>
<p>Flow chart of close price and return within train and test sets for observed partitions.</p>
</caption>
<graphic xlink:href="infor561_g001.jpg"/>
</fig>
<p>A graphical representation of price movements and next-day returns within the training and testing sets for each observed partition is shown in Fig. <xref rid="j_infor561_fig_001">1</xref>.</p>
</sec>
<sec id="j_infor561_s_005">
<label>3.2</label>
<title>Neural Network Architectures</title>
<p>In this subsection, the structure of the neural networks and the methods used to evaluate the models are described. For the sake of simplicity, the following is explained for the set <inline-formula id="j_infor561_ineq_035"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${D^{1}}$]]></tex-math></alternatives></inline-formula>, whereas in the experiment, the same was also done for the set <inline-formula id="j_infor561_ineq_036"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${D^{2}}$]]></tex-math></alternatives></inline-formula>.</p>
<sec id="j_infor561_s_006">
<label>3.2.1</label>
<title>Feedforward Neural Networks</title>
<fig id="j_infor561_fig_002">
<label>Fig. 1</label>
<caption>
<p>(<italic>continued</italic>)</p>
</caption>
<graphic xlink:href="infor561_g002.jpg"/>
</fig>
<p>Feedforward neural networks (FFNNs) are the most commonly used NNs. They consist of three layers: input, hidden, and output. The inputs and outputs are the independent and dependent variables predefined by the researcher, while the number of hidden neurons is one of the hyperparameters that has to be fine-tuned. The unknown parameters (weights) are estimated using the backpropagation (BP) learning algorithm. An FFNN can be written as follows: 
<disp-formula id="j_infor561_eq_004">
<label>(4)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="2.45em" minsize="2.45em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">h</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">h</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="2.45em" minsize="2.45em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
<mml:mi mathvariant="italic">h</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">h</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="2.45em" minsize="2.45em">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="2.45em" minsize="2.45em">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">ε</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {y_{t}}={\sigma _{1}}\Bigg({w_{co}}+{\sum \limits_{h=1}^{q}}{w_{ho}}{\sigma _{2}}\Bigg({w_{ch}}+{\sum \limits_{i=1}^{p}}{w_{ih}}{x_{t,i}}\Bigg)\Bigg)+{\varepsilon _{t}},\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_infor561_ineq_037"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${y_{t}}$]]></tex-math></alternatives></inline-formula> is the output vector of a time series, <inline-formula id="j_infor561_ineq_038"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${x_{t,i}}$]]></tex-math></alternatives></inline-formula> is the input matrix with <italic>p</italic> variables, while <inline-formula id="j_infor561_ineq_039"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${\sigma _{1}}(\cdot )$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_040"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${\sigma _{2}}(\cdot )$]]></tex-math></alternatives></inline-formula> are the activation functions in the output and hidden layers, respectively; each can be sigmoid, hyperbolic tangent, or linear. Weights <inline-formula id="j_infor561_ineq_041"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${w_{co}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_042"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
<mml:mi mathvariant="italic">h</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${w_{ch}}$]]></tex-math></alternatives></inline-formula> are the constant (bias) terms of the output and hidden neurons, respectively. Weights <inline-formula id="j_infor561_ineq_043"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">h</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${w_{ih}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_044"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">h</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${w_{ho}}$]]></tex-math></alternatives></inline-formula> are the weights connecting the inputs to the hidden neurons and the hidden neurons to the output, respectively. <inline-formula id="j_infor561_ineq_045"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">ε</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\varepsilon _{t}}$]]></tex-math></alternatives></inline-formula> is an error term.</p>
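To make Eq. (4) concrete, a single forward pass can be sketched in plain Python. The toy weights below are arbitrary, and as an illustrative choice σ1 is taken to be linear while σ2 is the sigmoid (the equation permits sigmoid, hyperbolic tangent, or linear for either):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def ffnn_forward(x, w_ih, w_ch, w_ho, w_co):
    """Eq. (4) with sigma_2 = sigmoid and sigma_1 = identity (linear output).
    w_ih[h][i] links input i to hidden neuron h; w_ch[h] and w_co are the
    constant (bias) terms of hidden neuron h and the output."""
    q, p = len(w_ih), len(x)
    hidden = [sigmoid(w_ch[h] + sum(w_ih[h][i] * x[i] for i in range(p)))
              for h in range(q)]
    return w_co + sum(w_ho[h] * hidden[h] for h in range(q))

# p = 2 inputs, q = 2 hidden neurons, toy weights
y = ffnn_forward([0.2, 0.5], [[0.1, -0.3], [0.4, 0.2]], [0.0, 0.0], [1.0, -1.0], 0.05)
```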
</sec>
<sec id="j_infor561_s_007">
<label>3.2.2</label>
<title>Long Short-Term Memory</title>
<p>A Long Short-Term Memory (LSTM) network combines LSTM units into an LSTM layer; each unit is composed of cells with input, output and forget gates that control the flow of information. The LSTM is given in Eqs. (<xref rid="j_infor561_eq_005">5</xref>)–(<xref rid="j_infor561_eq_009">9</xref>): <disp-formula-group id="j_infor561_dg_002">
<disp-formula id="j_infor561_eq_005">
<label>(5)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">W</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& {f_{t}}={\sigma _{g}}({W_{f}}{x_{t,i}}+{U_{f}}{y_{t-1}}+{b_{f}}),\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_infor561_eq_006">
<label>(6)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">W</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& {i_{t}}={\sigma _{g}}({W_{i}}{x_{t,i}}+{U_{i}}{y_{t-1}}+{b_{i}}),\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_infor561_eq_007">
<label>(7)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">W</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& {o_{t}}={\sigma _{g}}({W_{o}}{x_{t,i}}+{U_{o}}{y_{t-1}}+{b_{o}}),\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_infor561_eq_008">
<label>(8)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow/>
<mml:mrow>
<mml:mo>∗</mml:mo>
</mml:mrow>
</mml:msup>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow/>
<mml:mrow>
<mml:mo>∗</mml:mo>
</mml:mrow>
</mml:msup>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">W</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& {c_{t}}={f_{t}}{^{\ast }}{c_{t-1}}+{i_{t}}{^{\ast }}{\sigma _{y}}({W_{c}}{x_{t,i}}+{U_{c}}{y_{t-1}}+{b_{c}}),\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_infor561_eq_009">
<label>(9)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow/>
<mml:mrow>
<mml:mo>∗</mml:mo>
</mml:mrow>
</mml:msup>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& {y_{t}}={o_{t}}{^{\ast }}{\sigma _{y}}({c_{t}}),\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
</disp-formula-group> where <inline-formula id="j_infor561_ineq_046"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${x_{t,i}}$]]></tex-math></alternatives></inline-formula> is the input vector to the LSTM unit. <inline-formula id="j_infor561_ineq_047"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${f_{t}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor561_ineq_048"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${i_{t}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_049"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${o_{t}}$]]></tex-math></alternatives></inline-formula> are the forget, input and output gate’s activation vectors respectively. <inline-formula id="j_infor561_ineq_050"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${y_{t}}$]]></tex-math></alternatives></inline-formula> is the output vector of the LSTM unit, <inline-formula id="j_infor561_ineq_051"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${c_{t}}$]]></tex-math></alternatives></inline-formula> is the cell state vector, <inline-formula id="j_infor561_ineq_052"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">g</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\sigma _{g}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_053"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\sigma _{y}}$]]></tex-math></alternatives></inline-formula> are the sigmoid and hyperbolic tangent functions, respectively. <sup>∗</sup> denotes the element-wise (Hadamard) product, <italic>W</italic> and <italic>U</italic> are weight matrices, and <italic>b</italic> are bias vectors (Bao <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_006">2017</xref>; Sezer <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_036">2020</xref>).</p>
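The gate equations above can be sketched for a single time step in NumPy (a minimal illustration with assumed shapes and dictionary-keyed weights, not the implementation used in this study):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, y_prev, c_prev, W, U, b):
    """One LSTM time step: the three gates, the cell state update, and the output."""
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ y_prev + b["f"])    # forget gate
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ y_prev + b["i"])    # input gate
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ y_prev + b["o"])    # output gate
    c_hat = np.tanh(W["c"] @ x_t + U["c"] @ y_prev + b["c"])  # candidate cell state
    c_t = f_t * c_prev + i_t * c_hat                          # cell state (Hadamard products)
    y_t = o_t * np.tanh(c_t)                                  # output vector
    return y_t, c_t

# Tiny example: 3 inputs, 2 hidden units, zero initial state
rng = np.random.default_rng(0)
W = {g: rng.standard_normal((2, 3)) for g in "fioc"}
U = {g: rng.standard_normal((2, 2)) for g in "fioc"}
b = {g: np.zeros(2) for g in "fioc"}
y, c = lstm_step(rng.standard_normal(3), np.zeros(2), np.zeros(2), W, U, b)
```

Because the gates lie in (0, 1) and the hyperbolic tangent in (−1, 1), every component of the output vector is bounded in (−1, 1).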
</sec>
<sec id="j_infor561_s_008">
<label>3.2.3</label>
<title>Convolutional Neural Networks</title>
<p>Convolutional neural networks (CNNs) combine several layer types in their architecture: convolutional, max-pooling, dropout and fully connected (FFNN) layers. The convolutional layer performs the convolution (filtering) operation shown in Eq. (<xref rid="j_infor561_eq_010">10</xref>): 
<disp-formula id="j_infor561_eq_010">
<label>(10)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>∗</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo>−</mml:mo>
<mml:mi>∞</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>∞</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">a</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">a</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ s(t)=({x^{\ast }}w)(t)={\sum \limits_{a=-\infty }^{\infty }}x(a)w(t-a),\]]]></tex-math></alternatives>
</disp-formula> 
where <italic>t</italic> is time, <italic>s</italic> is the feature map, <italic>w</italic> is the kernel, <italic>x</italic> is the input, and <italic>a</italic> is the summation variable. For two-dimensional images, the convolution operation takes the form given in Eq. (<xref rid="j_infor561_eq_011">11</xref>): 
<disp-formula id="j_infor561_eq_011">
<label>(11)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">S</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>∗</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">m</mml:mi>
</mml:mrow>
</mml:munder>
<mml:munder>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi mathvariant="italic">I</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">m</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">m</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ S(i,j)=({I^{\ast }}K)(i,j)=\sum \limits_{m}\sum \limits_{n}I(m,n)K(i-m,j-n),\]]]></tex-math></alternatives>
</disp-formula> 
where <italic>I</italic> is the input image, <italic>K</italic> is the kernel, <italic>m</italic> and <italic>n</italic> are the image dimensions, and <italic>i</italic> and <italic>j</italic> are the output indices. Deep architectures stack consecutive convolutional and max-pooling layers. A CNN also includes the fully connected FFNN layer given in Eq. (<xref rid="j_infor561_eq_012">12</xref>): 
<disp-formula id="j_infor561_eq_012">
<label>(12)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:munder>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">W</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {z_{i}}=\sum \limits_{j}{W_{i,j}}{x_{j}}+{b_{i}},\]]]></tex-math></alternatives>
</disp-formula> 
where <italic>W</italic> is the weight matrix, <italic>x</italic> is the input vector, <italic>b</italic> is the bias vector, and <italic>z</italic> is the vector of neuron outputs, which is passed through the softmax activation function to calculate the output (<italic>y</italic>) of the output layer, as shown in Eqs. (<xref rid="j_infor561_eq_013">13</xref>) and (<xref rid="j_infor561_eq_014">14</xref>) (Sezer <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_036">2020</xref>): <disp-formula-group id="j_infor561_dg_003">
<disp-formula id="j_infor561_eq_013">
<label>(13)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="normal">softmax</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& y=\mathrm{softmax}({z_{i}}),\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_infor561_eq_014">
<label>(14)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:mi mathvariant="normal">softmax</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mtext>e</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mtext>e</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& \mathrm{softmax}({z_{i}})=\frac{{\text{e}^{{z_{i}}}}}{{\textstyle\sum _{j}}{\text{e}^{{z_{j}}}}}.\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
</disp-formula-group></p>
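A short NumPy sketch of Eqs. (10), (12) and (14) makes these operations concrete (an illustrative sketch only; the two-dimensional case of Eq. (11) follows the same pattern with a double summation):

```python
import numpy as np

def conv1d(x, w):
    """Discrete 1-D convolution, Eq. (10): s(t) = sum_a x(a) * w(t - a)."""
    n = len(x) + len(w) - 1          # length of the full feature map
    s = np.zeros(n)
    for t in range(n):
        for a in range(len(x)):
            if 0 <= t - a < len(w):
                s[t] += x[a] * w[t - a]
    return s

def dense(x, W, b):
    """Fully connected layer, Eq. (12): z_i = sum_j W_ij x_j + b_i."""
    return W @ x + b

def softmax(z):
    """Softmax activation, Eq. (14); the max is subtracted for numerical stability."""
    e = np.exp(z - z.max())
    return e / e.sum()

s = conv1d(np.array([1.0, 2.0, 3.0]), np.array([1.0, 1.0]))  # -> [1., 3., 5., 3.]
```

The hand-rolled `conv1d` agrees with `np.convolve` in its default "full" mode, and the softmax output always sums to one, as a probability distribution over classes must.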
</sec>
</sec>
<sec id="j_infor561_s_009">
<label>3.3</label>
<title>Model Evaluation</title>
<p>Given that predicting next-day returns is a regression problem, the performance of the models was evaluated using the mean squared error (MSE), which is calculated as: 
<disp-formula id="j_infor561_eq_015">
<label>(15)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mtext mathvariant="italic">MSE</mml:mtext>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ \textit{MSE}=\frac{1}{T}{\sum \limits_{t=1}^{T}}{({y_{t}}-{\hat{y}_{t}})^{2}},\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_infor561_ineq_054"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${y_{t}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_055"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\hat{y}_{t}}$]]></tex-math></alternatives></inline-formula> are the observed and predicted values of returns, and <italic>T</italic> is the total number of observations. Furthermore, since accurately predicting the sign of the next-day return is as important as predicting its value, the models were also analysed from that perspective. Therefore, the dependent variable was converted from continuous to binary values using the following rule: 
<disp-formula id="j_infor561_eq_016">
<label>(16)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced separators="" open="{" close="">
<mml:mrow>
<mml:mtable columnspacing="4.0pt" equalrows="false" columnlines="none" equalcolumns="false" columnalign="left left">
<mml:mtr>
<mml:mtd class="array">
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd class="array">
<mml:mtext>if</mml:mtext>
<mml:mspace width="2.5pt"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>⩾</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="array">
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd class="array">
<mml:mtext>if</mml:mtext>
<mml:mspace width="2.5pt"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {y_{bin,t}}=\left\{\begin{array}{l@{\hskip4.0pt}l}1,\hspace{1em}& \text{if}\hspace{2.5pt}{y_{t}}\geqslant 0,\\ {} 0,\hspace{1em}& \text{if}\hspace{2.5pt}{y_{t}}\lt 0.\end{array}\right.\]]]></tex-math></alternatives>
</disp-formula> 
The above conversion was performed on both the observed and predicted values of the dependent variable, in both the train and test sets. Accuracy (ACC) was used to quantify the ability of the models to predict the direction of price movements. It is the ratio of the number of correct predictions (i.e. true positives, <inline-formula id="j_infor561_ineq_056"><alternatives><mml:math>
<mml:mtext mathvariant="italic">TP</mml:mtext></mml:math><tex-math><![CDATA[$\textit{TP}$]]></tex-math></alternatives></inline-formula>, and true negatives, <inline-formula id="j_infor561_ineq_057"><alternatives><mml:math>
<mml:mtext mathvariant="italic">TN</mml:mtext></mml:math><tex-math><![CDATA[$\textit{TN}$]]></tex-math></alternatives></inline-formula>) to the total number of input samples (<italic>T</italic>). 
<disp-formula id="j_infor561_eq_017">
<label>(17)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mtext mathvariant="italic">ACC</mml:mtext>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mtext mathvariant="italic">TP</mml:mtext>
<mml:mo>+</mml:mo>
<mml:mtext mathvariant="italic">TN</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ \textit{ACC}=\frac{\textit{TP}+\textit{TN}}{T}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
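Eqs. (15)–(17) translate directly into code; the following sketch, with illustrative return values, shows the whole evaluation pipeline:

```python
import numpy as np

def mse(y, y_hat):
    """Eq. (15): mean squared error over T observations."""
    return np.mean((y - y_hat) ** 2)

def to_binary(y):
    """Eq. (16): 1 if the return is non-negative, else 0."""
    return (y >= 0).astype(int)

def accuracy(y, y_hat):
    """Eq. (17): share of correctly predicted return signs, (TP + TN) / T."""
    return np.mean(to_binary(y) == to_binary(y_hat))

y     = np.array([0.012, -0.034,  0.005, -0.001])  # observed returns (illustrative)
y_hat = np.array([0.008, -0.020, -0.002, -0.004])  # predicted returns (illustrative)
print(mse(y, y_hat))       # ≈ 6.75e-05
print(accuracy(y, y_hat))  # 0.75: the sign of the third return is missed
```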
<p>In addition, the Diebold-Mariano test (Diebold and Mariano, <xref ref-type="bibr" rid="j_infor561_ref_010">1995</xref>) is used to test the equality of predictive ability between two models, i.e. whether there is a statistically significant difference in their forecasting performance. It enables identification of the optimal NN, i.e. the one with the best forecasting performance.</p>
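A simplified sketch of the Diebold-Mariano statistic for squared-error losses and one-step-ahead forecasts follows (the long-run variance is approximated by the lag-0 sample variance of the loss differential; the paper does not detail its exact implementation):

```python
import numpy as np
from math import erf, sqrt

def diebold_mariano(e1, e2):
    """DM test of equal predictive accuracy for two forecast-error series,
    squared-error loss, one-step-ahead horizon (no autocovariance correction)."""
    d = e1 ** 2 - e2 ** 2                            # loss differential series
    dbar = d.mean()
    se = np.sqrt(((d - dbar) ** 2).mean() / len(d))  # standard error of dbar
    dm = dbar / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(dm) / sqrt(2))))  # two-sided, N(0,1)
    return dm, p_value

# Example: model 1 has clearly smaller errors, so dm is negative and
# the null of equal predictive accuracy is rejected
rng = np.random.default_rng(42)
dm, p = diebold_mariano(0.5 * rng.standard_normal(500), rng.standard_normal(500))
```

Under the null hypothesis of equal accuracy the statistic is asymptotically standard normal, so the p-value is read off the normal distribution.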
</sec>
</sec>
<sec id="j_infor561_s_010">
<label>4</label>
<title>Experimental Results</title>
<sec id="j_infor561_s_011">
<label>4.1</label>
<title>NNs Configuration</title>
<p>Three different NN architectures are used to build models: FFNN, LSTM and CNN. FFNNs consist of input, hidden, and output neurons. Inputs and outputs are theoretically driven and predefined by the researcher, while the number of hidden neurons is one of the hyperparameters that has to be fine-tuned. Based on Patterson (<xref ref-type="bibr" rid="j_infor561_ref_032">1998</xref>), Moshiri and Cameron (<xref ref-type="bibr" rid="j_infor561_ref_029">2000</xref>) and Hwarng (<xref ref-type="bibr" rid="j_infor561_ref_018">2001</xref>), the following five working rules for determining the number of hidden neurons are applied: 
<list>
<list-item id="j_infor561_li_006">
<label>•</label>
<p>Patterson (<xref ref-type="bibr" rid="j_infor561_ref_032">1998</xref>): <inline-formula id="j_infor561_ineq_058"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>10</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[${q_{1}}=\frac{{T_{i}}}{10(p+1)}$]]></tex-math></alternatives></inline-formula>;</p>
</list-item>
<list-item id="j_infor561_li_007">
<label>•</label>
<p>Moshiri and Cameron (<xref ref-type="bibr" rid="j_infor561_ref_029">2000</xref>): <inline-formula id="j_infor561_ineq_059"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>5</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[${q_{2}}=\frac{{T_{i}}}{5(p+1)}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_060"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msqrt></mml:math><tex-math><![CDATA[${q_{3}}=\sqrt{p}$]]></tex-math></alternatives></inline-formula>;</p>
</list-item>
<list-item id="j_infor561_li_008">
<label>•</label>
<p>Hwarng (<xref ref-type="bibr" rid="j_infor561_ref_018">2001</xref>): <inline-formula id="j_infor561_ineq_061"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[${q_{4}}=\frac{p}{2}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor561_ineq_062"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mn>3</mml:mn>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[${q_{5}}=\frac{3p}{2}$]]></tex-math></alternatives></inline-formula>,</p>
</list-item>
</list> 
where <inline-formula id="j_infor561_ineq_063"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${T_{i}}$]]></tex-math></alternatives></inline-formula> stands for the number of observations in the train set, <italic>p</italic> for the number of independent variables, and there is only one dependent variable. Considering that this research is conducted on four different training sets, the value of <inline-formula id="j_infor561_ineq_064"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${T_{i}}$]]></tex-math></alternatives></inline-formula> varies for each set, whereas there are always <inline-formula id="j_infor561_ineq_065"><alternatives><mml:math>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>14</mml:mn></mml:math><tex-math><![CDATA[$p=14$]]></tex-math></alternatives></inline-formula> independent variables. Table <xref rid="j_infor561_tab_004">4</xref> shows the number of neurons calculated using each of the five above-mentioned formulas for each of the four observed subsets. The resulting sets of neurons are given in Table <xref rid="j_infor561_tab_005">5</xref>. Table <xref rid="j_infor561_tab_006">6</xref> presents the main hyperparameters of the NN configurations.</p>
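The five rules are straightforward to evaluate; below is a quick sketch for <italic>p</italic> = 14, where T_i = 710 is an assumed train-set size used only for illustration:

```python
from math import sqrt

def hidden_neuron_rules(T_i, p):
    """Candidate hidden-neuron counts from Patterson (1998),
    Moshiri and Cameron (2000), and Hwarng (2001)."""
    return {
        "q1": T_i / (10 * (p + 1)),  # Patterson
        "q2": T_i / (5 * (p + 1)),   # Moshiri and Cameron
        "q3": sqrt(p),               # Moshiri and Cameron
        "q4": p / 2,                 # Hwarng
        "q5": 3 * p / 2,             # Hwarng
    }

q = hidden_neuron_rules(710, 14)  # T_i = 710 is hypothetical
# q["q1"] ≈ 4.73, q["q2"] ≈ 9.47, q["q3"] ≈ 3.74, q["q4"] = 7.0
```

The non-integer candidates are then rounded to nearby integers to form discrete sets of hidden-neuron counts to try.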
<table-wrap id="j_infor561_tab_004">
<label>Table 4</label>
<caption>
<p>Number of neurons with different formulas for different subsets.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 2</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 3</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 4</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 5</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor561_ineq_066"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${q_{1}}$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left">4.73</td>
<td style="vertical-align: top; text-align: left">7.83</td>
<td style="vertical-align: top; text-align: left">10.18</td>
<td style="vertical-align: top; text-align: left">12.62</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor561_ineq_067"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${q_{2}}$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left">9.47</td>
<td style="vertical-align: top; text-align: left">15.67</td>
<td style="vertical-align: top; text-align: left">20.36</td>
<td style="vertical-align: top; text-align: left">25.24</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor561_ineq_068"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${q_{3}}$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left">3.74</td>
<td style="vertical-align: top; text-align: left">3.74</td>
<td style="vertical-align: top; text-align: left">3.74</td>
<td style="vertical-align: top; text-align: left">3.74</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor561_ineq_069"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${q_{4}}$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left">7</td>
<td style="vertical-align: top; text-align: left">7</td>
<td style="vertical-align: top; text-align: left">7</td>
<td style="vertical-align: top; text-align: left">7</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor561_ineq_070"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${q_{5}}$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">10.5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">10.5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">10.5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">10.5</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="j_infor561_tab_005">
<label>Table 5</label>
<caption>
<p>Set of observed neurons for different subsets.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 2</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 3</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 4</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subset 5</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Set of neurons</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor561_ineq_071"><alternatives><mml:math>
<mml:mn>3</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>4</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>7</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>9</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>10</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>11</mml:mn></mml:math><tex-math><![CDATA[$3,4,7,9,10,11$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor561_ineq_072"><alternatives><mml:math>
<mml:mn>3</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>4</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>7</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>10</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>11</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>15</mml:mn></mml:math><tex-math><![CDATA[$3,4,7,10,11,15$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor561_ineq_073"><alternatives><mml:math>
<mml:mn>3</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>4</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>7</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>10</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>11</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>20</mml:mn></mml:math><tex-math><![CDATA[$3,4,7,10,11,20$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor561_ineq_074"><alternatives><mml:math>
<mml:mn>3</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>4</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>7</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>10</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>12</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>25</mml:mn></mml:math><tex-math><![CDATA[$3,4,7,10,12,25$]]></tex-math></alternatives></inline-formula></td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="j_infor561_tab_006">
<label>Table 6</label>
<caption>
<p>NNs configurations.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">FFNN</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">LSTM</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">CNN</td>
</tr>
</thead>
<tbody>
<tr>
<td colspan="3" style="vertical-align: top; text-align: center"><inline-formula id="j_infor561_ineq_075"><alternatives><mml:math>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>14</mml:mn></mml:math><tex-math><![CDATA[$p=14$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td colspan="3" style="vertical-align: top; text-align: center">learning algorithm is stochastic gradient descent</td>
</tr>
<tr>
<td colspan="3" style="vertical-align: top; text-align: center">learning rates are 0.01, 0.001, 0.0001</td>
</tr>
<tr>
<td colspan="3" style="vertical-align: top; text-align: center">loss function is mean square error</td>
</tr>
<tr>
<td colspan="3" style="vertical-align: top; text-align: center">batch sizes are 2, 32, set size</td>
</tr>
<tr>
<td colspan="3" style="vertical-align: top; text-align: center">number of epochs is 500</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">
<list>
<list-item id="j_infor561_li_009">
<label>•</label>
<p>one hidden layer,</p>
</list-item>
<list-item id="j_infor561_li_010">
<label>•</label>
<p>tangent hyperbolic activation functions,</p>
</list-item>
<list-item id="j_infor561_li_011">
<label>•</label>
<p>set of neurons in Table <xref rid="j_infor561_tab_005">5</xref>.</p>
</list-item>
</list>
</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">
<list>
<list-item id="j_infor561_li_012">
<label>•</label>
<p>one LSTM layer,</p>
</list-item>
<list-item id="j_infor561_li_013">
<label>•</label>
<p>one dense layer,</p>
</list-item>
<list-item id="j_infor561_li_014">
<label>•</label>
<p>neurons in Table <xref rid="j_infor561_tab_005">5</xref> multiplied by 10.</p>
</list-item>
</list>
</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">
<list>
<list-item id="j_infor561_li_015">
<label>•</label>
<p>1-dimensional convolutional layer,</p>
</list-item>
<list-item id="j_infor561_li_016">
<label>•</label>
<p>MaxPooling1D layer for max pooling,</p>
</list-item>
<list-item id="j_infor561_li_017">
<label>•</label>
<p>tangent hyperbolic activation function,</p>
</list-item>
<list-item id="j_infor561_li_018">
<label>•</label>
<p>32 filters,</p>
</list-item>
<list-item id="j_infor561_li_019">
<label>•</label>
<p>pool size of 2,</p>
</list-item>
<list-item id="j_infor561_li_020">
<label>•</label>
<p>kernel sizes are 210, 330, 450 and 540 for the four observed subsets,<sup>1</sup></p>
</list-item>
<list-item id="j_infor561_li_021">
<label>•</label>
<p>set of neurons in Table <xref rid="j_infor561_tab_005">5</xref>.</p>
</list-item>
</list>
</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><sup>1</sup>Kernel sizes are calculated with the formula <inline-formula id="j_infor561_ineq_076"><alternatives><mml:math><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="normal">set</mml:mi>
<mml:mi mathvariant="normal">size</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>100</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo>·</mml:mo>
<mml:mn>30</mml:mn></mml:math><tex-math><![CDATA[$\frac{\mathrm{set}\mathrm{size}}{100}\cdot 30$]]></tex-math></alternatives></inline-formula> (Cavalli and Amoretti, <xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>).</p>
</table-wrap-foot>
</table-wrap>
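<p>The kernel-size rule in the footnote of Table 6 can be sketched as a small helper. Applying integer division before the multiplication reproduces the reported values (210, 330, 450, 540) for set sizes 710, 1175, 1527 and 1893; the flooring step is an assumption on our part, not stated explicitly in the paper.</p>

```python
def kernel_size(set_size: int) -> int:
    """CNN kernel size from Table 6's footnote: (set size / 100) * 30.

    The integer division is an assumed rounding convention that matches
    the four reported kernel sizes exactly.
    """
    return (set_size // 100) * 30

# Kernel sizes for the four observed subset sizes
print([kernel_size(s) for s in (710, 1175, 1527, 1893)])  # → [210, 330, 450, 540]
```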
<p>The proposed NNs were trained on each of the four subsets and evaluated on the associated test sets for various parameter values, with a fixed seed used in each experimental run to ensure reproducibility. In total, 729 unique models were estimated. In addition, for each model, the true and predicted values of the dependent variable in the test set were converted into equivalent binary values (signifying the direction of the Bitcoin price movement), which were used to measure the accuracy of each model. Finally, models were compared based on both MSE and accuracy (ACC).</p>
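<p>The direction-based accuracy described above can be sketched as follows: predicted and true returns are mapped to binary up/down indicators and compared. This is an illustrative reconstruction, not the authors' code; the convention that a non-negative return counts as "up" is an assumption.</p>

```python
import numpy as np

def direction_accuracy(y_true, y_pred):
    """Share of test observations where the predicted return implies the
    same direction of the Bitcoin price move as the true return."""
    up_true = np.asarray(y_true) >= 0   # assumption: non-negative = "up"
    up_pred = np.asarray(y_pred) >= 0
    return float(np.mean(up_true == up_pred))

y_true = np.array([0.01, -0.02, 0.03, -0.01])   # hypothetical true returns
y_pred = np.array([0.02, 0.01, 0.01, -0.03])    # hypothetical predictions
print(direction_accuracy(y_true, y_pred))  # → 0.75
```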
<p>Because the two datasets differ in the attractiveness measure they include, models using Google Trends and models using Tweets are compared independently.</p>
</sec>
<sec id="j_infor561_s_012">
<label>4.2</label>
<title>Comparison of Models for Google Trends Dataset</title>
<p>Models are constructed on four partitions of the dataset that includes Google Trends as the attractiveness measure, using three NN architectures with different parameter settings. For each NN structure and each subset, the model with the lowest MSE on the test set was sought. The best models obtained, together with the tuned hyperparameters, are shown in Table <xref rid="j_infor561_tab_007">7</xref>.</p>
<p>The first section of the table shows the values of the tuned hyperparameters, while the second section shows various performance measures for each model, along with the Diebold-Mariano (DM) test of predictive performance for pairs of models. Models are ranked according to the MSE on the test set. The top half of the ranks is occupied by the models developed on Subsets 3 and 4. These Subsets are characterized by an upward trend in Bitcoin prices, which is predicted well by all NN models. In Subsets 3 and 4, CNN has the lowest MSE, significantly lower than that of both FFNN and LSTM.</p>
<p>There is no statistically significant difference between LSTM and FFNN based on the DM test. The models constructed on Subset 2 ranked the worst, followed by the models built on Subset 5. In these two Subsets, i.e. in the bearish market, FFNN has the best predictive performance in terms of MSE. In both Subsets, the DM test shows no significant difference between the FFNN and LSTM models, while both outperform CNN. Clearly, the NNs could not capture the slump in Bitcoin prices at the end of 2017 and at the beginning of 2021. The two best models in terms of test-set MSE are the CNNs for Subsets 3 and 4. Cavalli and Amoretti (<xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>), Zhang <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_046">2021</xref>) and Šestanović and Kalinić Milićević (<xref ref-type="bibr" rid="j_infor561_ref_042">2023</xref>) reach similar conclusions.</p>
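<p>The Diebold-Mariano statistics can be reproduced with a standard implementation. The following is a minimal sketch for one-step-ahead forecasts under a squared-error loss; the loss function and horizon are assumptions, and small-sample corrections are omitted.</p>

```python
import numpy as np

def diebold_mariano(e1, e2, h=1):
    """DM statistic for equal predictive accuracy of two forecast series.

    e1, e2 : forecast errors of the two competing models
    h      : forecast horizon (1 assumed here)
    Uses squared-error loss; positive values favour model 2.
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    n = len(d)
    d_bar = d.mean()
    # long-run variance of the mean: autocovariances up to lag h-1
    gamma = [np.mean((d[k:] - d_bar) * (d[: n - k] - d_bar)) for k in range(h)]
    var_d_bar = (gamma[0] + 2 * sum(gamma[1:])) / n
    return d_bar / np.sqrt(var_d_bar)

e1 = np.array([1.0, 2.0, 1.0, 2.0])   # hypothetical errors, model 1 (worse)
e2 = np.zeros(4)                      # hypothetical errors, model 2 (perfect)
print(round(diebold_mariano(e1, e2), 4))  # → 3.3333
```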
<table-wrap id="j_infor561_tab_007">
<label>Table 7</label>
<caption>
<p>The optimal models for each subset (S2 to S5) for Google Trends along with tuned hyperparameters, performance measures and Diebold-Mariano test.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subsets</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">No. of neurons</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Learning rate</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Batch size</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Test MSE</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Test ACC</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Test MSE rank</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Test ACC rank</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">FFNN</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">LSTM</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">S2-FFNN</td>
<td style="vertical-align: top; text-align: left">7</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00506</td>
<td style="vertical-align: top; text-align: left">48.57%</td>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">11</td>
<td style="vertical-align: top; text-align: left">/</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S2-LSTM</td>
<td style="vertical-align: top; text-align: left">40</td>
<td style="vertical-align: top; text-align: left">0.0001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00511</td>
<td style="vertical-align: top; text-align: left">54.29%</td>
<td style="vertical-align: top; text-align: left">11</td>
<td style="vertical-align: top; text-align: left">6</td>
<td style="vertical-align: top; text-align: left">1.2736</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S2-CNN</td>
<td style="vertical-align: top; text-align: left">9</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">710</td>
<td style="vertical-align: top; text-align: left">0.00535</td>
<td style="vertical-align: top; text-align: left">61.54%</td>
<td style="vertical-align: top; text-align: left">12</td>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">2.6778**</td>
<td style="vertical-align: top; text-align: left">2.2926**</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.00517</td>
<td style="vertical-align: top; text-align: left">0.54799</td>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S3-FFNN</td>
<td style="vertical-align: top; text-align: left">7</td>
<td style="vertical-align: top; text-align: left">0.0001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00157</td>
<td style="vertical-align: top; text-align: left">63.79%</td>
<td style="vertical-align: top; text-align: left">4</td>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">/</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S3-LSTM</td>
<td style="vertical-align: top; text-align: left">70</td>
<td style="vertical-align: top; text-align: left">0.01</td>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.00155</td>
<td style="vertical-align: top; text-align: left">65.52%</td>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">0.8786</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S3-CNN</td>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">0.01</td>
<td style="vertical-align: top; text-align: left">1175</td>
<td style="vertical-align: top; text-align: left">0.00131</td>
<td style="vertical-align: top; text-align: left">65.31%</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">2.3097**</td>
<td style="vertical-align: top; text-align: left">2.2330**</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.00148</td>
<td style="vertical-align: top; text-align: left">0.64872</td>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S4-FFNN</td>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">0.0001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00181</td>
<td style="vertical-align: top; text-align: left">63.16%</td>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">4</td>
<td style="vertical-align: top; text-align: left">/</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S4-LSTM</td>
<td style="vertical-align: top; text-align: left">40</td>
<td style="vertical-align: top; text-align: left">0.01</td>
<td style="vertical-align: top; text-align: left">1527</td>
<td style="vertical-align: top; text-align: left">0.00201</td>
<td style="vertical-align: top; text-align: left">48.68%</td>
<td style="vertical-align: top; text-align: left">6</td>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">2.1783**</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S4-CNN</td>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00154</td>
<td style="vertical-align: top; text-align: left">49.25%</td>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">9</td>
<td style="vertical-align: top; text-align: left">2.4464**</td>
<td style="vertical-align: top; text-align: left">2.2327**</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.00179</td>
<td style="vertical-align: top; text-align: left">0.53699</td>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S5-FFNN</td>
<td style="vertical-align: top; text-align: left">25</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.00211</td>
<td style="vertical-align: top; text-align: left">46.81%</td>
<td style="vertical-align: top; text-align: left">7</td>
<td style="vertical-align: top; text-align: left">12</td>
<td style="vertical-align: top; text-align: left">/</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S5-LSTM</td>
<td style="vertical-align: top; text-align: left">250</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.00213</td>
<td style="vertical-align: top; text-align: left">51.06%</td>
<td style="vertical-align: top; text-align: left">8</td>
<td style="vertical-align: top; text-align: left">8</td>
<td style="vertical-align: top; text-align: left">0.7360</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S5-CNN</td>
<td style="vertical-align: top; text-align: left">12</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00227</td>
<td style="vertical-align: top; text-align: left">51.76%</td>
<td style="vertical-align: top; text-align: left">9</td>
<td style="vertical-align: top; text-align: left">7</td>
<td style="vertical-align: top; text-align: left">4.1886***</td>
<td style="vertical-align: top; text-align: left">4.1958***</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.00217</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.49879</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Note: *, ** and *** indicate significance at the 0.1, 0.05 and 0.01 levels respectively.</p>
<p>Source: The authors’ calculations in Python and R.</p>
</table-wrap-foot>
</table-wrap>
<p>If the accuracy (ACC) of the models is analysed (Table <xref rid="j_infor561_tab_007">7</xref>), the top three models come from Subset 3, characterized by a slow and steady increase in Bitcoin prices. Excellent results come from CNN in Subset 2, meaning that CNN was able to capture the direction of Bitcoin prices in the downturn period at the end of 2017 and the beginning of 2018. Under similar conditions in Subset 5, CNN performed slightly better than the other models. Additionally, FFNN was able to capture the direction of Bitcoin price movements in Subset 4, characterized by a significant slump in Bitcoin prices at the beginning of the Covid-19 crisis followed by a sharp increase. FFNNs were shown to have good generalization power in Spilak (<xref ref-type="bibr" rid="j_infor561_ref_038">2018</xref>), and the same is confirmed in this paper for the extremely volatile period of Bitcoin returns. The optimal model in terms of accuracy is the LSTM in Subset 3, reaching an accuracy of 65.52%. This confirms the finding of several studies on the superiority of LSTM models (Lahmiri and Bekiros, <xref ref-type="bibr" rid="j_infor561_ref_024">2019</xref>; Ji <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_022">2019</xref>; Spilak, <xref ref-type="bibr" rid="j_infor561_ref_038">2018</xref>).</p>
<p>The most commonly used learning rate across all models is 0.001; this rate allows the learning algorithm to converge (Šestanović and Arnerić, <xref ref-type="bibr" rid="j_infor561_ref_041">2020</xref>). In addition, the LSTM models used the largest numbers of neurons and the smallest batch sizes, whereas the CNN models used the second largest numbers of neurons and a batch size of 32. The three models estimated on the largest subset are the ones that used the largest numbers of neurons; however, their predictive performances are among the worst. This confirms that a smaller number of hidden neurons leads to optimal models with good predictive performance.</p>
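<p>The hyperparameter search behind these comparisons can be organized as a plain grid over learning rates, batch sizes and neuron counts. The sketch below is illustrative only: the learning rates and batch sizes are those of Table 6, the neuron candidates are one assumed row of Table 5, and the full study covers more combinations (729 models in total).</p>

```python
from itertools import product

learning_rates = [0.01, 0.001, 0.0001]   # from Table 6
batch_sizes = [2, 32, "set size"]        # "set size" = full-batch training
neurons = [3, 4, 7, 10, 11, 15]          # one assumed candidate row of Table 5

# every (learning rate, batch size, neuron count) combination to be trained
grid = list(product(learning_rates, batch_sizes, neurons))
print(len(grid))  # → 54
```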
</sec>
<sec id="j_infor561_s_013">
<label>4.3</label>
<title>Comparison of Models for Tweets Dataset</title>
<table-wrap id="j_infor561_tab_008">
<label>Table 8</label>
<caption>
<p>The optimal models for each subset (S2 to S5) for Tweets along with tuned hyperparameters, performance measures and Diebold-Mariano test.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Subsets</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">No. of neurons</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Learning rate</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Batch size</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Test MSE</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Test ACC</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Test MSE rank</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Test ACC rank</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">FFNN</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">LSTM</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">S2-FFNN</td>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">0.0001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00506</td>
<td style="vertical-align: top; text-align: left">57.14%</td>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">6</td>
<td style="vertical-align: top; text-align: left">/</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S2-LSTM</td>
<td style="vertical-align: top; text-align: left">100</td>
<td style="vertical-align: top; text-align: left">0.0001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00530</td>
<td style="vertical-align: top; text-align: left">54.29%</td>
<td style="vertical-align: top; text-align: left">12</td>
<td style="vertical-align: top; text-align: left">9</td>
<td style="vertical-align: top; text-align: left">0.3717</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S2-CNN</td>
<td style="vertical-align: top; text-align: left">11</td>
<td style="vertical-align: top; text-align: left">0.0001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00523</td>
<td style="vertical-align: top; text-align: left">61.54%</td>
<td style="vertical-align: top; text-align: left">11</td>
<td style="vertical-align: top; text-align: left">4</td>
<td style="vertical-align: top; text-align: left">1.2283</td>
<td style="vertical-align: top; text-align: left">1.2542</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.00520</td>
<td style="vertical-align: top; text-align: left">0.57656</td>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S3-FFNN</td>
<td style="vertical-align: top; text-align: left">16</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">1175</td>
<td style="vertical-align: top; text-align: left">0.00153</td>
<td style="vertical-align: top; text-align: left">65.52%</td>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">/</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S3-LSTM</td>
<td style="vertical-align: top; text-align: left">80</td>
<td style="vertical-align: top; text-align: left">0.01</td>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.00159</td>
<td style="vertical-align: top; text-align: left">65.52%</td>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">0.4819</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S3-CNN</td>
<td style="vertical-align: top; text-align: left">8</td>
<td style="vertical-align: top; text-align: left">0.01</td>
<td style="vertical-align: top; text-align: left">1175</td>
<td style="vertical-align: top; text-align: left">0.00127</td>
<td style="vertical-align: top; text-align: left">63.27%</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">2.1081**</td>
<td style="vertical-align: top; text-align: left">2.1710**</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.00146</td>
<td style="vertical-align: top; text-align: left">0.64767</td>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S4-FFNN</td>
<td style="vertical-align: top; text-align: left">20</td>
<td style="vertical-align: top; text-align: left">0.01</td>
<td style="vertical-align: top; text-align: left">1527</td>
<td style="vertical-align: top; text-align: left">0.00178</td>
<td style="vertical-align: top; text-align: left">56.58%</td>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">7</td>
<td style="vertical-align: top; text-align: left">/</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S4-LSTM</td>
<td style="vertical-align: top; text-align: left">70</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00209</td>
<td style="vertical-align: top; text-align: left">48.68%</td>
<td style="vertical-align: top; text-align: left">7</td>
<td style="vertical-align: top; text-align: left">11</td>
<td style="vertical-align: top; text-align: left">0.4705</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S4-CNN</td>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">0.0001</td>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.00172</td>
<td style="vertical-align: top; text-align: left">40.30%</td>
<td style="vertical-align: top; text-align: left">4</td>
<td style="vertical-align: top; text-align: left">12</td>
<td style="vertical-align: top; text-align: left">2.9165***</td>
<td style="vertical-align: top; text-align: left">2.5437**</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.00187</td>
<td style="vertical-align: top; text-align: left">0.48521</td>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"/>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S5-FFNN</td>
<td style="vertical-align: top; text-align: left">12</td>
<td style="vertical-align: top; text-align: left">0.01</td>
<td style="vertical-align: top; text-align: left">1893</td>
<td style="vertical-align: top; text-align: left">0.00208</td>
<td style="vertical-align: top; text-align: left">56.38%</td>
<td style="vertical-align: top; text-align: left">6</td>
<td style="vertical-align: top; text-align: left">8</td>
<td style="vertical-align: top; text-align: left">/</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S5-LSTM</td>
<td style="vertical-align: top; text-align: left">30</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.00214</td>
<td style="vertical-align: top; text-align: left">50.00%</td>
<td style="vertical-align: top; text-align: left">8</td>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">1.5761</td>
<td style="vertical-align: top; text-align: left">/</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S5-CNN</td>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">0.0001</td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.00225</td>
<td style="vertical-align: top; text-align: left">58.82%</td>
<td style="vertical-align: top; text-align: left">9</td>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">4.2440***</td>
<td style="vertical-align: top; text-align: left">3.7034***</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.00216</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.55069</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Note: *, ** and *** indicate significance at the 0.1, 0.05 and 0.01 levels respectively.</p>
<p>Source: The authors’ calculations in Python and R.</p>
</table-wrap-foot>
</table-wrap>
<p>Three NN architectures with different parameter settings are estimated on four subsets of the dataset that includes Tweets as the attractiveness measure. For each NN structure and subset, the model with the lowest MSE is selected; its tuned hyperparameters, performance measures and Diebold-Mariano (DM) test results for predictive performance are shown in Table <xref rid="j_infor561_tab_008">8</xref>.</p>
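<p>The DM comparisons reported in the tables can be sketched as follows. This is a simplified illustration under squared-error loss with a plain normal approximation and no autocorrelation (HAC) correction, not the authors’ exact implementation:</p>

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2):
    """Diebold-Mariano test for equal predictive accuracy (squared-error loss).

    e1, e2 : forecast errors of two competing models on the same test set.
    Returns the DM statistic and a two-sided p-value (normal approximation,
    no autocorrelation correction -- a deliberate simplification).
    A positive statistic means model 1 has the larger loss.
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    dbar = d.mean()
    var_dbar = d.var(ddof=1) / len(d)               # variance of the mean differential
    dm = dbar / np.sqrt(var_dbar)
    p = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, p
```

<p>With this sign convention, a large positive statistic rejects equal accuracy in favour of the second model having smaller squared errors.</p>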
<p>Based on the lowest MSE in the test set, the three lowest ranks are again assigned to the models developed on Subset 2, i.e. during the downturn of the cryptocurrency market. In this Subset, FFNN has the best predictive performance in terms of MSE; however, according to the DM test, there is no significant difference between the NN models. The best results are obtained by the models associated with Subset 3, i.e. in bullish market conditions, where CNN has the lowest MSE, significantly lower than both FFNN and LSTM according to the DM test; there is no statistically significant difference between FFNN and LSTM. The remaining ranks are divided between the models developed on Subsets 4 and 5, with the Subset 4 models slightly predominating. In Subset 4, i.e. upward market conditions, CNN has the best predictive performance and is slightly better than both FFNN and LSTM, while there is no statistically significant difference between FFNN and LSTM. Finally, in Subset 5, i.e. the downturn period, FFNN has the lowest MSE; the DM test shows that it is not significantly different from LSTM, but both outperform CNN.</p>
<p>From the standpoint of accuracy, nine out of twelve models reach an accuracy on the test set higher than 50%, i.e. they have good predictive power. The best among them are the FFNN and LSTM models for Subset 3, both reaching an accuracy of 65.52%; comparable results are achieved on the same Subset with CNN. In contrast, during another bullish period covered by Subset 4, FFNN achieves the highest accuracy, whereas the other two models have the worst results overall. In Subsets 2 and 5, which represent periods of downturn, CNN outperforms the other models.</p>
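<p>The accuracy figures above are directional: a model counts as correct when the sign of the predicted return matches the sign of the realized return, so 50% is the coin-flip benchmark. A minimal illustration (the helper name is ours, not from the paper):</p>

```python
import numpy as np

def directional_accuracy(y_true, y_pred):
    """Share of observations where the predicted return has the same
    sign as the realized return (hypothetical helper for illustration)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(np.sign(y_true) == np.sign(y_pred)))
```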
<p>The most common learning rate is 0.0001, while in the previous section it was 0.001. The most common batch size in both sections is 32. None of these models uses the largest available number of neurons in its configuration, which suggests that a lower number of hidden neurons leads to optimal models with good predictive performance.</p>
</sec>
<sec id="j_infor561_s_014">
<label>4.4</label>
<title>Comparative Analysis of Google Trends and Tweets</title>
<p>This section provides a comparative analysis of the two attractiveness measures. The last rows of Tables <xref rid="j_infor561_tab_007">7</xref> and <xref rid="j_infor561_tab_008">8</xref> present the average values of the observed measures across models with all three NN architectures. Comparing the average test performances of the models run on these two datasets, whose attractiveness measures differ, the dataset with Tweets as the attractiveness measure dominates in the downturn periods, i.e. Subsets 2 and 5, as it predicted the direction of Bitcoin prices on average better than Google Trends. However, the highest accuracy is reached in Subset 3 and is the same for both variables; moreover, the average accuracy of the models in Subset 3 is nearly identical for both attractiveness measures. Google Trends notably outperforms Tweets only in Subset 4. Overall, therefore, the dataset with Tweets as the attractiveness measure enabled the models to attain superior accuracy. In terms of MSE, according to the DM test, there is no statistically significant difference between the two attractiveness measures. The results of the DM test are given in Table <xref rid="j_infor561_tab_009">9</xref>.</p>
<table-wrap id="j_infor561_tab_009">
<label>Table 9</label>
<caption>
<p>Diebold-Mariano test for comparison of Google Trends and Tweets through subsets.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Google Trends v. Tweets</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">FFNN</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">LSTM</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">CNN</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">S2</td>
<td style="vertical-align: top; text-align: left">0.7141</td>
<td style="vertical-align: top; text-align: left">1.3319</td>
<td style="vertical-align: top; text-align: left">1.1741</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S3</td>
<td style="vertical-align: top; text-align: left">0.1654</td>
<td style="vertical-align: top; text-align: left">0.0195</td>
<td style="vertical-align: top; text-align: left">0.1654</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">S4</td>
<td style="vertical-align: top; text-align: left">0.8715</td>
<td style="vertical-align: top; text-align: left">1.5630</td>
<td style="vertical-align: top; text-align: left">0.1183</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">S5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.2376</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">1.7107*</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">1.5155</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Note: * indicates significance at the 0.1 level.</p>
<p>Source: The authors’ calculations in Python and R.</p>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec id="j_infor561_s_015">
<label>5</label>
<title>Discussion</title>
<p>The models with the best forecasting performance on both observed measures are those in Subset 3, which is characterized by a price increase with lower volatility. This confirms the findings of previous research, in which models tested during periods of stable Bitcoin price growth performed well. Subset 4 was characterized by a significant slump in Bitcoin prices due to the uncertainty of the Covid-19 crisis, followed by a more volatile price increase. On average, all NNs predicted the movement properly; however, predicting direction was more challenging for the more complex NNs, as FFNN outperformed the other NNs in direction forecasting. This result comes as a surprise since it contradicts previous findings that CNN outperforms other NN architectures (Cavalli and Amoretti, <xref ref-type="bibr" rid="j_infor561_ref_007">2021</xref>; Zhang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_046">2021</xref>; Šestanović and Kalinić Milićević, <xref ref-type="bibr" rid="j_infor561_ref_042">2023</xref>). In Subsets 2 and 5, which cover the significant downturn in Bitcoin prices, including extremely volatile periods, CNN managed to capture price fluctuations, while FFNN dominated on average in predicting Bitcoin returns. CNNs have advantages inherent in their architecture, incorporating an FFNN after the convolutional layer; this allows a lower number of hidden neurons and higher predictive performance while avoiding overfitting. CNNs are also robust, need less training time than RNNs or FFNNs, and can reduce model complexity (Madaeni <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor561_ref_028">2022</xref>). 
Based on accuracy, the LSTM architecture is superior to all others, confirming the findings of Ji <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor561_ref_022">2019</xref>) and Spilak (<xref ref-type="bibr" rid="j_infor561_ref_038">2018</xref>). CNN outperforms the other models in the downturn periods, while FFNN outperforms them in bullish market conditions. This suggests that simpler NNs can be used for predictions in a bullish market, while more complex NNs should be used in a bearish market. The MSE results lead to diametrically opposed conclusions: MSE and accuracy do not always agree on the optimal prediction model. Since MSE comprises both bias and variance, when one estimator has a lower MSE than another it is not known whether this is due to lower bias or to lower variance (i.e. higher precision). Therefore, further investigation is needed to determine whether it is more appropriate to evaluate regression models through these two perspectives or simply to use classification models to find the model that best predicts price direction.</p>
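<p>The bias-variance decomposition of MSE invoked above can be verified numerically. The data here are synthetic and purely illustrative (a deliberately biased but precise estimator), not drawn from the paper:</p>

```python
import numpy as np

# MSE decomposes exactly as MSE = bias^2 + variance.
# Synthetic illustration: estimates that are biased (offset +0.5)
# but precise (small spread).
rng = np.random.default_rng(0)
truth = 0.0
estimates = truth + 0.5 + rng.normal(scale=0.1, size=100_000)

bias = estimates.mean() - truth
variance = estimates.var()                 # population variance (ddof=0)
mse = np.mean((estimates - truth) ** 2)
# mse equals bias**2 + variance up to floating-point rounding, so a low
# MSE alone cannot reveal which of the two components is small.
```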
</sec>
<sec id="j_infor561_s_016">
<label>6</label>
<title>Conclusion</title>
<p>In this paper, the predictive performances of three commonly used NN models were compared using different performance measures on different subsets of datasets, differentiated by two attractiveness measures. These subsets entail different market conditions, i.e. bearish or bullish periods. Thus, this paper examined the ability of these machine learning models to perform in all types of environments, including bullish, bearish and stable periods, as well as periods characterized by high volatility. All NNs performed best in a bullish market environment, where CNN stood out as the optimal NN model by MSE, while FFNN and LSTM emerged as the optimal models in direction forecasting. However, based on accuracy, CNN outperformed the other models in downturn periods, while FFNN outperformed the other models in bullish market conditions; the MSE results produced diametrically opposed conclusions. Moreover, based on accuracy, the dataset with Tweets as the attractiveness measure outperformed Google Trends, whereas based on MSE the results did not differ significantly. Finally, using a lower number of hidden neurons, as well as lower learning rates and smaller batch sizes, yielded optimal results. Future research directions include forecasting cryptocurrency returns using sentiment-enriched data. Additionally, since cryptocurrency prices are increasingly reactive to macroeconomic shocks, it is proposed to examine which macroeconomic variables have the greatest influence on their movement. Finally, sophisticated neural network models can be used to predict cryptocurrency prices, returns, direction and volatility in a comprehensive manner.</p>
</sec>
</body>
<back>
<ref-list id="j_infor561_reflist_001">
<title>References</title>
<ref id="j_infor561_ref_001">
<mixed-citation publication-type="journal"><string-name><surname>Abu Bakar</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Rosbi</surname>, <given-names>S.</given-names></string-name> (<year>2017</year>). <article-title>Autoregressive integrated moving average (ARIMA) model for forecasting cryptocurrency exchange rate in high volatility environment: a new insight of bitcoin transaction</article-title>. <source>International Journal of Advanced Engineering Research and Science</source>, <volume>4</volume>(<issue>11</issue>), <fpage>130</fpage>–<lpage>137</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.22161/ijaers.4.11.20" xlink:type="simple">https://doi.org/10.22161/ijaers.4.11.20</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_002">
<mixed-citation publication-type="journal"><string-name><surname>Aljinović</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Marasović</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Šestanović</surname>, <given-names>T.</given-names></string-name> (<year>2021</year>). <article-title>Cryptocurrency portfolio selection—a multicriteria approach</article-title>. <source>Mathematics</source>, <volume>9</volume>(<issue>14</issue>), <fpage>1677</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/math9141677" xlink:type="simple">https://doi.org/10.3390/math9141677</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_003">
<mixed-citation publication-type="journal"><string-name><surname>Aljinović</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Šestanović</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Škrabić Perić</surname>, <given-names>B.</given-names></string-name> (<year>2022</year>). <article-title>A new evidence of the relationship between cryptocurrencies and other assets from the covid-19 crisis</article-title>. <source>Journal of Economics / Ekonomicky casopis</source>, <volume>70</volume>(<issue>7–8</issue>), <fpage>603</fpage>–<lpage>621</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.31577/ekoncas.2022.07-8.03" xlink:type="simple">https://doi.org/10.31577/ekoncas.2022.07-8.03</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_004">
<mixed-citation publication-type="other"><string-name><surname>Azari</surname>, <given-names>A.</given-names></string-name> (<year>2019</year>). Bitcoin Price Prediction: An ARIMA Approach. arXiv:<ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1904.05315">1904.05315</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_005">
<mixed-citation publication-type="journal"><string-name><surname>Bai</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Perron</surname>, <given-names>P.</given-names></string-name> (<year>1998</year>). <article-title>Estimating and testing linear models with multiple structural changes</article-title>. <source>Econometrica</source>, <volume>66</volume>, <fpage>47</fpage>–<lpage>78</lpage>. <comment><uri>http://www.jstor.org/stable/2998540</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_006">
<mixed-citation publication-type="journal"><string-name><surname>Bao</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Yue</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Rao</surname>, <given-names>Y.</given-names></string-name> (<year>2017</year>). <article-title>A deep learning framework for financial time series using stacked autoencoders and long-short term memory</article-title>. <source>PLoS ONE</source>, <volume>12</volume>(<issue>7</issue>), <elocation-id>e0180944</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1371/journal.pone.0180944" xlink:type="simple">https://doi.org/10.1371/journal.pone.0180944</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_007">
<mixed-citation publication-type="journal"><string-name><surname>Cavalli</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Amoretti</surname>, <given-names>M.</given-names></string-name> (<year>2021</year>). <article-title>Cnn-based multivariate data analysis for bitcoin trend prediction</article-title>. <source>Applied Soft Computing</source>, <volume>101</volume>, <elocation-id>107065</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.asoc.2020.107065" xlink:type="simple">https://doi.org/10.1016/j.asoc.2020.107065</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_008">
<mixed-citation publication-type="journal"><string-name><surname>Chen</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Sun</surname>, <given-names>W.</given-names></string-name> (<year>2020</year>). <article-title>Bitcoin price prediction using machine learning: an approach to sample dimension engineering</article-title>. <source>Journal of Computational and Applied Mathematics</source>, <volume>365</volume>, <elocation-id>112395</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.cam.2019.112395" xlink:type="simple">https://doi.org/10.1016/j.cam.2019.112395</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_009">
<mixed-citation publication-type="journal"><string-name><surname>Čorić</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Matijević</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Marković</surname>, <given-names>D.</given-names></string-name> (<year>2023</year>). <article-title>PollenNet – a deep learning approach to predicting airborne pollen concentrations</article-title>. <source>Croatian Operational Research Review</source>, <volume>14</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>13</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.17535/crorr.2023.0001" xlink:type="simple">https://doi.org/10.17535/crorr.2023.0001</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_010">
<mixed-citation publication-type="journal"><string-name><surname>Diebold</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Mariano</surname>, <given-names>R.</given-names></string-name> (<year>1995</year>). <article-title>Comparing predictive accuracy</article-title>. <source>Journal of Business Economic Statistics</source>, <volume>13</volume>, <fpage>253</fpage>–<lpage>263</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/07350015.1995.10524599" xlink:type="simple">https://doi.org/10.1080/07350015.1995.10524599</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_011">
<mixed-citation publication-type="journal"><string-name><surname>Fahmi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Samsudin</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Mustapha</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Razali</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Ahmad Khalid</surname>, <given-names>S.K.</given-names></string-name> (<year>2018</year>). <article-title>Regression based analysis for bitcoin price prediction</article-title>. <source>International Journal of Engineering &amp; Technology</source>, <volume>7</volume>(<issue>4.38</issue>), <fpage>1070</fpage>–<lpage>1073</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.14419/ijet.v7i4.38.27642" xlink:type="simple">https://doi.org/10.14419/ijet.v7i4.38.27642</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_012">
<mixed-citation publication-type="journal"><string-name><surname>Fawaz</surname>, <given-names>H.I.</given-names></string-name>, <string-name><surname>Forestier</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Weber</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Idoumghar</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Muller</surname>, <given-names>P.A.</given-names></string-name> (<year>2019</year>). <article-title>Deep learning for time series classification: a review</article-title>. <source>Data Mining and Knowledge Discovery</source>, <volume>33</volume>, <fpage>917</fpage>–<lpage>963</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/s10618-019-00619-1" xlink:type="simple">https://doi.org/10.1007/s10618-019-00619-1</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_013">
<mixed-citation publication-type="book"><string-name><surname>Franses</surname>, <given-names>P.H.</given-names></string-name>, <string-name><surname>van Dijk</surname>, <given-names>D.</given-names></string-name> (<year>2003</year>). <source>Non-Linear Time Series Models in Empirical Finance</source>. <publisher-name>Cambridge University Press</publisher-name>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1017/CBO9780511754067" xlink:type="simple">https://doi.org/10.1017/CBO9780511754067</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_014">
<mixed-citation publication-type="other"><string-name><surname>Greaves</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Au</surname>, <given-names>B.</given-names></string-name> (<year>2015</year>). Using the bitcoin transaction graph to predict the price of bitcoin. <ext-link ext-link-type="uri" xlink:href="http://snap.stanford.edu/class/cs224w-2015/projects_2015/Using_the_Bitcoin_Transaction_Graph_to_Predict_the_Price_of_Bitcoin.pdf">http://snap.stanford.edu/class/cs224w-2015/projects_2015/Using_the_Bitcoin_Transaction_Graph_to_Predict_the_Price_of_Bitcoin.pdf</ext-link>. [29.3.2021].</mixed-citation>
</ref>
<ref id="j_infor561_ref_015">
<mixed-citation publication-type="journal"><string-name><surname>Han</surname>, <given-names>L.H.N.</given-names></string-name>, <string-name><surname>Hien</surname>, <given-names>N.L.H.</given-names></string-name>, <string-name><surname>Huy</surname>, <given-names>L.V.</given-names></string-name>, <string-name><surname>Hieu</surname>, <given-names>N.V.</given-names></string-name> (<year>2024</year>). <article-title>A deep learning model for multi-domain MRI synthesis using generative adversarial networks</article-title>. <source>Informatica</source>, <volume>35</volume>(<issue>2</issue>), <fpage>283</fpage>–<lpage>309</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.15388/24-INFOR556" xlink:type="simple">https://doi.org/10.15388/24-INFOR556</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_016">
<mixed-citation publication-type="journal"><string-name><surname>Hegde</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Rokseth</surname>, <given-names>B.</given-names></string-name> (<year>2020</year>). <article-title>Applications of machine learning methods for engineering risk assessment – a review</article-title>. <source>Safety Science</source>, <volume>122</volume>, <elocation-id>104492</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.ssci.2019.09.015" xlink:type="simple">https://doi.org/10.1016/j.ssci.2019.09.015</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_017">
<mixed-citation publication-type="journal"><string-name><surname>Hornik</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Stinchcombe</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>White</surname>, <given-names>H.</given-names></string-name> (<year>1989</year>). <article-title>Multilayer feedforward networks are universal approximators</article-title>. <source>Neural Networks</source>, <volume>2</volume>(<issue>5</issue>), <fpage>359</fpage>–<lpage>366</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/0893-6080(89)90020-8" xlink:type="simple">https://doi.org/10.1016/0893-6080(89)90020-8</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_018">
<mixed-citation publication-type="journal"><string-name><surname>Hwarng</surname>, <given-names>H.</given-names></string-name> (<year>2001</year>). <article-title>Insights into neural-network forecasting of time series corresponding to ARMA(p, q) structures</article-title>. <source>Omega</source>, <volume>29</volume>(<issue>3</issue>), <fpage>273</fpage>–<lpage>289</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/S0305-0483(01)00022-6" xlink:type="simple">https://doi.org/10.1016/S0305-0483(01)00022-6</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_019">
<mixed-citation publication-type="journal"><string-name><surname>Indera</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Yassin</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Zabidi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Rizman</surname>, <given-names>Z.</given-names></string-name> (<year>2017</year>). <article-title>Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using pso and moving average technical indicators</article-title>. <source>Journal of Fundamental and Applied Sciences</source>, <volume>9</volume>(<issue>3S</issue>), <fpage>791</fpage>–<lpage>808</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.4314/jfas.v9i3s.61" xlink:type="simple">https://doi.org/10.4314/jfas.v9i3s.61</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_020">
<mixed-citation publication-type="journal"><string-name><surname>Jang</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Lee</surname>, <given-names>J.</given-names></string-name> (<year>2017</year>). <article-title>An empirical study on modeling and prediction of bitcoin prices with bayesian neural networks based on blockchain information</article-title>. <source>IEEE Access</source>, <volume>6</volume>, <fpage>5427</fpage>–<lpage>5437</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ACCESS.2017.2779181" xlink:type="simple">https://doi.org/10.1109/ACCESS.2017.2779181</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_021">
<mixed-citation publication-type="journal"><string-name><surname>Jaquart</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Köpke</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Weinhardt</surname>, <given-names>C.</given-names></string-name> (<year>2022</year>). <article-title>Machine learning for cryptocurrency market prediction and trading</article-title>. <source>The Journal of Finance and Data Science</source>, <volume>8</volume>, <fpage>331</fpage>–<lpage>352</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.jfds.2022.12.001" xlink:type="simple">https://doi.org/10.1016/j.jfds.2022.12.001</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_022">
<mixed-citation publication-type="journal"><string-name><surname>Ji</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Kim</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Im</surname>, <given-names>H.</given-names></string-name> (<year>2019</year>). <article-title>A comparative study of bitcoin price prediction using deep learning</article-title>. <source>Mathematics</source>, <volume>7</volume>(<issue>10</issue>), <fpage>898</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/math7100898" xlink:type="simple">https://doi.org/10.3390/math7100898</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_023">
<mixed-citation publication-type="journal"><string-name><surname>Kalinić Milićević</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Marasović</surname>, <given-names>B.</given-names></string-name> (<year>2023</year>). <article-title>What factors influence bitcoin’s daily price direction from the perspective of machine learning classifiers?</article-title> <source>Croatian Operational Research Review</source>, <volume>14</volume>(<issue>2</issue>), <fpage>163</fpage>–<lpage>177</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.17535/crorr.2023.0014" xlink:type="simple">https://doi.org/10.17535/crorr.2023.0014</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_024">
<mixed-citation publication-type="journal"><string-name><surname>Lahmiri</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Bekiros</surname>, <given-names>S.</given-names></string-name> (<year>2019</year>). <article-title>Cryptocurrency forecasting with deep learning chaotic neural networks</article-title>. <source>Chaos, Solitons &amp; Fractals</source>, <volume>118</volume>, <fpage>35</fpage>–<lpage>40</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.chaos.2018.11.014" xlink:type="simple">https://doi.org/10.1016/j.chaos.2018.11.014</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_025">
<mixed-citation publication-type="journal"><string-name><surname>Li</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Dai</surname>, <given-names>W.</given-names></string-name> (<year>2020</year>). <article-title>Bitcoin price forecasting method based on CNN-LSTM hybrid neural network model</article-title>. <source>The Journal of Engineering</source>, <volume>2020</volume>(<issue>13</issue>), <fpage>344</fpage>–<lpage>347</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1049/joe.2019.1203" xlink:type="simple">https://doi.org/10.1049/joe.2019.1203</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_026">
<mixed-citation publication-type="journal"><string-name><surname>Liu</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Tsyvinski</surname>, <given-names>A.</given-names></string-name> (<year>2020</year>). <article-title>Risks and returns of cryptocurrency</article-title>. <source>The Review of Financial Studies</source>, <volume>34</volume>(<issue>6</issue>), <fpage>2689</fpage>–<lpage>2727</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1093/rfs/hhaa113" xlink:type="simple">https://doi.org/10.1093/rfs/hhaa113</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_027">
<mixed-citation publication-type="journal"><string-name><surname>Livieris</surname>, <given-names>I.E.</given-names></string-name>, <string-name><surname>Kiriakidou</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Stavroyiannis</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Pintelas</surname>, <given-names>P.</given-names></string-name> (<year>2021</year>). <article-title>An advanced CNN-LSTM model for cryptocurrency forecasting</article-title>. <source>Electronics</source>, <volume>10</volume>(<issue>3</issue>), <fpage>287</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/electronics10030287" xlink:type="simple">https://doi.org/10.3390/electronics10030287</ext-link>. <comment><uri>https://www.mdpi.com/2079-9292/10/3/287</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_028">
<mixed-citation publication-type="journal"><string-name><surname>Madaeni</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Chokmani</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Lhissou</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Homayouni</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Gauthier</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Tolszczuk-Leclerc</surname>, <given-names>S.</given-names></string-name> (<year>2022</year>). <article-title>Convolutional neural network and long short-term memory models for ice-jam predictions</article-title>. <source>The Cryosphere</source>, <volume>16</volume>(<issue>4</issue>), <fpage>1447</fpage>–<lpage>1468</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.5194/tc-16-1447-2022" xlink:type="simple">https://doi.org/10.5194/tc-16-1447-2022</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_029">
<mixed-citation publication-type="journal"><string-name><surname>Moshiri</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Cameron</surname>, <given-names>N.</given-names></string-name> (<year>2000</year>). <article-title>Neural network versus econometric models in forecasting inflation</article-title>. <source>Journal of Forecasting</source>, <volume>19</volume>(<issue>3</issue>), <fpage>201</fpage>–<lpage>217</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1002/(sici)1099-131x(200004)19:3%3C201::aid-for753%3E3.0.co;2-4" xlink:type="simple">https://doi.org/10.1002/(sici)1099-131x(200004)19:3&lt;201::aid-for753&gt;3.0.co;2-4</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_030">
<mixed-citation publication-type="journal"><string-name><surname>Pabuccu</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Ongan</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Ongan</surname>, <given-names>A.</given-names></string-name> (<year>2020</year>). <article-title>Forecasting the movements of bitcoin prices: an application of machine learning algorithms</article-title>. <source>Quantitative Finance and Economics</source>, <volume>4</volume>(<issue>4</issue>), <fpage>679</fpage>–<lpage>692</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3934/qfe.2020031" xlink:type="simple">https://doi.org/10.3934/qfe.2020031</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_031">
<mixed-citation publication-type="other"><string-name><surname>Paranhos</surname>, <given-names>L.</given-names></string-name> (<year>2021</year>). Predicting Inflation with Neural Networks. arXiv:<ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2104.03757">2104.03757</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_032">
<mixed-citation publication-type="book"><string-name><surname>Patterson</surname>, <given-names>D.W.</given-names></string-name> (<year>1998</year>). <source>Artificial Neural Networks: Theory and Applications</source>, <edition>1</edition>st ed. <publisher-name>Prentice Hall PTR</publisher-name>, <publisher-loc>USA</publisher-loc>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_033">
<mixed-citation publication-type="journal"><string-name><surname>Polasik</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Piotrowska</surname>, <given-names>A.I.</given-names></string-name>, <string-name><surname>Wisniewski</surname>, <given-names>T.P.</given-names></string-name>, <string-name><surname>Kotkowski</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Lightfoot</surname>, <given-names>G.</given-names></string-name> (<year>2015</year>). <article-title>Price fluctuations and the use of bitcoin: an empirical inquiry</article-title>. <source>International Journal of Electronic Commerce</source>, <volume>20</volume>(<issue>1</issue>), <fpage>9</fpage>–<lpage>49</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/10864415.2016.1061413" xlink:type="simple">https://doi.org/10.1080/10864415.2016.1061413</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_034">
<mixed-citation publication-type="other"><string-name><surname>Poyser</surname>, <given-names>O.</given-names></string-name> (<year>2017</year>). Exploring the Determinants of Bitcoin’s Price: An Application of Bayesian Structural Time Series. arXiv:<ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1706.01437">1706.01437</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_035">
<mixed-citation publication-type="journal"><string-name><surname>Purwandari</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Sigalingging</surname>, <given-names>J.W.C.</given-names></string-name>, <string-name><surname>Cenggoro</surname>, <given-names>T.W.</given-names></string-name>, <string-name><surname>Pardamean</surname>, <given-names>B.</given-names></string-name> (<year>2021</year>). <article-title>Multi-class weather forecasting from Twitter using machine learning approaches</article-title>. <source>Procedia Computer Science</source>, <volume>179</volume>, <fpage>47</fpage>–<lpage>54</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.procs.2020.12.006" xlink:type="simple">https://doi.org/10.1016/j.procs.2020.12.006</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_036">
<mixed-citation publication-type="other"><string-name><surname>Sezer</surname>, <given-names>O.B.</given-names></string-name>, <string-name><surname>Gudelek</surname>, <given-names>M.U.</given-names></string-name>, <string-name><surname>Ozbayoglu</surname>, <given-names>A.M.</given-names></string-name> (<year>2020</year>). Financial Time Series Forecasting with Deep Learning: A Systematic Literature Review: 2005–2019. arXiv:<ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1911.13288">1911.13288</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_037">
<mixed-citation publication-type="journal"><string-name><surname>Sovbetov</surname>, <given-names>Y.</given-names></string-name> (<year>2018</year>). <article-title>Factors influencing cryptocurrency prices: evidence from bitcoin, ethereum, dash, litcoin, and monero</article-title>. <source>Journal of Economics and Financial Analysis</source>, <volume>2</volume>(<issue>2</issue>), <fpage>1</fpage>–<lpage>27</lpage>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_038">
<mixed-citation publication-type="other"><string-name><surname>Spilak</surname>, <given-names>B.</given-names></string-name> (<year>2018</year>). <italic>Deep Neural Networks for Cryptocurrencies Price Prediction</italic>. PhD thesis.</mixed-citation>
</ref>
<ref id="j_infor561_ref_039">
<mixed-citation publication-type="chapter"><string-name><surname>Šestanović</surname>, <given-names>T.</given-names></string-name> (<year>2021</year>). <chapter-title>Bitcoin direction forecasting using neural networks</chapter-title>. In: <source>The 16th International Symposium on Operational Research SOR’21, Proceedings</source>, pp. <fpage>557</fpage>–<lpage>562</lpage>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_040">
<mixed-citation publication-type="journal"><string-name><surname>Šestanović</surname>, <given-names>T.</given-names></string-name> (<year>2024</year>). <article-title>A comprehensive approach to Bitcoin forecasting using neural networks</article-title>. <source>Ekonomski pregled</source>, <volume>75</volume>(<issue>1</issue>), <fpage>62</fpage>–<lpage>85</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.32910/ep.75.1.3" xlink:type="simple">https://doi.org/10.32910/ep.75.1.3</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_041">
<mixed-citation publication-type="journal"><string-name><surname>Šestanović</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Arnerić</surname>, <given-names>J.</given-names></string-name> (<year>2020</year>). <article-title>Neural network structure identification in inflation forecasting</article-title>. <source>Journal of Forecasting</source>, <volume>40</volume>(<issue>1</issue>), <fpage>62</fpage>–<lpage>79</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1002/for.2698" xlink:type="simple">https://doi.org/10.1002/for.2698</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_042">
<mixed-citation publication-type="chapter"><string-name><surname>Šestanović</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kalinić Milićević</surname>, <given-names>T.</given-names></string-name> (<year>2023</year>). <chapter-title>A MCDM approach to machine learning model selection: Bitcoin return forecasting</chapter-title>. In: <source>Proceedings of the 17th International Symposium on Operational Research in Slovenia SOR’23</source>, <conf-loc>Ljubljana, University of Maribor</conf-loc>, pp. <fpage>77</fpage>–<lpage>82</lpage>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_043">
<mixed-citation publication-type="journal"><string-name><surname>Trimborn</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Härdle</surname>, <given-names>W.K.</given-names></string-name> (<year>2019</year>). <article-title>Investing with cryptocurrencies – a liquidity constrained investment approach</article-title>. <source>Journal of Financial Econometrics</source>, <volume>18</volume>(<issue>2</issue>), <fpage>280</fpage>–<lpage>306</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1093/jjfinec/nbz016" xlink:type="simple">https://doi.org/10.1093/jjfinec/nbz016</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_044">
<mixed-citation publication-type="journal"><string-name><surname>Uras</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Marchesi</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Marchesi</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Tonelli</surname>, <given-names>R.</given-names></string-name> (<year>2020</year>). <article-title>Forecasting bitcoin closing price series using linear regression and neural networks models</article-title>. <source>PeerJ Computer Science</source>, <volume>6</volume>, <elocation-id>e279</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.7717/peerj-cs.279" xlink:type="simple">https://doi.org/10.7717/peerj-cs.279</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_045">
<mixed-citation publication-type="journal"><string-name><surname>Walther</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Klein</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Bouri</surname>, <given-names>E.</given-names></string-name> (<year>2019</year>). <article-title>Exogenous drivers of bitcoin and cryptocurrency volatility – a mixed data sampling approach to forecasting</article-title>. <source>Journal of International Financial Markets, Institutions and Money</source>, <volume>63</volume>, <elocation-id>101133</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.intfin.2019.101133" xlink:type="simple">https://doi.org/10.1016/j.intfin.2019.101133</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor561_ref_046">
<mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Dai</surname>, <given-names>H.N.</given-names></string-name>, <string-name><surname>Zhou</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Mondal</surname>, <given-names>S.K.</given-names></string-name>, <string-name><surname>García</surname>, <given-names>M.M.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>H.</given-names></string-name> (<year>2021</year>). <article-title>Forecasting cryptocurrency price using convolutional neural networks with weighted and attentive memory channels</article-title>. <source>Expert Systems with Applications</source>, <volume>183</volume>, <elocation-id>115378</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.eswa.2021.115378" xlink:type="simple">https://doi.org/10.1016/j.eswa.2021.115378</ext-link>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>
