<?xml version="1.0" encoding="utf-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">INFORMATICA</journal-id>
<journal-title-group><journal-title>Informatica</journal-title></journal-title-group>
<issn pub-type="epub">1822-8844</issn><issn pub-type="ppub">0868-4952</issn><issn-l>0868-4952</issn-l>
<publisher>
<publisher-name>Vilnius University</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">INFO1220</article-id>
<article-id pub-id-type="doi">10.15388/Informatica.2019.209</article-id>
<article-categories><subj-group subj-group-type="heading">
<subject>Research Article</subject></subj-group></article-categories>
<title-group>
<article-title>Hyperspectral Image Classification Using Isomap with SMACOF</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Orts Gómez</surname><given-names>Francisco José</given-names></name><email xlink:href="francisco.orts@ual.es">francisco.orts@ual.es</email><xref ref-type="aff" rid="j_info1220_aff_001">1</xref><xref ref-type="corresp" rid="cor1">∗</xref><bio>
<p><bold>F.J. Orts Gómez</bold> is a predoctoral researcher at the Informatics Department at the University of Almería, Spain. He obtained a master's degree in computer engineering from the University of Almería and is currently doing his PhD thanks to the Spanish FPI program. His publications and more information about him can be found at <uri>http://hpca.ual.es/~forts/</uri>. His research interests are multidimensional scaling, quantum computation and high performance computing.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Ortega López</surname><given-names>Gloria</given-names></name><email xlink:href="gloriaortega@uma.es">gloriaortega@uma.es</email><xref ref-type="aff" rid="j_info1220_aff_002">2</xref><bio>
<p><bold>G. Ortega López</bold> (<uri>https://sites.google.com/site/gloriaortegalopez/</uri>) received the PhD degree from the University of Almería (Spain) in 2014. Since 2009, she has been working as a member of the TIC-146 supercomputing-algorithms research group. Currently, she has a post-doctoral fellowship at the University of Málaga, and her current research work is focused on high performance computing and optimization. Her research interests include the study of strategies for balancing the workload on heterogeneous systems, the parallelization of optimization problems, and image processing.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Filatovas</surname><given-names>Ernestas</given-names></name><email xlink:href="ernest.filatov@gmail.com">ernest.filatov@gmail.com</email><xref ref-type="aff" rid="j_info1220_aff_003">3</xref><bio>
<p><bold>E. Filatovas</bold> received his PhD in informatics engineering from Vilnius University, Lithuania, in 2012. He is currently a senior researcher at Vilnius University and an associate professor at Vilnius Gediminas Technical University. His main research interests include blockchain technologies, global optimization, multi-objective optimization, multi-objective evolutionary algorithms, multiple criteria decision making, high-performance computing, and image processing. He has published more than 20 scientific papers.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Kurasova</surname><given-names>Olga</given-names></name><email xlink:href="olga.kurasova@mii.vu.lt">olga.kurasova@mii.vu.lt</email><xref ref-type="aff" rid="j_info1220_aff_003">3</xref><bio>
<p><bold>O. Kurasova</bold> received her doctoral degree in computer science (PhD) from the Institute of Mathematics and Informatics jointly with Vytautas Magnus University in 2005. She is currently a principal researcher and professor at the Institute of Data Science and Digital Technologies of Vilnius University. Her research interests include data mining methods, optimization theory and applications, artificial intelligence, neural networks, visualization of multidimensional data, multiple criteria decision support, parallel computing, and image processing. She is the author of more than 70 scientific publications.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Garzón</surname><given-names>Gracia Ester Martín</given-names></name><email xlink:href="gmartin@ual.es">gmartin@ual.es</email><xref ref-type="aff" rid="j_info1220_aff_001">1</xref>
</contrib>
<aff id="j_info1220_aff_001"><label>1</label>Group of Supercomputation-Algorithms, Department of Informatics, <institution>University of Almería</institution>, ceiA3, 04120, Almería, <country>Spain</country></aff>
<aff id="j_info1220_aff_002"><label>2</label>Computer Architecture Department, Campus Teatinos, <institution>Universidad de Málaga</institution>, 29010, Málaga, <country>Spain</country></aff>
<aff id="j_info1220_aff_003"><label>3</label>Institute of Data Science and Digital Technologies, <institution>Vilnius University</institution>, Akademijos str. 4, LT-08663, Vilnius, <country>Lithuania</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>∗</label>Corresponding author.</corresp>
</author-notes>
<pub-date pub-type="ppub"><year>2019</year></pub-date>
<pub-date pub-type="epub"><day>1</day><month>1</month><year>2019</year></pub-date><volume>30</volume><issue>2</issue><fpage>349</fpage><lpage>365</lpage><history><date date-type="received"><month>10</month><year>2018</year></date><date date-type="accepted"><month>1</month><year>2019</year></date></history>
<permissions><copyright-statement>© 2019 Vilnius University</copyright-statement><copyright-year>2019</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions>
<abstract>
<p>The isometric mapping (Isomap) algorithm is often used for analysing hyperspectral images. Isomap makes it possible to reduce such hyperspectral images from a high-dimensional space to a lower-dimensional space, keeping the critical original information. To achieve this objective, Isomap uses the state-of-the-art MultiDimensional Scaling (MDS) method for dimensionality reduction. In this work, we propose to use Isomap with SMACOF, since SMACOF is the most accurate MDS method. A thorough comparison, in terms of accuracy, between Isomap based on an eigen-decomposition process and Isomap based on SMACOF has been carried out using three benchmark hyperspectral images. Moreover, for hyperspectral image classification, three classifiers (support vector machine, <italic>k</italic>-nearest neighbour, and random forest) have been used to compare both Isomap approaches. The experimental investigation has shown that better classification accuracy is obtained by Isomap with SMACOF.</p>
</abstract>
<kwd-group>
<label>Key words</label>
<kwd>dimensionality reduction</kwd>
<kwd>hyperspectral imaging</kwd>
<kwd>isometric mapping (Isomap)</kwd>
<kwd>manifold learning</kwd>
<kwd>SMACOF algorithm</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="j_info1220_s_001">
<label>1</label>
<title>Introduction</title>
<p>HyperSpectral Images (HSIs) contain an exhaustive variety of information about specific characteristics of the materials, with hundreds or even thousands of bands (Borengasser <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_007">2007</xref>). The spectrum of each pixel can be seen as a vector, where each component represents the luminosity of the reflectance value for each spectral band. The set of bands which composes an HSI shows the representation of a scene, but each band individually contains information from a different wavelength range, which can cover both the visible and infrared spectra. The width of each band can be between 5 and 10 nm, depending on the considered sensor. Each material exhibits a different reflectance profile across the bands. Thus, for each point of the image, a specific curve is obtained that provides a lot of information about the corresponding point of the scene. Therefore, to efficiently exploit this information in applications, classification of HSIs is usually performed, where pixels are assigned to one of the classes based on their spectral characteristics.</p>
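<p>As a minimal sketch (with synthetic data and arbitrary shapes, not tied to any particular sensor), the view of each pixel's spectrum as a vector amounts to flattening the hyperspectral cube into an <italic>m</italic> × <italic>b</italic> matrix of pixel spectra:</p>

```python
import numpy as np

# Synthetic hyperspectral cube: 4 x 5 pixels, 200 spectral bands.
rows, cols, bands = 4, 5, 200
cube = np.random.rand(rows, cols, bands)

# Flatten the spatial dimensions: each row of X is the spectrum
# (one reflectance value per band) of a single pixel.
m = rows * cols
X = cube.reshape(m, bands)

print(X.shape)  # (20, 200)
```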
<p>There are many applications which take advantage of the large amount of information provided by hyperspectral sensors, such as remote sensing (Wang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_046">2017</xref>), biotechnology (Asaari <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_003">2018</xref>), medical diagnosis (Leavesley <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_029">2018</xref>), forensic science (Almeida <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_001">2017</xref>), environmental monitoring (Virlet <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_044">2017</xref>), etc. This wealth of information motivates the development of new processing techniques. In addition, many applications which work with HSIs require a fast response. Examples of such applications can be found in the areas of modelling and environmental assessment, detection of military objectives, or prevention and response to risks, such as forest fires, rescue operations, floods or biological threats (Chang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_011">2001</xref>; Manolakis <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_033">2003</xref>).</p>
<p>However, the large amount of information contained in an HSI, which is its main advantage, is also a disadvantage in terms of computational performance. Working with large HSIs involves high computational complexity and requires considerable resources and time (Rizzo <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_039">2005</xref>). On the other hand, it is well known that high-dimensional data spaces are mostly empty, which indicates that the data structure of an HSI exists essentially in a subspace (Plaza <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_037">2005</xref>). Taking these ideas into account, it can be concluded that there is a need (and a possibility) to reduce the size of HSIs. Hence, it is usual to apply techniques that reduce the dimensions of the original HSIs, obtaining reduced images which can be handled more efficiently without losing critical information (Harsanyi and Chang, <xref ref-type="bibr" rid="j_info1220_ref_025">1994</xref>; Bruce <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_010">2002</xref>; Wang and Chang, <xref ref-type="bibr" rid="j_info1220_ref_045">2006</xref>).</p>
<p>Multidimensional Scaling (MDS) consists of a set of techniques which are used to reduce the dimensions of a data set. Such techniques are used in many applications – multiobjective optimization (Filatovas <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_019">2015</xref>), data mining (Medvedev <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_034">2017</xref>; Bernatavičienė <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_006">2007</xref>), marketing (Green, <xref ref-type="bibr" rid="j_info1220_ref_022">1975</xref>), cryptography (Gupta and Ray, <xref ref-type="bibr" rid="j_info1220_ref_024">2015</xref>), a wide variety of mathematical and statistical methods (Granato and Ares, <xref ref-type="bibr" rid="j_info1220_ref_021">2014</xref>), psychology (Rosenberg, <xref ref-type="bibr" rid="j_info1220_ref_040">2014</xref>), etc. These techniques use a mapping function, usually based on Euclidean distances, which is able to find an optimal data representation; however, other distance metrics can also be considered (Fletcher <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_020">2014</xref>). MDS techniques represent data in a low-dimensional space in order to make these data more accessible (Borg and Groenen, <xref ref-type="bibr" rid="j_info1220_ref_008">2005</xref>; Dzemyda <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_017">2013</xref>). For instance, a graphical visualization of the data in <inline-formula id="j_info1220_ineq_001"><alternatives><mml:math>
<mml:mn>2</mml:mn>
<mml:mi mathvariant="italic">D</mml:mi></mml:math><tex-math><![CDATA[$2D$]]></tex-math></alternatives></inline-formula> or <inline-formula id="j_info1220_ineq_002"><alternatives><mml:math>
<mml:mn>3</mml:mn>
<mml:mi mathvariant="italic">D</mml:mi></mml:math><tex-math><![CDATA[$3D$]]></tex-math></alternatives></inline-formula> space allows an easier understanding of the information.</p>
<p>A well-known technique named Isometric mapping (Isomap) generalizes MDS to non-linear manifolds, replacing Euclidean distances by geodesic distances (Bengio <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_005">2004</xref>). Isomap has been used successfully in a multitude of applications, such as HSIs (Li <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>), face recognition (Yang, <xref ref-type="bibr" rid="j_info1220_ref_050">2002a</xref>), biomedical datasets (Lim <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_031">2003</xref>), pattern classification (Yang, <xref ref-type="bibr" rid="j_info1220_ref_051">2002b</xref>), learning multi-class manifolds (Wu and Chan, <xref ref-type="bibr" rid="j_info1220_ref_048">2004</xref>), supervised learning (Pulkkinen <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_038">2011</xref>), etc. Focusing on HSIs, Isomap can be used to reduce them, achieving images with almost the same accuracy as the original but with fewer bands (Li <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>). The main goal here is to reduce the number of bands while keeping the critical information they contain. Isomap is able to find hidden patterns in the bands and to reproduce the same patterns with fewer bands.</p>
<p>Isomap often uses classical scaling, based on an eigen-decomposition, as a part of its process. Classical scaling is an MDS method that reconstructs a configuration from the interpoint distances, achieving good accuracy at a feasible computing cost (Sibson, <xref ref-type="bibr" rid="j_info1220_ref_041">1979</xref>). However, any MDS method could be used.</p>
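<p>As an illustration of classical scaling (a minimal sketch, not the implementation used in the experiments), the configuration is recovered by double-centring the squared distance matrix and keeping its leading eigenpairs:</p>

```python
import numpy as np

def classical_mds(D, s):
    """Classical scaling: embed m points in R^s from an m x m distance matrix D."""
    m = D.shape[0]
    J = np.eye(m) - np.ones((m, m)) / m       # centring matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:s]          # keep the s largest eigenvalues
    L = np.sqrt(np.clip(vals[idx], 0, None))  # guard against tiny negatives
    return vecs[:, idx] * L                   # m x s configuration

# Distances among 4 points on a line are recovered exactly in 1D.
X = np.array([[0.0], [1.0], [3.0], [6.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, 1)
print(np.allclose(D, np.abs(Y - Y.T)))  # True
```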
<p>The main contribution of this paper is the use of Isomap based on SMACOF (Scaling by MAjorizing a COmplicated Function), which is considered to be the most accurate MDS method (Borg and Groenen, <xref ref-type="bibr" rid="j_info1220_ref_008">2005</xref>) and is used when solving various MDS problems in social and behavioural sciences, marketing, biometrics, and ecology. Nevertheless, it is also one of the most computationally demanding methods (Ingram <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_027">2009</xref>). In previous work (Li <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>), where Isomap is studied in depth, the authors consider classical scaling methods such as an eigen-decomposition process. However, our proposal is to consider Isomap based on SMACOF due to its high accuracy. In this paper, the results obtained by both strategies, Isomap using eigen-decomposition and Isomap based on SMACOF, are compared in terms of classification accuracy. Such a comparison is carried out by means of three popular HSIs, using the same configurations in both cases.</p>
<p>The paper is organized as follows. In Section <xref rid="j_info1220_s_002">2</xref>, the description of the Isomap method is provided. Section <xref rid="j_info1220_s_003">3</xref> describes the SMACOF algorithm. In Section <xref rid="j_info1220_s_004">4</xref>, the results obtained after applying two versions of Isomap (with eigen-decomposition and with SMACOF) on several test images are discussed. Finally, we conclude this work in Section <xref rid="j_info1220_s_005">5</xref>.</p>
</sec>
<sec id="j_info1220_s_002">
<label>2</label>
<title>Isomap</title>
<p>Isomap is a manifold learning algorithm which can reduce data redundancy while preserving the original geometry of the data. Isomap estimates the geodesic distance between all the items, given only input-space distances. For points which are neighbours, the input-space distance is an accurate approximation to the geodesic distance. For distant ones, the geodesic distance can be computed as the sum of a sequence of distances between neighbouring points. The main idea is to find the shortest paths in a graph with edges connecting neighbouring data points (Tenenbaum <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_043">2000</xref>).</p>
<p>Isomap builds a matrix which contains all the minimum (geodesic) distances between the <italic>m</italic> items contained in a data set <italic>X</italic> (an HSI in our case), and then it reduces this matrix. In detail, the algorithm has three steps. They are shown in Algorithm <xref rid="j_info1220_fig_001">1</xref> and described below:</p>
<fig id="j_info1220_fig_001">
<label>Algorithm 1</label>
<caption>
<p>Isomap(<italic>m</italic>, <italic>b</italic>, <italic>X</italic>, <italic>l</italic>, <italic>k</italic>, <italic>s</italic>, <inline-formula id="j_info1220_ineq_003"><alternatives><mml:math>
<mml:mi mathvariant="italic">imax</mml:mi></mml:math><tex-math><![CDATA[$\mathit{imax}$]]></tex-math></alternatives></inline-formula>, <italic>ϵ</italic>)</p>
</caption>
<graphic xlink:href="info1220_g001.jpg"/>
</fig>
<fig id="j_info1220_fig_002">
<label>Algorithm 2</label>
<caption>
<p>KNN(<italic>m</italic>, <italic>b</italic>, <italic>X</italic>, <italic>k</italic>, <italic>l</italic>, <italic>j</italic>)</p>
</caption>
<graphic xlink:href="info1220_g002.jpg"/>
</fig>
<list>
<list-item id="j_info1220_li_001">
<label>1.</label>
<p>To set a number <italic>l</italic> of neighbours. This number will be the same for all the items (points) <inline-formula id="j_info1220_ineq_004"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{i}}$]]></tex-math></alternatives></inline-formula>. Then, to determine the neighbours for every item <inline-formula id="j_info1220_ineq_005"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{i}}$]]></tex-math></alternatives></inline-formula> finding the <italic>l</italic> nearest points, taking into account that two points <inline-formula id="j_info1220_ineq_006"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{i}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1220_ineq_007"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{j}}$]]></tex-math></alternatives></inline-formula> cannot be neighbours if the distance between them is greater than a fixed value <italic>k</italic>. Euclidean distances between the <italic>m</italic> items are used. In this way, a graph <italic>G</italic> is constructed. Algorithm <xref rid="j_info1220_fig_002">2</xref> describes the <italic>l</italic>-nearest neighbour (KNN) algorithm, which is commonly used to build neighbourhoods (Tay <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_042">2014</xref>).</p>
</list-item>
<list-item id="j_info1220_li_002">
<label>2.</label>
<p>To calculate the shortest distance between all pair of points in <italic>G</italic>. When <inline-formula id="j_info1220_ineq_008"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{i}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1220_ineq_009"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{j}}$]]></tex-math></alternatives></inline-formula> are neighbours, their distance is Euclidean. However, when the points are not neighbours, the distance is computed as the shortest path between all possible ones in <italic>G</italic> which connects <inline-formula id="j_info1220_ineq_010"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{i}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1220_ineq_011"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{j}}$]]></tex-math></alternatives></inline-formula>, that is, <inline-formula id="j_info1220_ineq_012"><alternatives><mml:math>
<mml:mi mathvariant="italic">d</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">m</mml:mi>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">n</mml:mi></mml:math><tex-math><![CDATA[$d({X_{i}},{X_{j}})=min$]]></tex-math></alternatives></inline-formula>{<inline-formula id="j_info1220_ineq_013"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">d</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">G</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">d</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">G</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">d</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">G</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${d_{G}}({X_{i}},{X_{j}}),{d_{G}}({X_{i}},{X_{n}})+{d_{G}}({X_{n}},{X_{j}})$]]></tex-math></alternatives></inline-formula>}, where <inline-formula id="j_info1220_ineq_014"><alternatives><mml:math>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">m</mml:mi></mml:math><tex-math><![CDATA[$n=1,\dots ,m$]]></tex-math></alternatives></inline-formula>. As a result of this step, an <inline-formula id="j_info1220_ineq_015"><alternatives><mml:math>
<mml:mi mathvariant="italic">m</mml:mi>
<mml:mo>×</mml:mo>
<mml:mi mathvariant="italic">m</mml:mi></mml:math><tex-math><![CDATA[$m\times m$]]></tex-math></alternatives></inline-formula> matrix Δ, which contains the shortest distances, is obtained. In this work, Dijkstra’s algorithm has been used to calculate the shortest paths in <italic>G</italic>, according to Algorithm <xref rid="j_info1220_fig_003">3</xref> (Dijkstra, <xref ref-type="bibr" rid="j_info1220_ref_016">1959</xref>). The authors in Deng <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_015">2012</xref>) describe Dijkstra’s algorithm in the following steps:</p>
<list>
<list-item id="j_info1220_li_003">
<label>•</label>
<p>To initialize all nodes to <italic>∞</italic>, except the initial one, which is set to 0. The neighbours of the initial node already have their distances. To mark all nodes as unvisited, as shown in Fig. <xref rid="j_info1220_fig_004">1</xref>(a).</p>
</list-item>
<list-item id="j_info1220_li_004">
<label>•</label>
<p>To consider all the unvisited neighbours of the current node and to calculate their distances through it. For every neighbour, to compare this distance with its previous distance and to assign the smaller one to the node. An example is shown in Fig. <xref rid="j_info1220_fig_004">1</xref>(b).</p>
</list-item>
<list-item id="j_info1220_li_005">
<label>•</label>
<p>When all the neighbours have been considered, to mark the current node as visited. A visited node will never be checked again. To move to the unvisited node with the smallest distance and to repeat the previous steps, as shown in Fig. <xref rid="j_info1220_fig_004">1</xref>(c).</p>
</list-item>
<list-item id="j_info1220_li_006">
<label>•</label>
<p>If the final node has been marked as visited or if there is no path between the initial and the final node (all paths have a step marked as infinite), then the algorithm has finished. The final step is shown in Fig. <xref rid="j_info1220_fig_004">1</xref>(d).</p>
</list-item>
</list>
</list-item>
<list-item id="j_info1220_li_007">
<label>3.</label>
<p>To apply any MDS method to the shortest distances (Δ). Particularly, in this work, SMACOF and the eigen-decomposition methods are considered.</p>
</list-item>
</list>
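<p>The first two steps above can be sketched as follows (an illustrative outline on synthetic data; for brevity, the neighbourhood here is defined only by the <italic>l</italic> nearest points, omitting the maximum-distance threshold <italic>k</italic>):</p>

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def isomap_distances(X, l):
    """Steps 1-2 of Isomap: l-NN graph, then geodesic distances."""
    E = cdist(X, X)                     # Euclidean distances (step 1)
    W = np.full_like(E, np.inf)         # inf marks a non-edge
    for i in range(len(X)):
        nn = np.argsort(E[i])[1:l + 1]  # l nearest neighbours of X_i (skip self)
        W[i, nn] = E[i, nn]             # keep only neighbour edges
    # Step 2: shortest paths in the graph give the geodesic matrix Delta.
    return shortest_path(W, method="D", directed=False)  # "D" = Dijkstra

rng = np.random.default_rng(0)
X = rng.random((30, 5))
Delta = isomap_distances(X, l=5)
print(Delta.shape)  # (30, 30)
# Step 3 would apply an MDS method (eigen-decomposition or SMACOF) to Delta.
```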
<p>To evaluate the accuracy of Isomap based on SMACOF and eigen-decomposition methods for HSIs, a classification process with several classifiers – the Support Vector Machine (SVM) (Cortes and Vapnik, <xref ref-type="bibr" rid="j_info1220_ref_013">1995</xref>), the KNN classifier (Altman, <xref ref-type="bibr" rid="j_info1220_ref_002">1992</xref>) and the Random Forest algorithm (Breiman, <xref ref-type="bibr" rid="j_info1220_ref_009">2001</xref>) – has been used.</p>
<fig id="j_info1220_fig_003">
<label>Algorithm 3</label>
<caption>
<p>Dijkstra(<italic>m</italic>, <italic>G</italic>)</p>
</caption>
<graphic xlink:href="info1220_g003.jpg"/>
</fig>
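<p>The four steps of Dijkstra’s algorithm listed above can be sketched in plain Python (an illustrative implementation; the graph representation, a dictionary of neighbour distances, is an assumption):</p>

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start, following the four steps above."""
    # Step 1: all nodes start at infinity, except the initial node (0).
    dist = {node: float("inf") for node in graph}
    dist[start] = 0.0
    visited = set()
    queue = [(0.0, start)]
    while queue:
        # Step 3: take the unvisited node with the smallest distance.
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)  # a visited node is never checked again
        # Step 2: relax the distances of the unvisited neighbours.
        for nbr, w in graph[node].items():
            if d + w < dist[nbr]:
                dist[nbr] = d + w
                heapq.heappush(queue, (dist[nbr], nbr))
    # Step 4: unreachable nodes keep distance infinity.
    return dist

g = {"a": {"b": 1.0, "c": 4.0}, "b": {"a": 1.0, "c": 2.0}, "c": {"a": 4.0, "b": 2.0}}
print(dijkstra(g, "a"))  # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```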
<fig id="j_info1220_fig_004">
<label>Fig. 1</label>
<caption>
<p>Steps of Dijkstra’s algorithm.</p>
</caption>
<graphic xlink:href="info1220_g004.jpg"/>
</fig>
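<p>The classification stage can be sketched with scikit-learn (an illustrative setup on synthetic data; the classifier parameters are library defaults, not the configurations used in the experiments):</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic "reduced pixels": 200 samples with 10 retained dimensions.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = (X[:, 0] > 0.5).astype(int)  # toy ground-truth labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The same train/test split is reused for every classifier.
for clf in (SVC(), KNeighborsClassifier(), RandomForestClassifier(random_state=0)):
    score = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(clf).__name__, round(score, 2))
```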
</sec>
<sec id="j_info1220_s_003">
<label>3</label>
<title>SMACOF</title>
<p>SMACOF, like other MDS methods, is used for the analysis of similarity data on a set of items. As mentioned before, SMACOF is the most accurate MDS technique (Ingram <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_027">2009</xref>). Its objective is to find a set of points <inline-formula id="j_info1220_ineq_016"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">≡</mml:mo>
<mml:mi mathvariant="italic">Y</mml:mi></mml:math><tex-math><![CDATA[${Y_{1}},{Y_{2}},\dots ,{Y_{m}}\equiv Y$]]></tex-math></alternatives></inline-formula> in a low-dimensional space <inline-formula id="j_info1220_ineq_017"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="double-struck">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${\mathbb{R}^{s}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_info1220_ineq_018"><alternatives><mml:math>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:mi mathvariant="italic">b</mml:mi></mml:math><tex-math><![CDATA[$s<b$]]></tex-math></alternatives></inline-formula> (where <italic>b</italic> is the original number of dimensions), taking into account that the distances between these points must be as similar as possible to the distance between the original points <inline-formula id="j_info1220_ineq_019"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">≡</mml:mo>
<mml:mi mathvariant="italic">X</mml:mi></mml:math><tex-math><![CDATA[${X_{1}},{X_{2}},\dots ,{X_{m}}\equiv X$]]></tex-math></alternatives></inline-formula> (Orts <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_036">2018</xref>). The key is the stress function (Eq. (<xref rid="j_info1220_eq_001">1</xref>)). The lower the stress, the better the results, since the stress measures the difference between the distances of the original points and the distances of the points in the low-dimensional space. In Eq. (<xref rid="j_info1220_eq_001">1</xref>), <italic>δ</italic> represents the distance between points of <italic>X</italic>, and <italic>d</italic> the distance between points of <italic>Y</italic>. 
<disp-formula id="j_info1220_eq_001">
<label>(1)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">MDS</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:munder>
<mml:msup>
<mml:mrow>
<mml:mo maxsize="1.19em" minsize="1.19em" fence="true" mathvariant="normal">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">d</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo maxsize="1.19em" minsize="1.19em" fence="true" mathvariant="normal">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {E_{\mathit{MDS}}}=\sum \limits_{i<j}{\big({\delta _{ij}}-d({Y_{i}},{Y_{j}})\big)^{2}}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
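<p>Equation (1) translates directly into code. The following NumPy function is an illustrative sketch (not the paper's Matlab implementation) that computes the stress of an embedding <italic>Y</italic> given the matrix of original distances <italic>δ</italic>:</p>

```python
import numpy as np

def mds_stress(delta, Y):
    """E_MDS = sum over i < j of (delta_ij - d(Y_i, Y_j))^2, where delta holds
    the distances between the original points and d is the Euclidean distance
    in the low-dimensional space."""
    diff = Y[:, None, :] - Y[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise distances of Y
    iu = np.triu_indices(len(Y), k=1)       # index pairs with i < j
    return float(((delta[iu] - d[iu]) ** 2).sum())
```

<p>For a perfect embedding the stress is zero; every distorted pairwise distance contributes quadratically.</p>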
<p>The majorizing concept, i.e. approximating a large or complex function by a smaller or simpler one, is used by SMACOF to reduce the stress (Groenen <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_023">1995</xref>). It consists of finding a new function iteratively. The new function lies above the complex one, touching it at a point called the <italic>supporting</italic> point (Fig. <xref rid="j_info1220_fig_005">2</xref>). Each iteration brings the minimum of the new function closer to the minimum of the original one, that is, the stress function (Borg and Groenen, <xref ref-type="bibr" rid="j_info1220_ref_008">2005</xref>; Mairal <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_032">2014</xref>). In De Leeuw and Mair (<xref ref-type="bibr" rid="j_info1220_ref_014">2011</xref>), majorization is defined by the following steps:</p>
<list>
<list-item id="j_info1220_li_008">
<label>1.</label>
<p>Choose an initial value <inline-formula id="j_info1220_ineq_020"><alternatives><mml:math>
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[$y={y_{0}}$]]></tex-math></alternatives></inline-formula>.</p>
<p>
<fig id="j_info1220_fig_005">
<label>Fig. 2</label>
<caption>
<p>Illustration of the majorization concept. The original function <italic>f</italic> is represented with a blue dashed line. The function obtained by majorization at every iteration, <italic>g</italic>, represented as a red dotted line, touches <italic>f</italic> at the supporting point. Note that a new minimum of <italic>g</italic> is obtained at every iteration.</p>
</caption>
<graphic xlink:href="info1220_g005.jpg"/>
</fig>
</p>
</list-item>
<list-item id="j_info1220_li_009">
<label>2.</label>
<p>Find an update <inline-formula id="j_info1220_ineq_021"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${x^{t}}$]]></tex-math></alternatives></inline-formula> such that <inline-formula id="j_info1220_ineq_022"><alternatives><mml:math>
<mml:mi mathvariant="italic">g</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>⩽</mml:mo>
<mml:mi mathvariant="italic">g</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$g({x^{t}},y)\leqslant g(y,y)$]]></tex-math></alternatives></inline-formula>.</p>
</list-item>
<list-item id="j_info1220_li_010">
<label>3.</label>
<p>If <inline-formula id="j_info1220_ineq_023"><alternatives><mml:math>
<mml:mi mathvariant="italic">f</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">f</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>⩾</mml:mo>
<mml:mi mathvariant="italic">ϵ</mml:mi></mml:math><tex-math><![CDATA[$f(y)-f({x^{t}})\geqslant \epsilon $]]></tex-math></alternatives></inline-formula>, then <inline-formula id="j_info1220_ineq_024"><alternatives><mml:math>
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$y={x^{t}}$]]></tex-math></alternatives></inline-formula> and go to step 2.</p>
</list-item>
</list>
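<p>The three steps above describe a generic majorize–minimize loop. The following minimal sketch applies them to a one-dimensional example function, using the standard quadratic majorizer <italic>g</italic>(<italic>x</italic>, <italic>y</italic>) = <italic>f</italic>(<italic>y</italic>) + <italic>f</italic>′(<italic>y</italic>)(<italic>x</italic> − <italic>y</italic>) + (<italic>L</italic>/2)(<italic>x</italic> − <italic>y</italic>)<sup>2</sup> with a known curvature bound <italic>L</italic>; the objective and all names are illustrative assumptions, not taken from the paper:</p>

```python
import math

def f(x):
    # Example objective playing the role of the complex function.
    return x ** 2 + 3.0 * math.sin(x)

def df(x):
    return 2.0 * x + 3.0 * math.cos(x)

def majorize_minimize(y0, L=5.0, eps=1e-10, max_iter=1000):
    """Steps 1-3 of the scheme: g(x, y) = f(y) + f'(y)(x - y) + (L/2)(x - y)^2
    lies above f (since f'' <= L) and touches it at the supporting point y;
    minimising g over x gives a step of length 1/L along -f'."""
    y = y0                              # step 1: initial value
    for _ in range(max_iter):
        x_t = y - df(y) / L             # step 2: minimiser of g(., y)
        if f(y) - f(x_t) < eps:         # step 3: stop when progress is tiny
            break
        y = x_t
    return y
```

<p>Because <italic>g</italic> majorizes <italic>f</italic>, each step can only decrease <italic>f</italic>, which is exactly the monotone-descent property SMACOF exploits for the stress.</p>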
<p>Algorithm <xref rid="j_info1220_fig_006">4</xref> shows all the steps of SMACOF. In this algorithm, the initial value <inline-formula id="j_info1220_ineq_025"><alternatives><mml:math>
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[$y={y_{0}}$]]></tex-math></alternatives></inline-formula> mentioned in step 1 is randomly generated. Other works have shown that SMACOF obtains good results when starting from randomly generated solutions (Orts <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_036">2018</xref>). The stress value of the current mapping is measured and compared to the stress value of the previous mapping. Each iteration reduces the stress value, since the generated solutions become closer to the original configuration. If the difference between consecutive stress values is smaller than a fixed threshold, the algorithm stops (Ekanayake <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_018">2010</xref>), as mentioned in step 3. For the sake of simplicity, the details of the Guttman transform, used to update <inline-formula id="j_info1220_ineq_026"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${x^{(t)}}$]]></tex-math></alternatives></inline-formula>, have not been explained here.</p>
<fig id="j_info1220_fig_006">
<label>Algorithm 4</label>
<caption>
<p>SMACOF(<italic>m</italic>, <italic>s</italic>, Δ, <inline-formula id="j_info1220_ineq_027"><alternatives><mml:math>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">m</mml:mi>
<mml:mi mathvariant="italic">a</mml:mi>
<mml:mi mathvariant="italic">x</mml:mi></mml:math><tex-math><![CDATA[$imax$]]></tex-math></alternatives></inline-formula>, <italic>ϵ</italic>)</p>
</caption>
<graphic xlink:href="info1220_g006.jpg"/>
</fig>
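<p>For completeness, the main loop of Algorithm 4 can be sketched in NumPy for the unweighted case, in which the Guttman transform reduces to <italic>X</italic> ← <italic>B</italic>(<italic>X</italic>)<italic>X</italic>/<italic>m</italic>. This is an illustrative reconstruction under that assumption, not the authors' Matlab code:</p>

```python
import numpy as np

def pairwise_dist(X):
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def stress(delta, X):
    iu = np.triu_indices(len(X), k=1)
    d = pairwise_dist(X)
    return float(((delta[iu] - d[iu]) ** 2).sum())

def smacof(delta, s, imax=300, eps=1e-9, seed=0):
    """Unweighted SMACOF: start from a random configuration (step 1), apply
    the Guttman transform X <- B(X) X / m (step 2), and stop when the stress
    decrease falls below eps (step 3)."""
    m = delta.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((m, s))       # random initial mapping y0
    prev = cur = stress(delta, X)
    for _ in range(imax):
        d = pairwise_dist(X)
        # ratio_ij = delta_ij / d_ij where d_ij > 0, else 0 (diagonal included).
        ratio = np.divide(delta, d, out=np.zeros_like(delta), where=d > 0)
        B = -ratio
        np.fill_diagonal(B, ratio.sum(axis=1))   # b_ii = sum of delta_ij / d_ij
        X = B @ X / m                     # unweighted Guttman transform
        cur = stress(delta, X)
        if prev - cur < eps:              # step-3 stopping rule
            break
        prev = cur
    return X, cur
```

<p>Majorization guarantees that the stress sequence is non-increasing, so the loop can safely monitor the difference between consecutive stress values.</p>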
</sec>
<sec id="j_info1220_s_004">
<label>4</label>
<title>Evaluation Results</title>
<fig id="j_info1220_fig_007">
<label>Fig. 3</label>
<caption>
<p>HSIs tested. Pavia city centre (A) and its ground truth (B), Salinas-A (C) and its ground truth (D), and Indian Pines with its ground truth ((E) and (F) respectively).</p>
</caption>
<graphic xlink:href="info1220_g007.jpg"/>
</fig>
<p>The following methodology has been considered in this work: first, Isomap is run using either the SMACOF or the eigen-decomposition method; then, a classification process is applied with the SVM, KNN or Random Forest classifiers.</p>
<p>The results of Isomap using SMACOF are compared with those obtained in a recent paper in which Isomap relies on an eigen-decomposition process (Li <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>) for the problem of hyperspectral image reduction. As in Li <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>), three popular HSI images collected by the AVIRIS and ROSIS sensors have been considered to test Isomap (see Fig. <xref rid="j_info1220_fig_007">3</xref>). The considered data sets have the following characteristics:</p>
<list>
<list-item id="j_info1220_li_011">
<label>•</label>
<p>Pavia city centre (AVIRIS Salinas Valley, <xref ref-type="bibr" rid="j_info1220_ref_004">2019</xref>), acquired by the ROSIS sensor. Pavia consists of <inline-formula id="j_info1220_ineq_028"><alternatives><mml:math>
<mml:mn>1096</mml:mn>
<mml:mo>×</mml:mo>
<mml:mn>715</mml:mn></mml:math><tex-math><![CDATA[$1096\times 715$]]></tex-math></alternatives></inline-formula> pixels and 102 bands. For the sake of clarity, the data set is reduced to a <inline-formula id="j_info1220_ineq_029"><alternatives><mml:math>
<mml:mn>150</mml:mn>
<mml:mo>×</mml:mo>
<mml:mn>150</mml:mn></mml:math><tex-math><![CDATA[$150\times 150$]]></tex-math></alternatives></inline-formula>-pixel subset. However, the authors in Li <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>) do not detail how the image is truncated in their study. In our work, random subsets of <inline-formula id="j_info1220_ineq_030"><alternatives><mml:math>
<mml:mn>150</mml:mn>
<mml:mo>×</mml:mo>
<mml:mn>150</mml:mn></mml:math><tex-math><![CDATA[$150\times 150$]]></tex-math></alternatives></inline-formula> pixels are collected, preserving the variety of the ground truth.</p>
</list-item>
<list-item id="j_info1220_li_012">
<label>•</label>
<p>A small sub-scene of the Salinas image (AVIRIS sensor), named Salinas-A. Salinas-A consists of <inline-formula id="j_info1220_ineq_031"><alternatives><mml:math>
<mml:mn>86</mml:mn>
<mml:mo>×</mml:mo>
<mml:mn>83</mml:mn></mml:math><tex-math><![CDATA[$86\times 83$]]></tex-math></alternatives></inline-formula> pixels, which are the <inline-formula id="j_info1220_ineq_032"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mi mathvariant="italic">a</mml:mi>
<mml:mi mathvariant="italic">m</mml:mi>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mi mathvariant="italic">l</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">l</mml:mi>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mo fence="true" stretchy="false">]</mml:mo></mml:math><tex-math><![CDATA[$[samples,lines]$]]></tex-math></alternatives></inline-formula> <inline-formula id="j_info1220_ineq_033"><alternatives><mml:math>
<mml:mo>=</mml:mo>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mn>591</mml:mn>
<mml:mo>−</mml:mo>
<mml:mn>676</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>158</mml:mn>
<mml:mo>−</mml:mo>
<mml:mn>240</mml:mn>
<mml:mo fence="true" stretchy="false">]</mml:mo></mml:math><tex-math><![CDATA[$=[591-676,158-240]$]]></tex-math></alternatives></inline-formula> of the original Salinas data set. It contains 204 bands.</p>
</list-item>
<list-item id="j_info1220_li_013">
<label>•</label>
<p>The Indian Pines data set (Aviris, <xref ref-type="bibr" rid="j_info1220_ref_035">2012</xref>) collected by the AVIRIS sensor. It consists of <inline-formula id="j_info1220_ineq_034"><alternatives><mml:math>
<mml:mn>145</mml:mn>
<mml:mo>×</mml:mo>
<mml:mn>145</mml:mn></mml:math><tex-math><![CDATA[$145\times 145$]]></tex-math></alternatives></inline-formula> pixels and, originally, 224 bands. However, the 24 bands covering the water absorption region are removed in Li <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>), so 200 bands are used in the tests.</p>
</list-item>
</list>
<p>Both Isomap versions (SMACOF and eigen-decomposition) have been implemented in Matlab and executed on a cluster composed of 64 cores of Bullx R424-E3 Intel Xeon E5 2650 with 8 GB RAM. Specifically, the KNN and Dijkstra procedures (Algorithms <xref rid="j_info1220_fig_002">2</xref> and <xref rid="j_info1220_fig_003">3</xref>) have been coded using the Matlab functions <italic>find_nn</italic> and <italic>dijkstra</italic>, respectively. The accuracy of the classification process depends on the dimension of the low-dimensional space (<italic>s</italic>) used by Isomap. Therefore, several dimensions <italic>s</italic> have been considered to study their effect on the classification accuracy. Concretely, we varied <italic>s</italic> from 10 to 50, as was done in Li <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>). The parameter <italic>k</italic>, which specifies the number of neighbours considered for each point, has been set to 20.</p>
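<p>As an illustrative alternative to the Matlab <italic>find_nn</italic> and <italic>dijkstra</italic> routines, the neighbourhood-graph and shortest-path stages of Isomap can be sketched with SciPy; the function name and the toy usage are assumptions, not the authors' code:</p>

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra
from scipy.spatial.distance import cdist

def geodesic_distances(X, k=20):
    """Isomap's first stage: connect every point (pixel spectrum) to its k
    nearest neighbours and return the shortest-path distances over the
    resulting graph, i.e. the geodesic distance matrix fed to SMACOF."""
    d = cdist(X, X)                           # Euclidean distances
    m = d.shape[0]
    graph = np.zeros_like(d)                  # zero entries mean "no edge"
    idx = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest neighbours per point
    rows = np.repeat(np.arange(m), k)
    graph[rows, idx.ravel()] = d[rows, idx.ravel()]
    graph = np.maximum(graph, graph.T)        # make the k-NN graph undirected
    return dijkstra(graph, directed=False)
```

<p>For real HSI data the rows of <italic>X</italic> would be the pixel spectra and <italic>k</italic> = 20, matching the setting described above.</p>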
<p>Following the idea described in Li <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>), several classifiers have been considered to evaluate the accuracy of both versions of Isomap for HSI classification, namely the SVM and KNN classifiers. In addition, we have also considered the Random Forest algorithm. As in Li <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>), training and testing data were randomly selected from the ground truth. <inline-formula id="j_info1220_ineq_035"><alternatives><mml:math>
<mml:mn>20</mml:mn>
<mml:mi mathvariant="normal">%</mml:mi></mml:math><tex-math><![CDATA[$20\% $]]></tex-math></alternatives></inline-formula> of the total pixels of each image were used for training, and the remaining <inline-formula id="j_info1220_ineq_036"><alternatives><mml:math>
<mml:mn>80</mml:mn>
<mml:mi mathvariant="normal">%</mml:mi></mml:math><tex-math><![CDATA[$80\% $]]></tex-math></alternatives></inline-formula> for testing. The comparative analysis is based on the classification accuracy, obtained as the ratio <italic>correctly predicted data</italic>/<italic>total testing data</italic>.</p>
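<p>The evaluation protocol (a random 20%/80% split of the labelled pixels, with accuracy as correct/total) can be stated compactly; the sketch below is generic, not the authors' code:</p>

```python
import numpy as np

def random_split(n_pixels, train_frac=0.2, seed=0):
    """Randomly assign 20% of the labelled pixels to training and the
    remaining 80% to testing."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_pixels)
    n_train = int(round(train_frac * n_pixels))
    return perm[:n_train], perm[n_train:]

def accuracy(y_true, y_pred):
    """Classification accuracy: correctly predicted data / total testing data."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())
```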
<fig id="j_info1220_fig_008">
<label>Fig. 4</label>
<caption>
<p>Classification results (in terms of accuracy) of the three HSI data sets using SVM: (a) Indian Pines; (b) Salinas-A; (c) Pavia.</p>
</caption>
<graphic xlink:href="info1220_g008.jpg"/>
</fig>
<p>The SVM is coded using LIBSVM described in Chang and Lin (<xref ref-type="bibr" rid="j_info1220_ref_012">2011</xref>) with the following parameters: “<inline-formula id="j_info1220_ineq_037"><alternatives><mml:math>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">t</mml:mi></mml:math><tex-math><![CDATA[$-t$]]></tex-math></alternatives></inline-formula> 2 <inline-formula id="j_info1220_ineq_038"><alternatives><mml:math>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">c</mml:mi></mml:math><tex-math><![CDATA[$-c$]]></tex-math></alternatives></inline-formula> 100” (<inline-formula id="j_info1220_ineq_039"><alternatives><mml:math>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">t</mml:mi></mml:math><tex-math><![CDATA[$-t$]]></tex-math></alternatives></inline-formula> 2 sets the type of kernel function as radial basis function, and <inline-formula id="j_info1220_ineq_040"><alternatives><mml:math>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">c</mml:mi></mml:math><tex-math><![CDATA[$-c$]]></tex-math></alternatives></inline-formula> 100 sets the cost parameter to 100). It is not necessary to set the gamma value, <inline-formula id="j_info1220_ineq_041"><alternatives><mml:math>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">g</mml:mi></mml:math><tex-math><![CDATA[$-g$]]></tex-math></alternatives></inline-formula> (a parameter used as input by the radial basis function), as it is automatically set to “<inline-formula id="j_info1220_ineq_042"><alternatives><mml:math>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">g</mml:mi></mml:math><tex-math><![CDATA[$-g$]]></tex-math></alternatives></inline-formula> <inline-formula id="j_info1220_ineq_043"><alternatives><mml:math>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" stretchy="false">/</mml:mo>
<mml:mi mathvariant="italic">D</mml:mi></mml:math><tex-math><![CDATA[$1/D$]]></tex-math></alternatives></inline-formula>”, where <italic>D</italic> is the dimension of the input data. The input data are transformed following the data preprocessing described in Hsu <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_026">2003</xref>). The results obtained using the SVM are depicted in Fig. <xref rid="j_info1220_fig_008">4</xref>.</p>
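<p>The same configuration can be reproduced outside Matlab. The sketch below uses scikit-learn's SVC as a stand-in for LIBSVM (“-t 2” → RBF kernel, “-c 100” → cost 100, gamma = 1/<italic>D</italic>); the synthetic two-class data merely stands in for the reduced HSI pixels:</p>

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def make_svm(D):
    # Mirrors the LIBSVM flags: RBF kernel, cost 100, gamma = 1/D.
    return SVC(kernel="rbf", C=100.0, gamma=1.0 / D)

# Two well-separated synthetic classes standing in for reduced HSI pixels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 3)), rng.normal(2.0, 0.3, (20, 3))])
y = np.array([0] * 20 + [1] * 20)

# Scale features, in the spirit of the preprocessing of Hsu et al. (2003).
Xs = MinMaxScaler().fit_transform(X)
clf = make_svm(D=Xs.shape[1]).fit(Xs, y)
acc = clf.score(Xs, y)
```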
<table-wrap id="j_info1220_tab_001">
<label>Table 1</label>
<caption>
<p>Classification results (in terms of accuracy) of the three HSI data sets using KNN for <inline-formula id="j_info1220_ineq_044"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>3</mml:mn></mml:math><tex-math><![CDATA[${k^{\prime }}=1,3$]]></tex-math></alternatives></inline-formula> and 5 and test images Indian Pines, Salinas-A and Pavia.</p>
</caption>
<table>
<thead>
<tr>
<td rowspan="3" style="vertical-align: bottom; text-align: left; border-top: solid thin; border-bottom: solid thin">IMAGE</td>
<td rowspan="3" style="vertical-align: bottom; text-align: left; border-top: solid thin; border-bottom: solid thin"><italic>s</italic></td>
<td colspan="3" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">SMACOF</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">EIGEN-DECOMPOSITION</td>
</tr>
<tr>
<td colspan="6" style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_info1220_ineq_045"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${k^{\prime }}$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">5</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left"><bold>Indian Pines</bold></td>
<td style="vertical-align: top; text-align: left"><bold>50</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.8112</italic></td>
<td style="vertical-align: top; text-align: left">0.7958</td>
<td style="vertical-align: top; text-align: left">0.7943</td>
<td style="vertical-align: top; text-align: left"><italic>0.7250</italic></td>
<td style="vertical-align: top; text-align: left">0.6956</td>
<td style="vertical-align: top; text-align: left">0.6881</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>40</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.8046</italic></td>
<td style="vertical-align: top; text-align: left">0.7987</td>
<td style="vertical-align: top; text-align: left">0.7912</td>
<td style="vertical-align: top; text-align: left"><italic>0.7200</italic></td>
<td style="vertical-align: top; text-align: left">0.6965</td>
<td style="vertical-align: top; text-align: left">0.6884</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>30</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.8068</italic></td>
<td style="vertical-align: top; text-align: left">0.7849</td>
<td style="vertical-align: top; text-align: left">0.7814</td>
<td style="vertical-align: top; text-align: left"><italic>0.7150</italic></td>
<td style="vertical-align: top; text-align: left">0.6933</td>
<td style="vertical-align: top; text-align: left">0.6893</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>20</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.8179</italic></td>
<td style="vertical-align: top; text-align: left">0.8069</td>
<td style="vertical-align: top; text-align: left">0.7845</td>
<td style="vertical-align: top; text-align: left"><italic>0.7150</italic></td>
<td style="vertical-align: top; text-align: left">0.6916</td>
<td style="vertical-align: top; text-align: left">0.6879</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>10</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.8090</italic></td>
<td style="vertical-align: top; text-align: left">0.7915</td>
<td style="vertical-align: top; text-align: left">0.7877</td>
<td style="vertical-align: top; text-align: left"><italic>0.7050</italic></td>
<td style="vertical-align: top; text-align: left">0.6896</td>
<td style="vertical-align: top; text-align: left">0.6880</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><bold>Salinas-A</bold></td>
<td style="vertical-align: top; text-align: left"><bold>50</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.9946</italic></td>
<td style="vertical-align: top; text-align: left">0.9931</td>
<td style="vertical-align: top; text-align: left">0.9890</td>
<td style="vertical-align: top; text-align: left">0.9899</td>
<td style="vertical-align: top; text-align: left">0.9714</td>
<td style="vertical-align: top; text-align: left">0.9658</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>40</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.9952</italic></td>
<td style="vertical-align: top; text-align: left">0.9913</td>
<td style="vertical-align: top; text-align: left">0.9925</td>
<td style="vertical-align: top; text-align: left"><italic>0.9896</italic></td>
<td style="vertical-align: top; text-align: left">0.9733</td>
<td style="vertical-align: top; text-align: left">0.9654</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>30</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.9950</italic></td>
<td style="vertical-align: top; text-align: left">0.9935</td>
<td style="vertical-align: top; text-align: left">0.9904</td>
<td style="vertical-align: top; text-align: left"><italic>0.9898</italic></td>
<td style="vertical-align: top; text-align: left">0.9765</td>
<td style="vertical-align: top; text-align: left">0.9645</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>20</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.9952</italic></td>
<td style="vertical-align: top; text-align: left">0.9917</td>
<td style="vertical-align: top; text-align: left">0.9914</td>
<td style="vertical-align: top; text-align: left"><italic>0.9892</italic></td>
<td style="vertical-align: top; text-align: left">0.9743</td>
<td style="vertical-align: top; text-align: left">0.9699</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>10</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.9963</italic></td>
<td style="vertical-align: top; text-align: left">0.9890</td>
<td style="vertical-align: top; text-align: left">0.9924</td>
<td style="vertical-align: top; text-align: left"><italic>0.9890</italic></td>
<td style="vertical-align: top; text-align: left">0.9765</td>
<td style="vertical-align: top; text-align: left">0.9687</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><bold>Pavia</bold></td>
<td style="vertical-align: top; text-align: left"><bold>50</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.9917</italic></td>
<td style="vertical-align: top; text-align: left">0.9503</td>
<td style="vertical-align: top; text-align: left">0.9488</td>
<td style="vertical-align: top; text-align: left"><italic>0.9729</italic></td>
<td style="vertical-align: top; text-align: left">0.9365</td>
<td style="vertical-align: top; text-align: left">0.9211</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>40</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.9929</italic></td>
<td style="vertical-align: top; text-align: left">0.9407</td>
<td style="vertical-align: top; text-align: left">0.9463</td>
<td style="vertical-align: top; text-align: left"><italic>0.9720</italic></td>
<td style="vertical-align: top; text-align: left">0.9320</td>
<td style="vertical-align: top; text-align: left">0.9232</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>30</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.9940</italic></td>
<td style="vertical-align: top; text-align: left">0.9597</td>
<td style="vertical-align: top; text-align: left">0.9525</td>
<td style="vertical-align: top; text-align: left"><italic>0.9729</italic></td>
<td style="vertical-align: top; text-align: left">0.9365</td>
<td style="vertical-align: top; text-align: left">0.9235</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left"><bold>20</bold></td>
<td style="vertical-align: top; text-align: left"><italic>0.9937</italic></td>
<td style="vertical-align: top; text-align: left">0.9598</td>
<td style="vertical-align: top; text-align: left">0.9526</td>
<td style="vertical-align: top; text-align: left"><italic>0.9735</italic></td>
<td style="vertical-align: top; text-align: left">0.9312</td>
<td style="vertical-align: top; text-align: left">0.9245</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><bold>10</bold></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><italic>0.9934</italic></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.9615</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.9576</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><italic>0.9715</italic></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.9348</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.9234</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="j_info1220_fig_009">
<label>Fig. 5</label>
<caption>
<p>Classification results (in terms of accuracy) of the three HSI data sets using 1NN: (a) Indian Pines; (b) Salinas-A; (c) Pavia.</p>
</caption>
<graphic xlink:href="info1220_g009.jpg"/>
</fig>
<fig id="j_info1220_fig_010">
<label>Fig. 6</label>
<caption>
<p>Classification results (in terms of accuracy) of the three HSI data sets using Random Forest: (a) Indian Pines; (b) Salinas-A; (c) Pavia.</p>
</caption>
<graphic xlink:href="info1220_g010.jpg"/>
</fig>
<fig id="j_info1220_fig_011">
<label>Fig. 7</label>
<caption>
<p>Classification results (in terms of accuracy) of the three HSI data sets using 1NN for ranges from 50 to 2: (a) Indian Pines; (b) Salinas-A; (c) Pavia. Solid lines are to guide the eye.</p>
</caption>
<graphic xlink:href="info1220_g011.jpg"/>
</fig>
<p>KNN is a straightforward classification method; nevertheless, it is one of the most accurate ones (Keogh and Kasetty, <xref ref-type="bibr" rid="j_info1220_ref_028">2002</xref>; Wei and Keogh, <xref ref-type="bibr" rid="j_info1220_ref_047">2006</xref>). A preliminary analysis of KNN is presented in Table <xref rid="j_info1220_tab_001">1</xref> to determine the most suitable value of the number of neighbours (<inline-formula id="j_info1220_ineq_046"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${k^{\prime }}$]]></tex-math></alternatives></inline-formula>). This table shows the accuracy of the classification considering several values of <inline-formula id="j_info1220_ineq_047"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${k^{\prime }}$]]></tex-math></alternatives></inline-formula> (<inline-formula id="j_info1220_ineq_048"><alternatives><mml:math>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>3</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>5</mml:mn></mml:math><tex-math><![CDATA[$1,3,5$]]></tex-math></alternatives></inline-formula>), for every reduced image and both dimensionality reduction methods (eigen-decomposition and SMACOF). The best values are marked in italics. As can be observed in the table, the accuracy decreases as the value of <inline-formula id="j_info1220_ineq_049"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${k^{\prime }}$]]></tex-math></alternatives></inline-formula> increases, and 1NN obtains the best accuracy in all analysed cases. Therefore, KNN with <inline-formula id="j_info1220_ineq_050"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[${k^{\prime }}=1$]]></tex-math></alternatives></inline-formula> (1NN) will be considered hereinafter. An additional advantage of 1NN is that it has no tuning parameters and requires neither a special transformation of the data nor any other preprocessing (Xing <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1220_ref_049">2009</xref>). The Matlab function <italic>fitcknn</italic> has been used to perform KNN.</p>
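<p>The neighbour-vote rule underlying this analysis can be sketched as follows. This is a minimal Python/NumPy illustration rather than the Matlab <italic>fitcknn</italic> call used in the experiments, and the data are synthetic stand-ins for a reduced image and its ground truth:</p>

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k):
    """Classify each test point by a majority vote among its k nearest
    training points (Euclidean distance)."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest points
    votes = y_train[nearest]                 # their class labels
    return np.array([np.bincount(row).argmax() for row in votes])

# Synthetic stand-in: two well-separated classes in a 10-dimensional
# "reduced" feature space (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (100, 10)),
               rng.normal(3.0, 0.3, (100, 10))])
y = np.repeat([0, 1], 100)
train, test = np.arange(0, 200, 2), np.arange(1, 200, 2)

for k in (1, 3, 5):
    acc = np.mean(knn_predict(X[train], y[train], X[test], k) == y[test])
    print(f"k' = {k}: accuracy = {acc:.3f}")
```

<p>On real hyperspectral data the accuracies for different <italic>k</italic> would follow the pattern of Table 1; this sketch only shows the mechanics of the vote.</p>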
<p>Apart from the classifiers used in Li <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>), the Random Forest algorithm has also been considered in our evaluation (Fig. <xref rid="j_info1220_fig_010">6</xref>). The Matlab function <italic>TreeBagger</italic> has been used to perform Random Forest.</p>
<p>The results obtained with SVM, 1NN and Random Forest are shown in Figs. <xref rid="j_info1220_fig_008">4</xref>, <xref rid="j_info1220_fig_009">5</xref> and <xref rid="j_info1220_fig_010">6</xref>, respectively. The figures show the accuracy of the classification of the reduced images with respect to the ground truth images, for both versions over the range from <inline-formula id="j_info1220_ineq_051"><alternatives><mml:math>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>10</mml:mn></mml:math><tex-math><![CDATA[$s=10$]]></tex-math></alternatives></inline-formula> to <inline-formula id="j_info1220_ineq_052"><alternatives><mml:math>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>50</mml:mn></mml:math><tex-math><![CDATA[$s=50$]]></tex-math></alternatives></inline-formula>. These results show that the use of SMACOF improves the accuracy of Isomap for the three tested classifiers. Compared with the version based on the eigen-decomposition process, the SMACOF approach achieves higher accuracies, which results in a better classification of HSI data sets.</p>
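<p>A minimal sketch of this per-image evaluation, written in Python with scikit-learn stand-ins for the Matlab tools named in the text (SVM, 1NN and Random Forest). The synthetic data and the 50/50 split protocol are assumptions for illustration only:</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def evaluate_classifiers(X_reduced, y, seed=0):
    """Return the accuracy of each classifier on one reduced image."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_reduced, y, test_size=0.5, random_state=seed, stratify=y)
    classifiers = {
        "SVM": SVC(kernel="rbf"),
        "1NN": KNeighborsClassifier(n_neighbors=1),
        "Random Forest": RandomForestClassifier(n_estimators=100,
                                                random_state=seed),
    }
    return {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
            for name, clf in classifiers.items()}

# Synthetic stand-in for one reduced image with three classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(i, 0.5, (60, 5)) for i in range(3)])
y = np.repeat([0, 1, 2], 60)
scores = evaluate_classifiers(X, y)
```

<p>Running such a loop once per reduced image and per embedding method mirrors the structure of the comparison in Figs. 4, 5 and 6.</p>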
<p>Once it has been established that the SMACOF approach is more accurate than the eigen-decomposition process, the global precision of Isomap with SMACOF has been tested over a wider range of values of <italic>s</italic> than in Li <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1220_ref_030">2017</xref>), using 1NN (see Fig. <xref rid="j_info1220_fig_011">7</xref>). The figure shows that the classification accuracy is quite high for all the analysed dimensionality reduction cases (from 50 down to 2 dimensions). However, it should be noted that the accuracy slightly decreases over the range from 9 to 2. Thus, we can conclude that SMACOF achieves good accuracy even under significant dimensionality reduction.</p>
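<p>The shape of this sweep (reduce to <italic>s</italic> dimensions, then classify with 1NN) can be sketched as follows. Note that scikit-learn's <italic>Isomap</italic> embeds via the classical eigen-decomposition, not SMACOF, so this only mirrors the experimental protocol on synthetic stand-in data, not the method evaluated in the paper:</p>

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in for a flattened hyperspectral image: 150 pixels x 60 bands,
# with most discriminative information in the first two bands.
X = rng.normal(size=(150, 60))
X[:, :2] *= 3.0
y = (X[:, 0] + X[:, 1] > 0).astype(int)

accuracies = {}
for s in (50, 25, 10, 5, 2):
    # Reduce to s dimensions, then score 1NN by cross-validation.
    Z = Isomap(n_neighbors=10, n_components=s).fit_transform(X)
    clf = KNeighborsClassifier(n_neighbors=1)
    accuracies[s] = cross_val_score(clf, Z, y, cv=3).mean()
```
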
</sec>
<sec id="j_info1220_s_005">
<label>5</label>
<title>Conclusions</title>
<p>In this paper, our aim was to improve the accuracy of the Isomap algorithm in the analysis of hyperspectral images. To achieve this, Isomap has been based on SMACOF, which is the most accurate MDS method, instead of on classical scaling via the eigen-decomposition process.</p>
<p>The proposed version of Isomap based on SMACOF has been experimentally compared to a state-of-the-art version based on the eigen-decomposition process. For this purpose, well-known hyperspectral images acquired from airborne or satellite platforms have been considered (Indian Pines, Salinas-A and Pavia Center). Moreover, a classification process using several classifiers (SVM, KNN and Random Forest) has been carried out to determine the accuracy of every test image with every method (SMACOF or eigen-decomposition). The obtained results show that the use of SMACOF improves the accuracy of Isomap in the reduction of the hyperspectral images in all studied cases.</p>
<p>In this work, only one criterion, the classification accuracy, has been considered when reducing the dimensionality of the hyperspectral images. However, it should be noted that the main drawback of both Isomap and SMACOF is their high consumption of time and computational resources. Therefore, reducing these costs would be very valuable in making their application more approachable. Consequently, our current and future work is focused on the implementation of a GPU version of Isomap based on SMACOF.</p>
</sec>
</body>
<back>
<ack id="j_info1220_ack_001">
<title>Acknowledgements</title>
<p>This work has been supported by the Spanish Science and Technology Commission (CICYT) under contract TIN2015-66680, and by Junta de Andalucía under contract P12-TIC-301, in part financed by the European Regional Development Fund (ERDF). G. Ortega is a fellow of the Spanish ‘Juan de la Cierva Incorporación’ program.</p></ack>
<ref-list id="j_info1220_reflist_001">
<title>References</title>
<ref id="j_info1220_ref_001">
<mixed-citation publication-type="journal"><string-name><surname>Almeida</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Logrado</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Zacca</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Correa</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Poppi</surname>, <given-names>R.</given-names></string-name> (<year>2017</year>). <article-title>Raman hyperspectral imaging in conjunction with independent component analysis as a forensic tool for explosive analysis: the case of an ATM explosion</article-title>. <source>Talanta</source>, <volume>174</volume>, <fpage>628</fpage>–<lpage>632</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_002">
<mixed-citation publication-type="journal"><string-name><surname>Altman</surname>, <given-names>N.</given-names></string-name> (<year>1992</year>). <article-title>An introduction to kernel and nearest-neighbor nonparametric regression</article-title>. <source>The American Statistician</source>, <volume>46</volume>(<issue>3</issue>), <fpage>175</fpage>–<lpage>185</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_003">
<mixed-citation publication-type="journal"><string-name><surname>Asaari</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Mishra</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Mertens</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Dhondt</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Inzé</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Wuyts</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Scheunders</surname>, <given-names>P.</given-names></string-name> (<year>2018</year>). <article-title>Close-range hyperspectral image analysis for the early detection of stress responses in individual plants in a high-throughput phenotyping platform</article-title>. <source>ISPRS Journal of Photogrammetry and Remote Sensing</source>, <volume>138</volume>, <fpage>121</fpage>–<lpage>138</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_004">
<mixed-citation publication-type="other"><string-name><surname>AVIRIS Salinas Valley</surname></string-name> (2019). Rosis pavia university hyperspectral datasets.</mixed-citation>
</ref>
<ref id="j_info1220_ref_005">
<mixed-citation publication-type="chapter"><string-name><surname>Bengio</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Paiement</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Vincent</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Delalleau</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Roux</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Ouimet</surname>, <given-names>M.</given-names></string-name> (<year>2004</year>). <chapter-title>Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering</chapter-title>. In: <source>Advances in Neural Information Processing Systems</source>, pp. <fpage>177</fpage>–<lpage>184</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_006">
<mixed-citation publication-type="journal"><string-name><surname>Bernatavičienė</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Dzemyda</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Marcinkevičius</surname>, <given-names>V.</given-names></string-name> (<year>2007</year>). <article-title>Conditions for optimal efficiency of relative MDS</article-title>. <source>Informatica</source>, <volume>18</volume>(<issue>2</issue>), <fpage>187</fpage>–<lpage>202</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_007">
<mixed-citation publication-type="book"><string-name><surname>Borengasser</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Hungate</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Watkins</surname>, <given-names>R.</given-names></string-name> (<year>2007</year>). <source>Hyperspectral Remote Sensing: Principles and Applications</source>. <publisher-name>CRC Press</publisher-name>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_008">
<mixed-citation publication-type="book"><string-name><surname>Borg</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Groenen</surname>, <given-names>P.</given-names></string-name> (<year>2005</year>). <source>Modern Multidimensional Scaling: Theory and Applications</source>. <publisher-name>Springer Science &amp; Business Media</publisher-name>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_009">
<mixed-citation publication-type="journal"><string-name><surname>Breiman</surname>, <given-names>L.</given-names></string-name> (<year>2001</year>). <article-title>Random forests</article-title>. <source>Machine Learning</source>, <volume>45</volume>(<issue>1</issue>), <fpage>5</fpage>–<lpage>32</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_010">
<mixed-citation publication-type="journal"><string-name><surname>Bruce</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Koger</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>J.</given-names></string-name> (<year>2002</year>). <article-title>Dimensionality reduction of hyperspectral data using discrete wavelet transform feature extraction</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>40</volume>(<issue>10</issue>), <fpage>2331</fpage>–<lpage>2338</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_011">
<mixed-citation publication-type="journal"><string-name><surname>Chang</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Ren</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Chiang</surname>, <given-names>S.</given-names></string-name> (<year>2001</year>). <article-title>Real-time processing algorithms for target detection and classification in hyperspectral imagery</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>39</volume>(<issue>4</issue>), <fpage>760</fpage>–<lpage>768</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_012">
<mixed-citation publication-type="journal"><string-name><surname>Chang</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Lin</surname>, <given-names>C.</given-names></string-name> (<year>2011</year>). <article-title>Libsvm: a library for support vector machines</article-title>. <source>ACM Transactions on Intelligent Systems and Technology (TIST)</source>, <volume>2</volume>(<issue>3</issue>), <fpage>27</fpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_013">
<mixed-citation publication-type="journal"><string-name><surname>Cortes</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Vapnik</surname>, <given-names>V.</given-names></string-name> (<year>1995</year>). <article-title>Support-vector networks</article-title>. <source>Machine Learning</source>, <volume>20</volume>(<issue>3</issue>), <fpage>273</fpage>–<lpage>297</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_014">
<mixed-citation publication-type="other"><string-name><surname>De Leeuw</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Mair</surname>, <given-names>P.</given-names></string-name> (2011). <italic>Multidimensional Scaling Using Majorization: Smacof in R</italic>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_015">
<mixed-citation publication-type="journal"><string-name><surname>Deng</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Mahadevan</surname>, <given-names>S.</given-names></string-name> (<year>2012</year>). <article-title>Fuzzy Dijkstra algorithm for shortest path problem under uncertain environment</article-title>. <source>Applied Soft Computing</source>, <volume>12</volume>(<issue>3</issue>), <fpage>1231</fpage>–<lpage>1237</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_016">
<mixed-citation publication-type="journal"><string-name><surname>Dijkstra</surname>, <given-names>E.</given-names></string-name> (<year>1959</year>). <article-title>A note on two problems in connexion with graphs</article-title>. <source>Numerische Mathematik</source>, <volume>1</volume>(<issue>1</issue>), <fpage>269</fpage>–<lpage>271</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_017">
<mixed-citation publication-type="book"><string-name><surname>Dzemyda</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Kurasova</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Žilinskas</surname>, <given-names>J.</given-names></string-name> (<year>2013</year>). <source>Multidimensional Data Visualization: Methods and Applications</source>. <publisher-name>Springer</publisher-name>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_018">
<mixed-citation publication-type="chapter"><string-name><surname>Ekanayake</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Gunarathne</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Bae</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Qiu</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Fox</surname>, <given-names>G.</given-names></string-name> (<year>2010</year>). <chapter-title>Twister: a runtime for iterative mapreduce</chapter-title>. In: <source>Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing</source>. <publisher-name>ACM</publisher-name>, pp. <fpage>810</fpage>–<lpage>818</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_019">
<mixed-citation publication-type="journal"><string-name><surname>Filatovas</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Podkopaev</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Kurasova</surname>, <given-names>O.</given-names></string-name> (<year>2015</year>). <article-title>A visualization technique for accessing solution pool in interactive methods of multiobjective optimization</article-title>. <source>International Journal of Computers Communications &amp; Control</source>, <volume>10</volume>(<issue>4</issue>), <fpage>508</fpage>–<lpage>519</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_020">
<mixed-citation publication-type="journal"><string-name><surname>Fletcher</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Galiauskas</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Zilinskas</surname>, <given-names>J.</given-names></string-name> (<year>2014</year>). <article-title>Quadratic programming with complementarity constraints for multidimensional scaling with city-block distances</article-title>. <source>Baltic Journal of Modern Computing</source>, <volume>2</volume>(<issue>4</issue>), <fpage>248</fpage>–<lpage>259</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_021">
<mixed-citation publication-type="book"><string-name><surname>Granato</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Ares</surname>, <given-names>G.</given-names></string-name> (<year>2014</year>). <source>Mathematical and Statistical Methods in Food Science and Technology</source>. <publisher-name>John Wiley &amp; Sons</publisher-name>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_022">
<mixed-citation publication-type="journal"><string-name><surname>Green</surname>, <given-names>P.</given-names></string-name> (<year>1975</year>). <article-title>Marketing applications of MDS: assessment and outlook</article-title>. <source>The Journal of Marketing</source>, <fpage>24</fpage>–<lpage>31</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_023">
<mixed-citation publication-type="journal"><string-name><surname>Groenen</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Mathar</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Heiser</surname>, <given-names>W.</given-names></string-name> (<year>1995</year>). <article-title>The majorization approach to multidimensional scaling for Minkowski distances</article-title>. <source>Journal of Classification</source>, <volume>12</volume>(<issue>1</issue>), <fpage>3</fpage>–<lpage>19</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_024">
<mixed-citation publication-type="journal"><string-name><surname>Gupta</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Ray</surname>, <given-names>I.</given-names></string-name> (<year>2015</year>). <article-title>Cryptographically significant MDS matrices based on circulant and circulant-like matrices for lightweight applications</article-title>. <source>Cryptography and Communications</source>, <volume>7</volume>(<issue>2</issue>), <fpage>257</fpage>–<lpage>287</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_025">
<mixed-citation publication-type="journal"><string-name><surname>Harsanyi</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Chang</surname>, <given-names>C.</given-names></string-name> (<year>1994</year>). <article-title>Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>32</volume>(<issue>4</issue>), <fpage>779</fpage>–<lpage>785</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_026">
<mixed-citation publication-type="other"><string-name><surname>Hsu</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Chang</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Lin</surname>, <given-names>C.</given-names></string-name> (2003). A practical guide to support vector classification, pp. 1–16.</mixed-citation>
</ref>
<ref id="j_info1220_ref_027">
<mixed-citation publication-type="journal"><string-name><surname>Ingram</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Munzner</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Olano</surname>, <given-names>M.</given-names></string-name> (<year>2009</year>). <article-title>Glimmer: multilevel MDS on the GPU</article-title>. <source>IEEE Transactions on Visualization and Computer Graphics</source>, <volume>15</volume>(<issue>2</issue>), <fpage>249</fpage>–<lpage>261</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_028">
<mixed-citation publication-type="chapter"><string-name><surname>Keogh</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Kasetty</surname>, <given-names>S.</given-names></string-name> (<year>2002</year>). <chapter-title>On the need for time series data mining benchmarks: a survey and empirical demonstration</chapter-title>. In: <source>Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>, pp. <fpage>102</fpage>–<lpage>111</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_029">
<mixed-citation publication-type="other"><string-name><surname>Leavesley</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Deal</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Hill</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Martin</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Lall</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Lopez</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Boudreaux</surname>, <given-names>C.</given-names></string-name> (<year>2018</year>). Colorectal cancer detection by hyperspectral imaging using fluorescence excitation scanning. <source>Optical Biopsy XVI: Toward Real-Time Spectroscopic Imaging and Diagnosis</source>, <volume>10489</volume>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_030">
<mixed-citation publication-type="journal"><string-name><surname>Li</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Du</surname>, <given-names>B.</given-names></string-name> (<year>2017</year>). <article-title>GPU parallel implementation of isometric mapping for hyperspectral classification</article-title>. <source>IEEE Geoscience and Remote Sensing Letters</source>, <volume>14</volume>(<issue>9</issue>), <fpage>1532</fpage>–<lpage>1536</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_031">
<mixed-citation publication-type="chapter"><string-name><surname>Lim</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>de Heras</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Sarni</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Thalmann</surname>, <given-names>D.</given-names></string-name> (<year>2003</year>). <chapter-title>Planar arrangement of high-dimensional biomedical data sets by Isomap coordinates</chapter-title>. In: <source>16th IEEE Symposium Computer-Based Medical Systems</source>, pp. <fpage>50</fpage>–<lpage>55</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_032">
<mixed-citation publication-type="journal"><string-name><surname>Mairal</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Bach</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Ponce</surname>, <given-names>P.</given-names></string-name> (<year>2014</year>). <article-title>Sparse modeling for image and vision processing</article-title>. <source>Foundations and Trends in Computer Graphics and Vision</source>, <volume>8</volume>(<issue>2–3</issue>), <fpage>85</fpage>–<lpage>283</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_033">
<mixed-citation publication-type="journal"><string-name><surname>Manolakis</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Marden</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Shaw</surname>, <given-names>G.</given-names></string-name> (<year>2003</year>). <article-title>Hyperspectral image processing for automatic target detection applications</article-title>. <source>Lincoln Laboratory Journal</source>, <volume>14</volume>(<issue>1</issue>), <fpage>79</fpage>–<lpage>116</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_034">
<mixed-citation publication-type="journal"><string-name><surname>Medvedev</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Kurasova</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Bernatavičienė</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Treigys</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Marcinkevičius</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Dzemyda</surname>, <given-names>G.</given-names></string-name> (<year>2017</year>). <article-title>A new web-based solution for modelling data mining processes</article-title>. <source>Simulation Modelling Practice and Theory</source>, <volume>76</volume>, <fpage>34</fpage>–<lpage>46</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_035">
<mixed-citation publication-type="other"><string-name><surname>Aviris</surname>, <given-names>N.W.</given-names></string-name> (2012). <italic>Indiana's Indian Pines 1992 data set</italic>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_036">
<mixed-citation publication-type="other"><string-name><surname>Orts</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Filatovas</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Ortega</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Kurasova</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Garzón</surname>, <given-names>E.M.</given-names></string-name> (<year>2018</year>). Improving the energy efficiency of SMACOF for multidimensional scaling on modern architectures. <source>The Journal of Supercomputing</source>, 1–13.</mixed-citation>
</ref>
<ref id="j_info1220_ref_037">
<mixed-citation publication-type="journal"><string-name><surname>Plaza</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Martinez</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Plaza</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Perez</surname>, <given-names>R.</given-names></string-name> (<year>2005</year>). <article-title>Dimensionality reduction and classification of hyperspectral image data using sequences of extended morphological transformations</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>43</volume>(<issue>3</issue>), <fpage>466</fpage>–<lpage>479</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_038">
<mixed-citation publication-type="chapter"><string-name><surname>Pulkkinen</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Roos</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Myllymaki</surname>, <given-names>P.</given-names></string-name> (<year>2011</year>). <chapter-title>Semi-supervised learning for wlan positioning</chapter-title>. In: <source>International Conference on Artificial Neural Networks</source>. <publisher-name>Springer</publisher-name>, pp. <fpage>355</fpage>–<lpage>362</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_039">
<mixed-citation publication-type="journal"><string-name><surname>Rizzo</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Carpentieri</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Motta</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Storer</surname>, <given-names>J.</given-names></string-name> (<year>2005</year>). <article-title>Low-complexity lossless compression of hyperspectral imagery via linear prediction</article-title>. <source>IEEE Signal Processing Letters</source>, <volume>12</volume>(<issue>2</issue>), <fpage>138</fpage>–<lpage>141</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_040">
<mixed-citation publication-type="other"><string-name><surname>Rosenberg</surname>, <given-names>S.</given-names></string-name> (2014). The method of sorting in multivariate research with applications selected from cognitive psychology and person perception. <italic>Multivariate Applications in the Social Sciences</italic>, 123–148.</mixed-citation>
</ref>
<ref id="j_info1220_ref_041">
<mixed-citation publication-type="journal"><string-name><surname>Sibson</surname>, <given-names>R.</given-names></string-name> (<year>1979</year>). <article-title>Studies in the robustness of multidimensional scaling: perturbational analysis of classical scaling</article-title>. <source>Journal of the Royal Statistical Society, Series B (Methodological)</source>, <fpage>217</fpage>–<lpage>229</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_042">
<mixed-citation publication-type="other"><string-name><surname>Tay</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Hyun</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Oh</surname>, <given-names>S.</given-names></string-name> (2014). A machine learning approach for specification of spinal cord injuries using fractional anisotropy values obtained from diffusion tensor images. <italic>Computational and Mathematical Methods in Medicine</italic>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_043">
<mixed-citation publication-type="journal"><string-name><surname>Tenenbaum</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>De Silva</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Langford</surname>, <given-names>J.</given-names></string-name> (<year>2000</year>). <article-title>A global geometric framework for nonlinear dimensionality reduction</article-title>. <source>Science</source>, <volume>290</volume>(<issue>5500</issue>), <fpage>2319</fpage>–<lpage>2323</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_044">
<mixed-citation publication-type="journal"><string-name><surname>Virlet</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Sabermanesh</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Sadeghi-Tehran</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Hawkesford</surname>, <given-names>M.</given-names></string-name> (<year>2017</year>). <article-title>Field Scanalyzer: An automated robotic field phenotyping platform for detailed crop monitoring</article-title>. <source>Functional Plant Biology</source>, <volume>44</volume>(<issue>1</issue>), <fpage>143</fpage>–<lpage>153</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_045">
<mixed-citation publication-type="journal"><string-name><surname>Wang</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Chang</surname>, <given-names>C.</given-names></string-name> (<year>2006</year>). <article-title>Independent component analysis-based dimensionality reduction with applications in hyperspectral image analysis</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>44</volume>(<issue>6</issue>), <fpage>1586</fpage>–<lpage>1600</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_046">
<mixed-citation publication-type="journal"><string-name><surname>Wang</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Choo</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Huang</surname>, <given-names>F.</given-names></string-name> (<year>2017</year>). <article-title>Spectral-spatial multi-feature-based deep learning for hyperspectral remote sensing image classification</article-title>. <source>Soft Computing</source>, <volume>21</volume>(<issue>1</issue>), <fpage>213</fpage>–<lpage>221</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_047">
<mixed-citation publication-type="chapter"><string-name><surname>Wei</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Keogh</surname>, <given-names>E.</given-names></string-name> (<year>2006</year>). <chapter-title>Semi-supervised time series classification</chapter-title>. In: <source>KDD 2006</source>, pp. <fpage>748</fpage>–<lpage>753</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_048">
<mixed-citation publication-type="chapter"><string-name><surname>Wu</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Chan</surname>, <given-names>K.</given-names></string-name> (<year>2004</year>). <chapter-title>An extended Isomap algorithm for learning multi-class manifold</chapter-title>. In: <source>Proceedings of the 2004 IEEE International Conference on Machine Learning and Cybernetics</source>, Vol. <volume>6</volume>, pp. <fpage>3429</fpage>–<lpage>3433</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_049">
<mixed-citation publication-type="chapter"><string-name><surname>Xing</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Pei</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Yu</surname>, <given-names>P.S.</given-names></string-name> (<year>2009</year>). <chapter-title>Early prediction on time series: a nearest neighbor approach</chapter-title>. In: <source>IJCAI</source>, pp. <fpage>1297</fpage>–<lpage>1302</lpage>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_050">
<mixed-citation publication-type="chapter"><string-name><surname>Yang</surname>, <given-names>M.</given-names></string-name> (<year>2002</year>a). <chapter-title>Face recognition using extended Isomap</chapter-title>. In: <source>Proceedings of the 2002 IEEE International Conference</source>.</mixed-citation>
</ref>
<ref id="j_info1220_ref_051">
<mixed-citation publication-type="chapter"><string-name><surname>Yang</surname>, <given-names>M.</given-names></string-name> (<year>2002</year>b). <chapter-title>Extended Isomap for pattern classification</chapter-title>. In: <source>Proceedings of the Eighteenth National Conference on Artificial Intelligence and Fourteenth Conference on Innovative Applications of Artificial Intelligence</source>, pp. <fpage>224</fpage>–<lpage>229</lpage>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>