<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article"><front><journal-meta><journal-id journal-id-type="publisher-id">INFORMATICA</journal-id><journal-title-group><journal-title>Informatica</journal-title></journal-title-group><issn pub-type="epub">0868-4952</issn><issn pub-type="ppub">0868-4952</issn><publisher><publisher-name>VU</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">INF43-410</article-id><article-id pub-id-type="doi">10.3233/INF-1993-43-410</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research article</subject></subj-group></article-categories><title-group><article-title>On shape of pattern error function, initializations and intrinsic dimensionality in ANN classifier design</article-title></title-group><contrib-group><contrib contrib-type="Author"><name><surname>Raudys</surname><given-names>Šarūnas</given-names></name><xref ref-type="aff" rid="j_INFORMATICA_aff_000"/></contrib><aff id="j_INFORMATICA_aff_000">Institute of Mathematics and Informatics, 2600 Vilnius, Akademijos 8t.4, Lithuania</aff></contrib-group><pub-date pub-type="epub"><day>01</day><month>01</month><year>1993</year></pub-date><volume>4</volume><issue>3-4</issue><fpage>360</fpage><lpage>383</lpage><abstract><p>An analytical equation for the generalization error of the minimum empirical error classifier is derived for the case when the true classes are spherically Gaussian. It is compared with the generalization error of a mean squared error classifier – the standard Fisher linear discriminant function. In the case of spherically distributed classes, the generalization error depends on the distance between the classes and the number of training samples. 
It depends on the intrinsic dimensionality of the data only via the initialization of the weight vector. If the initialization is successful, the dimensionality does not affect the generalization error. It is concluded that artificial neural nets are advantageous for classifying patterns in a changing environment, when the intrinsic dimensionality of the data is low, or when the number of training sample vectors is very large.</p></abstract><kwd-group><label>Keywords</label><kwd>feed forward neural nets</kwd><kwd>training sample size</kwd><kwd>generalization</kwd><kwd>intrinsic dimensionality</kwd><kwd>initialization</kwd><kwd>insufficient learning</kwd></kwd-group></article-meta></front></article>