Journal: Informatica
Volume 22, Issue 1 (2011), pp. 43–56
Abstract
It is well known that when studying large datasets in which influential observations or outliers may be present, regression models based on the Maximum Likelihood criterion are likely to be unstable. In this paper we investigate the use of the Minimum Density Power Divergence criterion as a practical tool for parametric regression model building. More precisely, we suggest a procedure relying on an index of similarity between estimated regression models and on a Monte Carlo significance test that allows one to check for the presence of outliers in the data and thus to choose the best tuning constant for the Minimum Density Power Divergence estimators. The theory is outlined, numerical examples covering several experimental scenarios are provided, and the main results of a simulation study assessing the performance of the procedure are reported.
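For context, the criterion referred to above is the density power divergence of Basu et al. (1998); recalled here as a sketch, with $g$ the true (data-generating) density, $f$ the model density, and $\alpha > 0$ the tuning constant whose selection the paper addresses:

```latex
d_\alpha(g, f) = \int \left\{ f^{1+\alpha}(z)
  - \left(1 + \tfrac{1}{\alpha}\right) g(z)\, f^{\alpha}(z)
  + \tfrac{1}{\alpha}\, g^{1+\alpha}(z) \right\} dz, \qquad \alpha > 0.
```

The Minimum Density Power Divergence estimator minimizes $d_\alpha$ between the empirical data and the parametric model; as $\alpha \to 0$ the criterion approaches the Kullback–Leibler divergence, recovering Maximum Likelihood, while larger $\alpha$ increases robustness to outliers at the cost of efficiency. This trade-off is why choosing the tuning constant is the central practical issue the proposed procedure tackles.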