Model assessment - Archive ouverte HAL
Conference Papers, Year: 2006

Model assessment

Abstract

In data mining and machine learning, models are built from data and provide insights for understanding data (unsupervised classification) or for making predictions (supervised learning) (Giudici, 2003; Hand, 2000). The scientific status of such models therefore differs from the classical view, in which a model is a simplified representation of reality provided by an expert in the field. In most data mining applications a good model is one that not only fits the data but also gives good predictions, even if it is not interpretable (Vapnik, 2006). In this context, model validation and model choice require specific indices and approaches. Penalized likelihood measures (AIC, BIC, etc.) may not be pertinent when there is no simple distributional assumption on the data and/or for models such as regularized regression, SVM and many others where parameters are constrained. Complexity measures like the VC-dimension are better suited, but very difficult to estimate. In supervised classification, ROC curves and the AUC are commonly used (Saporta & Niang, 2006). Models should be compared on validation (hold-out) sets, but resampling is necessary in order to obtain confidence intervals.
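As an illustration of the last two points (not taken from the paper itself), the AUC can be computed as the Mann-Whitney probability that a random positive is scored above a random negative, and a percentile bootstrap over the hold-out set gives a confidence interval for it. The sketch below uses only the Python standard library; all function names are ours, and the bootstrap settings (1000 resamples, 95% level) are illustrative defaults.

```python
import random

def auc(y_true, scores):
    """AUC as the Mann-Whitney statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen
    negative, counting ties as 1/2."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUC,
    resampling the hold-out set with replacement."""
    rng = random.Random(seed)
    n = len(y_true)
    stats = []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        yb = [y_true[i] for i in idx]
        if 0 < sum(yb) < n:  # resample must contain both classes
            stats.append(auc(yb, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy hold-out set: 2 negatives, 2 positives
y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
print(auc(y, s))                 # 0.75
print(bootstrap_auc_ci(y, s))    # (lower, upper) bounds for the AUC
```

On realistic hold-out sets the interval width shrinks roughly with the square root of the sample size, which is why a single point estimate of the AUC can be misleading when comparing models on small validation samples.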
Main file: RC1059.pdf (262.89 KB), files produced by the author(s)

Dates and versions

hal-02507614, version 1 (13-03-2020)


Cite

Gilbert Saporta, Ndèye Niang. Model assessment. KNEMO'06 Knowledge Extraction and Modeling, Jan 2006, Capri, Italy. ⟨hal-02507614⟩

Collections

CNAM CEDRIC-CNAM