Improving Lazy Attribute Selection
Keywords:
Attribute Selection, Classification, Lazy Learning
Abstract
Attribute selection is a data preprocessing step that aims at identifying the attributes relevant to a target data mining task - in this article, the classification task. Previously, we proposed a new attribute selection strategy, based on a lazy learning approach, which postpones the identification of relevant attributes until an instance is submitted for classification. Experimental results showed the effectiveness of the technique: in most cases it improved classification accuracy when compared with the analogous eager attribute selection approach performed as a data preprocessing step. However, in that approach the performance of the classifier depends on the number of attributes selected, which is a user-defined parameter. In practice, it may be difficult to choose a proper value for this parameter, that is, the value that produces the best performance for the classification task. In this article, aiming to overcome this drawback, we propose two approaches to be used in combination with the previously proposed lazy attribute selection technique: one that tries to identify, in a wrapper-based manner, the appropriate number of attributes to be selected, and another that combines, in a voting approach, the results obtained with different numbers of attributes. Experimental results show the effectiveness of the proposed techniques. The assessment of these approaches confirms that the lazy learning paradigm can be compatible with traditional methods and appropriate for a large number of applications.
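To make the voting idea concrete, below is a minimal sketch, not the authors' implementation, of lazy per-instance attribute selection combined with voting over several candidate values of k. The relevance score used here (class entropy among training instances that share the test instance's value for an attribute), the 1-NN base classifier, the choice of k values, and all function names are illustrative assumptions.

```python
# Sketch of lazy attribute selection with voting over several values of k
# (number of selected attributes). Score, base classifier, and names are
# illustrative assumptions, not the original article's exact formulation.
import numpy as np
from collections import Counter

def value_conditioned_entropy(X_train, y_train, attr, value):
    """Class entropy restricted to training rows whose attribute `attr`
    equals `value`; lower entropy means the value is more discriminative."""
    mask = X_train[:, attr] == value
    if not mask.any():
        return np.log2(len(set(y_train)) + 1)  # penalize unseen values
    _, counts = np.unique(y_train[mask], return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def lazy_select(X_train, y_train, x, k):
    """Rank attributes for this particular instance x and keep the k best."""
    scores = [value_conditioned_entropy(X_train, y_train, a, x[a])
              for a in range(X_train.shape[1])]
    return np.argsort(scores)[:k]  # k most discriminative attributes for x

def classify_1nn(X_train, y_train, x, attrs):
    """1-NN (Hamming distance on categorical data) over the selected attributes."""
    d = (X_train[:, attrs] != x[attrs]).sum(axis=1)
    return y_train[int(np.argmin(d))]

def lazy_voting_classify(X_train, y_train, x, k_values=(2, 4, 6)):
    """Combine, by majority vote, the predictions obtained with different
    numbers of selected attributes, instead of fixing a single k."""
    votes = [classify_1nn(X_train, y_train, x,
                          lazy_select(X_train, y_train, x, k))
             for k in k_values]
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(60, 8))   # synthetic categorical training data
    y = X[:, 0]                            # class depends on attribute 0 only
    x_new = X[10]
    print(lazy_voting_classify(X, y, x_new, k_values=(1, 3, 5)))
```

Under the same assumptions, the wrapper-based variant would instead estimate, for each submitted instance, the single value of k that maximizes an internal accuracy estimate (for example, via cross-validation on the training data) and classify using only that k, rather than voting over several values.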
Published
2011-09-13
Section
SBBD Articles