The aim of this paper is to present a strategy by which a new philosophy for pattern classification, namely that pertaining to Dissimilarity-Based Classifiers (DBCs), can be efficiently implemented. This methodology, proposed by Duin and his co-authors (see [3], [4], [5], [6], [8]), defines classifiers between the classes that are based not on the feature measurements of the individual patterns, but rather on a suitable dissimilarity measure between them. The drawback of this strategy, however, is the need to compute, store and process the inter-pattern dissimilarities for all the training samples; consequently, the accuracy of the classifier designed in the dissimilarity space depends on the methods used to achieve this. In this paper, we suggest a novel strategy to enhance the computation for all families of DBCs. Rather than compute, store and process the DBC based on the entire data set, we advocate that the training set first be reduced to a smaller representative subset. Moreover, rather than determine this subset by random selection, clustering, or similar methods, we advocate the use of a Prototype Reduction Scheme (PRS), whose output yields the points to be utilized by the DBC. Apart from utilizing PRSs, we also propose simultaneously employing the Mahalanobis distance as the dissimilarity-measurement criterion to increase the DBC's classification accuracy. Our experimental results demonstrate that the proposed mechanism increases the classification accuracy when compared with "conventional" approaches, for both real-life and artificial data sets.

Additional Metadata
Series: Lecture Notes in Computer Science
Kim, S.-W. (Sang-Woon), & Oommen, J. (2006). On optimizing dissimilarity-based classification using prototype reduction schemes. In Lecture Notes in Computer Science.