Various prototype reduction schemes have been reported in the literature. Foremost among these are the prototypes for nearest neighbor (PNN), vector quantization (VQ), and support vector machines (SVM) methods. In this paper, we show that these schemes can be enhanced by the introduction of a post-processing phase that is related, but not identical, to the LVQ3 process. Although LVQ3 post-processing has been reported for the SOM and the basic VQ methods, we show here that an analogous philosophy can be used in conjunction with the SVM and PNN rules. Our essential modification to LVQ3 first entails partitioning the respective training sets into two subsets, called the Placement set and the Optimizing set, which are instrumental in determining the LVQ3 parameters. Such a partitioning is novel to the literature. Our experimental results demonstrate that the proposed enhancement yields the best prototype condensation scheme reported to date, on both artificial data sets and real-life data sets.
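The Placement/Optimizing split and the LVQ3 fine-tuning described above can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the toy class-mean "reduction", the split ratio, and the parameter values (alpha, epsilon, window width w) are all illustrative assumptions; only the LVQ3 update rule itself follows Kohonen's standard formulation.

```python
# Hedged sketch: post-process reduced prototypes with LVQ3-style updates.
# Assumptions (not from the paper): a 50/50 Placement/Optimizing split,
# class means as the stand-in reduction scheme, and the parameter values.
import numpy as np

def lvq3_step(protos, proto_labels, x, y, alpha=0.05, epsilon=0.3, w=0.2):
    """One LVQ3 update for a single sample (x, y)."""
    d = np.linalg.norm(protos - x, axis=1)
    i, j = np.argsort(d)[:2]                    # two nearest prototypes
    di, dj = d[i], d[j]
    s = (1.0 - w) / (1.0 + w)                   # Kohonen's window threshold
    in_window = min(di / (dj + 1e-12), dj / (di + 1e-12)) > s
    ci, cj = proto_labels[i], proto_labels[j]
    if ci == y and cj == y:
        # Both nearest prototypes correct: weak attraction toward x.
        protos[i] += epsilon * alpha * (x - protos[i])
        protos[j] += epsilon * alpha * (x - protos[j])
    elif in_window and (ci == y) != (cj == y):
        # One correct, one wrong, and x falls in the window:
        # attract the correct prototype, repel the wrong one.
        for k, c in ((i, ci), (j, cj)):
            sign = 1.0 if c == y else -1.0
            protos[k] += sign * alpha * (x - protos[k])
    return protos

# Partition the training data: a Placement set positions the initial
# prototypes; an Optimizing set drives the LVQ3 fine-tuning.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)
half = len(X) // 2
place_X, place_y = X[:half], y[:half]
opt_X, opt_y = X[half:], y[half:]

# Toy "reduction": one class-mean prototype per class from the Placement set.
protos = np.array([place_X[place_y == c].mean(axis=0) for c in (0, 1)])
proto_labels = np.array([0, 1])

# LVQ3 fine-tuning pass over the Optimizing set.
for x_s, y_s in zip(opt_X, opt_y):
    protos = lvq3_step(protos, proto_labels, x_s, y_s)
```

In this sketch the reduced prototypes are only repositioned, never added or removed, which is the sense in which LVQ3 acts as a post-processing phase on the output of PNN, VQ, or SVM-based reduction.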

Additional Metadata
Keywords CNN (condensed nearest neighbor), LVQ (learning vector quantization), PNN (prototypes for nearest neighbor classifier), Prototype reduction, SVM (support vector machines), VQ (vector quantization)
Journal Pattern Recognition
Kim, S.-W. (Sang-Woon), & Oommen, J. (2003). Enhancing prototype reduction schemes with LVQ3-type algorithms. Pattern Recognition, 36(5), 1083–1093. doi:10.1016/S0031-3203(02)00115-2