In machine learning research and practice, multiclass classification algorithms reign supreme. Their fundamental requirement is the availability of data from all known categories for inducing effective classifiers. Unfortunately, data from real-world domains sometimes do not satisfy this requirement, and researchers turn to methods such as sampling to make the data more conducive to classification. However, there are scenarios in which even such explicit methods for rectifying distributions fail. In such cases, 1-class classification algorithms become the practical alternative. Unfortunately, domain complexity severely impacts their ability to produce effective classifiers. This article addresses that issue and develops a strategy that enables 1-class classification over complex domains. In particular, we introduce the notion of learning along the lines of underlying domain concepts: an important source of complexity in domains is the presence of subconcepts, and by learning over them explicitly rather than over the domain as a whole, we can produce powerful 1-class classification systems. The level of knowledge regarding these subconcepts naturally varies by domain, and thus we develop 3 distinct methodologies that take the amount of available domain knowledge into account. We demonstrate these over 3 real-world domains.
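The subconcept idea can be illustrated with a minimal sketch (not the authors' implementation): partition the single training class into clusters, here using k-means as a stand-in for subconcept discovery with an assumed cluster count, fit one 1-class classifier per cluster, and accept a query point if any subconcept model accepts it.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Synthetic positive class with two well-separated subconcepts.
X = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.3, size=(200, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.3, size=(200, 2)),
])

# Step 1: discover subconcepts (assumed number of clusters k=2).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: fit one 1-class model per subconcept rather than one
# model over the whole domain.
models = [OneClassSVM(nu=0.05, gamma="scale").fit(X[labels == c])
          for c in range(2)]

def predict(points):
    """Accept a point if any subconcept model labels it an inlier (+1)."""
    votes = np.column_stack([m.predict(points) for m in models])
    return np.where((votes == 1).any(axis=1), 1, -1)

# Points near either subconcept should be accepted; a point between
# the subconcepts (which a single global model might swallow) is rejected.
queries = np.array([[0.1, -0.1], [5.1, 4.9], [2.5, 2.5]])
print(predict(queries))
```

A single 1-class model fit over both subconcepts at once would have to cover the empty region between them; the per-subconcept ensemble keeps each decision boundary tight, which is the intuition behind learning over subconcepts explicitly.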

Additional Metadata
Keywords 1-class classification, anomaly detection, classification, machine learning
Persistent URL dx.doi.org/10.1111/coin.12128
Journal Computational Intelligence
Citation
Sharma, S. (Shiven), Somayaji, A., & Japkowicz, N. (Nathalie). (2018). Learning over subconcepts: Strategies for 1-class classification. Computational Intelligence, 34(2), 440–467. doi:10.1111/coin.12128