Novelty detection (also known as one-class classification or outlier detection) is typically employed for analysing signals when few examples of "abnormal" data are available, such that a multi-class approach cannot be taken. Multivariate, multimodal density estimation can be used to construct a model of the distribution of normal data. However, setting a decision boundary such that test data can be classified "normal" or "abnormal" with respect to the model of normality is typically performed using heuristic methods, such as thresholding the unconditional data density, p(x). This paper describes two principled methods of setting a decision boundary based on extreme value statistics: (i) a numerical method that produces an "optimal" solution, and (ii) an analytical approximation in closed form. We compare the performance of both approaches using large datasets from biomedical patient monitoring and jet engine health monitoring, and conclude that the analytical approach performs novelty detection as successfully as the "optimal" numerical approach, both of which outperform the conventional method. © 2009 IEEE.
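To make the contrast concrete, the sketch below compares the conventional heuristic (thresholding the unconditional density p(x)) with an extreme-value-style boundary. It is a minimal illustration, not the paper's exact formulation: the kernel bandwidth, block size, and quantile are assumed values, and the Gumbel fit uses a simple method-of-moments estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 2))          # 2-D "normal" training data
h = 0.3                                    # kernel bandwidth (assumed)

def density(x):
    """Gaussian kernel density estimate p(x) over the training set."""
    d2 = np.sum((train - np.asarray(x, float)) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * h * h))) / (2 * np.pi * h * h)

# Conventional heuristic: threshold p(x) at a low percentile of the
# densities observed on the training data.
p_train = np.array([density(x) for x in train])
density_threshold = np.percentile(p_train, 1)

# EVT-inspired boundary (sketch): model block maxima of the novelty
# score -log p(x) with a Gumbel distribution, fitted by method of
# moments, and place the decision boundary at its 99% quantile.
scores = -np.log(p_train)
block_max = scores.reshape(50, 10).max(axis=1)   # 50 blocks of 10
scale = np.sqrt(6) * block_max.std() / np.pi
loc = block_max.mean() - 0.5772 * scale          # Euler-Mascheroni constant
score_threshold = loc - scale * np.log(-np.log(0.99))

def is_abnormal(x):
    """Flag x as abnormal under the EVT-style boundary."""
    p = max(density(x), 1e-300)                  # guard against log(0)
    return -np.log(p) > score_threshold

print(is_abnormal([8.0, 8.0]))   # far from the training mass -> True
print(is_abnormal([0.0, 0.0]))   # central point -> False
```

The key difference is that the conventional threshold is an arbitrary percentile of p(x), whereas the EVT-style boundary is derived from a distributional model of how extreme the novelty score of normal data is expected to become.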

Conference paper

Publication Date
13 - 16