
Seminar Announcement


Multi-scale Smoothing in Nonparametric Classification

Abstract

The performance of a nonparametric classifier depends on the values of its associated smoothing parameters. In practice, these parameters are estimated by cross-validation techniques, and the estimated level of smoothing is used for classifying all future observations. However, in a classification problem, the optimum value of the smoothing parameter for a population density estimate depends not only on the population itself but may also vary significantly with the competing population densities and their prior probabilities. Moreover, a good choice of smoothing parameter depends on the specific observation to be classified, in addition to the entire training data set. Therefore, instead of using a single value of the smoothing parameter, it is more useful in practice to consider different scales of smoothing for each class. In this talk, we describe such a multi-scale approach, along with a graphical device, that leads to a more informative classification algorithm. In the presence of several competing classes, it often becomes computationally difficult to use cross-validation techniques for simultaneous estimation of multiple smoothing parameters. In such cases, we use a pairwise classification technique, which not only reduces the computational burden but also provides the flexibility of using different smoothing parameters for a class density estimate when it is compared with different competing class densities. This multi-scale smoothing technique is demonstrated using kernel density estimates and nearest neighbor classifiers. In both cases, we propose a simple aggregation technique that leads to better performance, in terms of both visualization and misclassification rates, on some well-known benchmark data sets. This is joint work with Prof. Probal Chaudhuri, Prof. Debasis Sengupta, and Prof. C. A. Murthy of the Indian Statistical Institute, Calcutta.
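As a rough illustration of the idea behind class-specific smoothing (a minimal NumPy sketch, not the speaker's actual algorithm), the following classifier estimates each class density with a Gaussian kernel and applies the Bayes rule, where each class is allowed its own bandwidth. The data, bandwidth values, and priors below are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def gaussian_kde(x, sample, h):
    # Gaussian kernel density estimate at point x, bandwidth h
    d = (x - sample) / h
    return np.mean(np.exp(-0.5 * d**2)) / (h * np.sqrt(2 * np.pi))

def classify(x, samples, bandwidths, priors):
    # Bayes rule: pick the class maximizing prior * estimated density.
    # Giving each class its own bandwidth is the simplest instance of
    # the class-specific smoothing the abstract argues for.
    scores = [p * gaussian_kde(x, s, h)
              for s, h, p in zip(samples, bandwidths, priors)]
    return int(np.argmax(scores))

# Toy 1-D example (illustrative data only):
rng = np.random.default_rng(0)
class0 = rng.normal(0.0, 1.0, 200)   # diffuse class -> larger bandwidth
class1 = rng.normal(3.0, 0.3, 200)   # concentrated class -> smaller bandwidth
label = classify(2.9, [class0, class1],
                 bandwidths=[0.5, 0.1], priors=[0.5, 0.5])
```

A single cross-validated bandwidth would force both classes onto one scale; the point of the multi-scale approach is that the diffuse and concentrated classes above are better served by different amounts of smoothing, and that the right amount can also depend on where the query point `x` falls.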
