
Seminar Announcement


High-dimensional modeling via nonconcave penalized likelihood

  • 2001-03-01 (Thu.), 11:00 AM
  • Lounge, 2nd Floor
  • Prof. Jianqing Fan
  • Department of Statistics, The Chinese University of Hong Kong

Abstract

Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many procedures in use are stepwise selection procedures, which can be computationally expensive and ignore the stochastic errors incurred in the selection decisions of earlier steps. In this talk, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously, and hence enable us to construct confidence intervals for the estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on (0,∞), and have singularities at the origin in order to produce sparse solutions. Further, the penalty functions should be bounded by a constant in order to reduce bias, and should satisfy certain conditions in order to yield continuous solutions. A new algorithm is proposed for optimizing high-dimensional nonconcave penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of parametric models, such as generalized linear models and robust regression models. They can also be applied easily to nonparametric modeling using wavelets and splines, where the proposed approaches automatically select a nearly ideal sub-basis to represent unknown functions efficiently. (Based on joint work with Run-Ze Li of Pennsylvania State University.)
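
As a concrete illustration of the penalty properties described in the abstract (a singularity at the origin to induce sparsity, boundedness to reduce bias, and continuity of the resulting estimator), the sketch below implements the SCAD penalty and its closed-form thresholding rule for an orthonormal design. SCAD is the canonical penalty from Fan and Li's work on nonconcave penalized likelihood; the abstract itself does not name a specific penalty, and the function names and parameter defaults (lam=1.0, a=3.7) here are assumptions for this minimal sketch, not the speaker's implementation.

import numpy as np

def scad_penalty(theta, lam=1.0, a=3.7):
    # SCAD penalty evaluated at |theta| (illustrative sketch).
    # Singular at the origin with slope lam -> sparse solutions;
    # constant for |theta| > a*lam -> large coefficients are barely biased.
    t = np.abs(theta)
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1)),
            (a + 1) * lam**2 / 2,
        ),
    )

def scad_threshold(z, lam=1.0, a=3.7):
    # Closed-form SCAD thresholding rule for penalized least squares
    # with an orthonormal design: small |z| are set exactly to zero
    # (sparsity), moderate |z| are shrunk, large |z| are left untouched
    # (near-unbiasedness), and the mapping is continuous in z.
    z = np.asarray(z, dtype=float)
    soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
    mid = ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)
    return np.where(
        np.abs(z) <= 2 * lam,
        soft,
        np.where(np.abs(z) <= a * lam, mid, z),
    )

# Example: coefficients near zero are killed, large ones pass through.
print(scad_threshold(np.array([-5.0, -1.5, -0.5, 0.5, 1.5, 5.0]), lam=1.0))

The piecewise thresholding rule shows, in miniature, why the penalty conditions in the abstract matter: soft-thresholding near the origin yields exact zeros, while the flat tail of the penalty leaves large coefficients unshrunk, unlike a convex penalty such as the L1 (LASSO) penalty.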
