
Seminars

Interaction-based Learning and Prediction in Big Data

  • 2015-02-10 (Tue.), 10:30 AM
  • Recreation Hall, 2F, Institute of Statistical Science
  • Prof. Shaw-Hwa Lo
  • Department of Statistics, Columbia University

Abstract

We consider a computer-intensive approach, Partition Retention (PR, 2009), based on an earlier method (Lo and Zheng, 2002), for detecting which of many potential explanatory variables have an influence on a dependent variable Y. This approach is suited to detecting influential variables in groups, where causal effects depend on the confluence of values of several variables. Guided by a measure of influence I, it has the advantage of avoiding a difficult direct analysis involving possibly thousands of variables. We next apply PR to more challenging real data applications, typically involving complex and extremely high-dimensional data. The quality of the selected variables is evaluated in two ways: first by classification error rates, then by functional relevance using external biological knowledge. We demonstrate that (1) classification error rates can be significantly reduced by considering interactions; and (2) incorporating interaction information into data analysis can be very rewarding in generating novel scientific findings. Heuristic explanations of why and when the proposed methods can lead to such dramatic (classification/predictive) gains follow. If time permits, we tackle and resolve a scientific puzzle: highly predictive variables do not necessarily appear as highly significant, thus eluding researchers who rely on significance-based methods. If prediction is the goal, we must lay aside significance as the sole selection standard.
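The influence measure I that guides Partition Retention can be sketched roughly as follows: partition the observations into cells according to the joint values of a candidate set of (discrete) explanatory variables, and sum the squared deviations of cell means of Y from the overall mean, weighted by squared cell sizes. This is a minimal sketch of one common form of the Lo-Zheng I statistic; the exact normalization and the retention/resampling steps of the full PR procedure vary across the cited papers and are omitted here.

```python
from collections import defaultdict

def i_score(x_cols, y):
    """Sketch of an influence (I) score: partition samples by the joint
    values of the selected explanatory variables, then return
    sum over cells j of n_j^2 * (ybar_j - ybar)^2.
    Normalization conventions differ across the PR literature."""
    n = len(y)
    ybar = sum(y) / n
    cells = defaultdict(list)  # joint value tuple -> responses in that cell
    for i in range(n):
        key = tuple(col[i] for col in x_cols)
        cells[key].append(y[i])
    return sum(len(v) ** 2 * (sum(v) / len(v) - ybar) ** 2
               for v in cells.values())

# Toy XOR illustration of why interactions matter: each variable alone
# carries no marginal signal, but the pair is perfectly predictive.
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
y = [0, 1, 1, 0]  # y = x1 XOR x2
print(i_score([x1], y))      # 0.0  (x1 alone: uninformative)
print(i_score([x1, x2], y))  # 1.0  (jointly influential)
```

The XOR example mirrors the puzzle mentioned in the abstract: neither variable would look significant in a marginal, one-variable-at-a-time screen, yet the pair is highly predictive, which a group-based measure like I can detect.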
