Topic: High Dimensional Statistical Learning with Bayesian Variable Selection
Type: Academic lecture
Organizer:
Speaker: Wenxin Jiang (Northwestern University, USA)
Date: September 12, 2014, 14:30
Venue: Zhixin Building B-1248
Content:
Title: High Dimensional Statistical Learning with Bayesian Variable Selection
Speaker: Wenxin Jiang (Department of Statistics, Northwestern University, USA)
Abstract: This is a theoretical study of the frequentist convergence properties of Bayesian inference using binary logistic or probit regression, when the number of explanatory variables p is possibly much larger than the number of study units n. In the popular approach of "Bayesian variable selection," one uses a prior to select a limited number of candidate variables to enter the model. We show that this approach can induce posterior estimates of the regression functions that consistently estimate the truth, provided the true regression model satisfies a "sparseness condition," which requires that most of the candidate variables have very small effects in the regression. The estimated regression functions can therefore also produce consistent classifiers that are asymptotically optimal for predicting future binary responses. Furthermore, we show that in some sparse situations the corresponding rate of convergence resembles the convergence rate of a low-dimensional setup (p << n), even when the actual setup is high dimensional with p >> n. It is therefore possible to use Bayesian variable selection to reduce the overfitting caused by the curse of dimensionality.
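The abstract does not specify which sparsity prior the talk uses, so the following is only a rough, simplified illustration of the general idea (not the speaker's method): in a p >> n logistic regression, a prior that pushes most coefficients to zero can still recover the few truly active variables. The sketch below uses MAP estimation under an i.i.d. Laplace prior, which is equivalent to L1-penalized logistic regression, fitted by proximal gradient descent on synthetic data; all function names, data sizes, and tuning constants are illustrative choices, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: p >> n with a sparse true model.
n, p = 200, 500
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -1.5, 1.5]          # only 3 truly active variables
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def map_sparse_logistic(X, y, lam=25.0, step=2e-3, iters=2000):
    """MAP estimate of logistic regression under an i.i.d. Laplace prior
    (equivalent to L1-penalized logistic regression), computed by
    proximal gradient descent (ISTA)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        # Gradient of the negative log-likelihood of logistic regression.
        grad = X.T @ (sigmoid(X @ beta) - y)
        beta = beta - step * grad
        # Soft-thresholding: proximal operator of the Laplace-prior (L1) term;
        # this is what zeroes out most coefficients, enforcing sparsity.
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
    return beta

beta_hat = map_sparse_logistic(X, y)
selected = np.flatnonzero(np.abs(beta_hat) > 1e-8)
print("number of selected variables:", len(selected))
```

Despite p = 500 candidate variables and only n = 200 observations, the sparsity-inducing prior keeps all but a handful of estimated coefficients at exactly zero, which is the mechanism by which such procedures avoid the overfitting the abstract refers to. A fully Bayesian treatment would instead sample the posterior over models (e.g., with a spike-and-slab prior), which this point-estimate sketch does not attempt.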