Topic: High-Dimensional Statistical Learning with Bayesian Variable Selection
Speaker: Wenxin Jiang (Northwestern University, USA)
Abstract: This is a theoretical study of the frequentist convergence properties of Bayesian inference using binary logistic or probit regression, when the number of explanatory variables `p' is possibly much larger than the number of study units `n'. In a popular approach called `Bayesian Variable Selection', one uses a prior to select a limited number of candidate variables to enter the model. We show that this approach can induce posterior estimates of the regression functions that consistently estimate the truth, provided the true regression model satisfies a `sparseness condition', which requires that most of the candidate variables have very small effects in the regression. The estimated regression functions can therefore also produce `consistent' classifiers that are asymptotically optimal for predicting future binary responses. Furthermore, we show that in some sparse situations the corresponding rate of convergence resembles the convergence rate in a low-dimensional setup (p << n), even though the actual setup is high-dimensional with p >> n. It is therefore possible to use Bayesian variable selection to reduce overfitting caused by the curse of dimensionality.
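To make the idea concrete, the following is a minimal, hypothetical sketch (not the speaker's actual construction) of Bayesian variable selection for logistic regression: a sparsity prior that includes each candidate variable with small probability `q`, combined with a BIC-style Laplace approximation to each model's marginal likelihood, explored by a Metropolis sampler over inclusion vectors. The dimensions are kept tiny for speed, whereas the talk's regime of interest is p >> n.

```python
import math
import random

random.seed(0)

# --- Toy sparse truth: p candidate variables, only variables 0 and 3 matter ---
n, p = 200, 8
beta_true = [0.0] * p
beta_true[0], beta_true[3] = 2.0, -1.5

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
y = [1 if random.random() < sigmoid(sum(b * x for b, x in zip(beta_true, row))) else 0
     for row in X]

def max_loglik(cols, iters=200, lr=1.0):
    """Maximized logistic log-likelihood using only the selected columns
    (plain gradient ascent; adequate for this small illustration)."""
    beta = [0.0] * len(cols)
    for _ in range(iters):
        grad = [0.0] * len(cols)
        for row, yi in zip(X, y):
            eta = sum(b * row[c] for b, c in zip(beta, cols))
            err = yi - sigmoid(eta)
            for j, c in enumerate(cols):
                grad[j] += err * row[c]
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    ll = 0.0
    for row, yi in zip(X, y):
        eta = sum(b * row[c] for b, c in zip(beta, cols))
        ll += yi * eta - math.log(1.0 + math.exp(eta))
    return ll

_cache = {}
def log_score(gamma, q=0.2):
    """Log posterior score of a model: sparsity prior (each variable enters
    independently with probability q) plus a BIC-style approximation to the
    log marginal likelihood."""
    key = tuple(gamma)
    if key not in _cache:
        cols = [j for j in range(p) if gamma[j]]
        k = len(cols)
        _cache[key] = (max_loglik(cols) - 0.5 * k * math.log(n)
                       + k * math.log(q) + (p - k) * math.log(1.0 - q))
    return _cache[key]

# --- Metropolis sampler over binary inclusion vectors gamma ---
gamma = [0] * p
counts = [0] * p
steps, burn = 400, 100
for t in range(steps):
    j = random.randrange(p)
    prop = gamma[:]
    prop[j] = 1 - prop[j]          # propose flipping one inclusion indicator
    if math.log(random.random()) < log_score(prop) - log_score(gamma):
        gamma = prop
    if t >= burn:
        for i in range(p):
            counts[i] += gamma[i]

freq = [c / (steps - burn) for c in counts]
print("posterior inclusion frequencies:", [round(f, 2) for f in freq])
```

The sparsity prior penalizes every included variable, so the posterior concentrates on small models: the two truly active variables get inclusion frequencies near one, while the near-null variables are rarely selected, which is the overfitting-reduction mechanism the abstract describes.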