Title: Quantile Regression Under Memory Constraint
Speaker: Prof. Weidong Liu (刘卫东), Shanghai Jiao Tong University
Time: October 20, 2018, 10:00-11:00
Venue: Zhixin Building (知新楼) B-1238
Abstract:
This paper studies the inference problem in quantile regression (QR) for a large sample size $n$ but under a limited memory constraint, where the memory can only store a small batch of data of size $m$. A natural method is the naive divide-and-conquer approach, which splits data into batches of size $m$, computes the local QR estimator for each batch, and then aggregates the estimators via averaging. However, this method only works when $n=o(m^2)$ and is computationally expensive. This paper proposes a computationally efficient method, which only requires an initial QR estimator on a small batch of data and then successively refines the estimator via multiple rounds of aggregations.
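To make the two strategies described above concrete, here is a minimal Python sketch: a naive divide-and-conquer baseline that averages per-batch QR fits, followed by one refinement round that starts from a QR fit on a single batch and aggregates only small batch-level summaries. The toy data, the use of statsmodels' QuantReg, the Gaussian kernel, and the bandwidth h are illustrative assumptions; the kernel-smoothed Newton-type update is one plausible form of the "aggregation" step, not necessarily the talk's exact algorithm.

```python
# Sketch (not the talk's exact algorithm) of the two ideas in the abstract:
#   (1) naive divide-and-conquer (DC): fit QR on each batch of size m and
#       average the local estimates;
#   (2) one refinement round: start from a QR fit on a single small batch and
#       aggregate cheap batch-level summaries via a Newton-type update for the
#       QR estimating equation (my assumption about the form of the update).
import numpy as np
import statsmodels.api as sm


def local_qr(Xb, yb, tau):
    """Ordinary QR estimator on one in-memory batch (statsmodels QuantReg)."""
    return sm.QuantReg(yb, Xb).fit(q=tau).params


def dc_average(X, y, m, tau=0.5):
    """Naive DC: average the local QR estimators over batches of size m."""
    ests = [local_qr(X[i:i + m], y[i:i + m], tau) for i in range(0, len(y), m)]
    return np.mean(ests, axis=0)


def refine_once(X, y, m, beta0, tau=0.5, h=0.1):
    """One aggregation round: each batch contributes a score vector and a
    kernel-smoothed Hessian estimate; beta0 + D^{-1} * score is a standard
    one-step Newton correction for the QR estimating equation.
    The bandwidth h is a hypothetical choice for illustration."""
    n, p = X.shape
    score = np.zeros(p)
    D = np.zeros((p, p))
    for i in range(0, n, m):                      # stream over batches of size m
        Xb, yb = X[i:i + m], y[i:i + m]
        r = yb - Xb @ beta0                       # residuals at current estimate
        score += Xb.T @ (tau - (r <= 0))          # subgradient of the check loss
        w = np.exp(-0.5 * (r / h) ** 2) / (np.sqrt(2 * np.pi) * h)
        D += (Xb * w[:, None]).T @ Xb             # density-weighted design matrix
    return beta0 + np.linalg.solve(D, score)


# Toy data (hypothetical): y = X beta + heavy-tailed noise, with n >> m.
rng = np.random.default_rng(0)
n, p, m, tau = 20_000, 5, 1_000, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.arange(1, p + 1, dtype=float)
y = X @ beta + rng.standard_t(df=3, size=n)

beta_dc = dc_average(X, y, m, tau)                                  # baseline
beta_ref = refine_once(X, y, m, local_qr(X[:m], y[:m], tau), tau)   # init + 1 round
print(beta_dc, beta_ref, sep="\n")
```

In this sketch, the refinement round never holds more than one batch in memory: each pass over a batch contributes only a length-$p$ score vector and a $p \times p$ matrix, so the working memory stays at $m$ observations plus $O(p^2)$ summaries regardless of $n$.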
Theoretically, as long as $n$ grows polynomially in $m$, we establish the asymptotic normality for the obtained estimator and show that our estimator with only a few rounds of aggregations achieves the same efficiency as the QR estimator computed on all the data. Moreover, our result allows the case that the dimensionality $p$ goes to infinity. The proposed method can also be applied to address the QR problem under a distributed computing environment (e.g., in a large-scale sensor network) or for real-time streaming data.
All faculty and students are welcome to attend!