Title:

Statistical Learning by Stochastic Gradient Descent

Speaker:

Dr. Yunwen Lei (University of Birmingham, UK)

Time:

Venue:

Tencent Meeting ID: 970 373 073

Abstract:

Stochastic gradient descent (SGD) has become the workhorse behind many machine learning problems. The statistical behavior of SGD is governed by two competing factors: the optimization error, which decreases with more iterations, and the sampling error, which can grow as the iterates fit the training data more closely. In this talk, we report a generalization analysis of SGD that considers the optimization and sampling errors simultaneously. We remove several restrictive assumptions made in the literature and significantly improve the existing generalization bounds. Our results shed light on how to stop SGD early to achieve the best statistical performance.
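To make the trade-off concrete, here is a minimal sketch (not the speaker's analysis or code) of plain SGD for least-squares regression with validation-based early stopping, the kind of stopping rule the abstract alludes to. The synthetic data, step size, and patience parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.5 * rng.standard_normal(n)

# Hold out a validation split to monitor the sampling (generalization) error.
X_tr, X_val = X[:400], X[400:]
y_tr, y_val = y[:400], y[400:]

w = np.zeros(d)
best_w, best_val = w.copy(), np.inf
patience, bad_epochs = 5, 0

for epoch in range(200):
    for i in rng.permutation(len(y_tr)):          # one pass over the training data
        grad = (X_tr[i] @ w - y_tr[i]) * X_tr[i]  # stochastic gradient of the squared loss
        w -= 0.01 * grad                          # constant step size, for simplicity
    val_err = np.mean((X_val @ w - y_val) ** 2)
    if val_err < best_val:                        # keep the best iterate seen so far
        best_val, best_w, bad_epochs = val_err, w.copy(), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                # stop once validation error stalls:
            break                                 # more optimization would only overfit

print(f"stopped after epoch {epoch}; best validation MSE {best_val:.4f}")
print(f"distance of early-stopped iterate to w_true: {np.linalg.norm(best_w - w_true):.4f}")
```

Running longer keeps shrinking the training loss (optimization error) while the validation error eventually turns upward (sampling error), so the early-stopped iterate balances the two factors discussed in the abstract.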